Gates Calls for a Final Push to Eradicate Polio 3
Andromeda’s Once and Future Stars 6
Health Chip Gives Instant Diagnoses 8
Secondhand Television Exposure Linked to Eating Disorders 10
Perception of Our Heartbeat Influences Our Body Image 12
Tablet Splitting Is a Highly Inaccurate and Potentially Dangerous Practice 13
Solar Research Instrument 'Fills the Gap,' Views Sun's Innermost Corona 15
Using Cassava to Address Vitamin A Deficiency 17
'Natural' Asymmetry of Biological Molecules May Have Come from Space 19
Extinct Insect Completely Reconstructed in 3-D 21
Pesky Bacterial Slime Reveals Its Survival Secrets 23
What Carbon Cycle? College Students Lack Scientific Literacy, Study Finds 25
Packaging That Knows When Food Is Going Bad 26
Hidden Literary References Discovered in the Mona Lisa 27
Telescope Calibration May Help Explain Mystery of Universe's Expansion 28
Functionally Graded Shape Memory Polymers Developed 30
Household Sewage: Not Waste, but a Vast New Energy Resource 32
How to Soften a Diamond 33
Mother’s Milk Improves Physical Condition of Future Adolescents, Study Finds 35
Protective Properties of Green Tea Uncovered 37
Malfunctioning Gene Associated With Lou Gehrig's Disease Leads to Nerve-Cell Death 39
Helicopter Transport Increases Survival for Seriously Injured Patients, Study Finds 41
Male Pattern Balding May Be Due to Stem Cell Inactivation 43
Films for Façades: New Building Material Offers New Design Options 45
Freshwater Methane Release Changes Greenhouse Gas Equation 47
Surprising Flares in Crab Nebula 49
Deep Interior of Moon Resembles Earth's Core 51
Cancer in a Single Catastrophe: Chromosome Crisis Common in Cancer Causation 53
Brain training for tinnitus reverses ringing in ears 55
Ethereal quantum state stored in solid crystal 56
Scorn over claim of teleported DNA 57
Spinning cosmic dust motes set speed record 59
Fledgling space firm will use old Soviet gear 61
Most detailed image of night sky unveiled 62
Thunderstorms caught making antimatter 63
Smallest planet is nearly Earth-sized 64
Glasses-free 3D TV tries to broaden out its appeal 66
Mimic-bots learn to do as we do 68
Why social networks are sucking up more of your time 70
New Google Goggles app that solves Sudoku puzzles 71


Smart contact lenses for health and head-up displays 72
A fat tummy shrivels your brain 74
We can feed 9 billion people in 2050 75
Did magma rain on the early Earth? 76
Foxes zero in on prey via Earth's magnetic field 77
Would a placebo work for you? 78
Chess grandmasters use twice the brain 79
Mind gym: Putting meditation to the test 80
V. S. Ramachandran: Mind, metaphor and mirror neurons 84
Uncertainty principle: How evolution hedges its bets 87
Pi's nemesis: Mathematics is better with tau 91
Sozzled superconductors sizzle with speed 93
Housing 9 billion won't take techno-magic 96
Musical Chills: Why They Give Us Thrills 97
Most Distant Galaxy Cluster Identified 99
Water on Moon Originated from Comets 101
Chemical Analysis Confirms Discovery of Oldest Wine-Making Equipment 103
Brain Predicts Consequences of Eye Movements Based on What We See Next 106
Species Loss Tied to Ecosystem Collapse and Recovery 108
New Glass Stronger and Tougher Than Steel 110
Hubble Zooms in on a Space Oddity 112
NASA Radar Reveals Features on Asteroid 114
Virus Killer Gets Supercharged: Discovery Greatly Improves Common Disinfectant 115
Earth Is Twice as Dusty as in 19th Century, Research Shows 117
'Thirdhand Smoke' May Be Bigger Health Hazard Than Previously Believed 119
Polymer Membranes With Molecular-Sized Channels That Assemble Themselves 120
Eating Vegetables Gives Skin a More Healthy Glow Than the Sun, Study Shows 122
Coiled Nanowires May Hold Key to Stretchable Electronics 124
NASA Tests New Propulsion System for Robotic Lander Prototype 126
'Superstreet' Traffic Design Improves Travel Time, Safety 128
Planck's New View of the Cosmic Theater 130
New Insights Into Sun's Photosphere 132
Study Estimates Land Available for Biofuel Crops 134
NASA Image Shows La Niña-Caused Woes Down Under 136
International Space Station Begins New Era of Utilization 137
Close-Knit Pairs of Supermassive Black Holes Discovered in Merging Galaxies 139
Quantum Quirk Contained 141
Biomedical Breakthrough: Blood Vessels for Lab-Grown Tissues 143
New Measure Trumps HDL Levels in Protecting Against Heart Disease 145
Cosmology Standard Candle Not So Standard After All 147
Scientific Evidence Supports Effectiveness of Chinese Drug for Cataracts 149
No More Mrs. Nice Mom 150
The War on Logic 152
How Did Economists Get It So Wrong? 154
Pushing Petals Up and Down Park Ave. 164
The Grass Is Fake, but the Splendor Real 167
An Artful Environmental Impact Statement 169
A Pink-Ribbon Race, Years Long 171
Tests Show Promise for Early Diagnosis of Alzheimer’s 175
Close Look at Orthotics Raises a Welter of Doubts 177
When Self-Knowledge Is Only the Beginning 180
Heavy Doses of DNA Data, With Few Side Effects 183
Uganda: Male Circumcision May Help Protect Sexual Partners Against Cervical Cancer 186
Lifting a Veil of Fear to See a Few Benefits of Fever 187
Recalling a Fallen Star’s Legacy in High-Energy Particle Physics 190
Bending and Stretching Classroom Lessons to Make Math Inspire 193
A New Econ Core 196
'Academically Adrift' 198


Gates Calls for a Final Push to Eradicate Polio

By DONALD G. McNEIL Jr.

Altaf Qadri/Associated Press

TAKING UP THE FIGHT A polio awareness campaign poster was posted on the side of a small shop in a village about 110 miles from Patna, India. Bill Gates has emerged as the loudest voice for the eradication of global polio.

On Monday, in a town house that once belonged to polio’s most famous victim, Franklin D. Roosevelt, Bill Gates made an appeal for one more big push to wipe out world polio.

Although that battle began in 1985 and Mr. Gates started making regular donations to it only in 2005, he has emerged in the last two years both as one of the biggest donors — he has now given $1.3 billion, more than the amount raised over 25 years by Rotary International — and as the loudest voice for eradication.

As new outbreaks create new setbacks each year, he has given ever more money, not only for research but for the grinding work on the ground: paying millions of vaccinators $2 or $3 stipends to get pink polio drops into the mouths of children in villages, slums, markets and train stations.

He also journeys to remote Indian and Nigerian villages to be photographed giving the drops himself. Though he lacks Angelina Jolie’s pneumatic allure, his lingering “world’s richest man” cologne is just as aphrodisiacal to TV cameras.

He also uses that celebrity to press political leaders. Rich Gulf nations have been criticized for giving little for a disease that now chiefly affects Muslim children; last week in Abu Dhabi, United Arab Emirates, Mr. Gates and Crown Prince Sheik Mohammed bin Zayed al-Nahyan jointly donated $50 million each to vaccinate children in Pakistan and Afghanistan. In Davos, Switzerland, Mr. Gates and the British prime minister, David Cameron, announced that Britain would double its $30 million donation. Last month, when the Pakistani president, Asif Ali Zardari, went to Washington for the diplomat Richard C. Holbrooke’s funeral, Mr. Gates offered him $65 million to initiate a new polio drive. Twelve days later, publicly thanking him, Mr. Zardari did so.

However, even as he presses forward, Mr. Gates faces a hard question from some eradication experts and bioethicists: Is it right to keep trying?

Although caseloads are down more than 99 percent since the campaign began in 1985, getting rid of the last 1 percent has been like trying to squeeze Jell-O to death. As the vaccination fist closes in one country, the virus bursts out in another.

In 1985, Rotary raised $120 million to do the job as its year 2000 “gift to the world.”

The effort has now cost $9 billion, and each year consumes another $1 billion.

By contrast, the 14-year drive to wipe out smallpox, according to Dr. Donald A. Henderson, the former World Health Organization officer who began it, cost only $500 million in today’s dollars.

Dr. Henderson has argued so outspokenly that polio cannot be eradicated that he said in an interview last week: “I’m one of certain people that the W.H.O. doesn’t invite to its experts’ meetings anymore.”

Recently, Richard Horton, editor of The Lancet, the influential British medical journal, said via Twitter that “Bill Gates’s obsession with polio is distorting priorities in other critical BMGF areas. Global health does not depend on polio eradication.” (The initials are for the Bill & Melinda Gates Foundation.)

And Arthur L. Caplan, director of the University of Pennsylvania’s bioethics center, who himself spent nine months in a hospital with polio as a child, said in an interview, “We ought to admit that the best we can achieve is control.”

Those arguments infuriate Mr. Gates. “These cynics should do a real paper that says how many kids they’re really talking about,” he said in an interview. “If you don’t keep up the pressure on polio, you’re accepting 100,000 to 200,000 crippled or dead children a year.”

Right now, there are fewer than 2,000. The skeptics acknowledge that they are arguing for accepting more paralysis and death as the price of shifting that $1 billion to vaccines and other measures that prevent millions of deaths from pneumonia, diarrhea, measles, meningitis and malaria.

“And think of all the money that would be saved,” Mr. Gates went on, turning sarcastic. “It’d be like 5 percent of the dog food market in the United States.”

(Americans spend about $18 billion a year on pet food, according to the American Pet Products Association.)

Both he and the skeptics agree that polio is far harder to beat than smallpox was.

One injection stops smallpox, but in countries with open sewers, children need polio drops up to 10 times.

Only one victim in every 200 shows symptoms, so when there are 500 paralysis cases, as in the recent Congo Republic outbreak, there are 100,000 more silent carriers.
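A rough check of that arithmetic, taking the article's figures at face value (about one case of paralysis per 200 infections):

\[ 500 \ \text{paralysis cases} \times 200 \ \tfrac{\text{infections}}{\text{paralysis case}} \approx 100{,}000 \ \text{infections}, \]

nearly all of them silent.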


Other causes of paralysis, from food poisoning to Epstein-Barr virus, complicate surveillance.

Also, in roughly one of every two million vaccinations, the live vaccine strain can mutate and paralyze the child getting it. And many poor families whose children are dying of other diseases are fed up with polio drives.

“Fighting polio has always had an emotional factor — the children in braces, the March of Dimes posters,” Dr. Henderson said. “But it doesn’t kill as many as measles. It’s not in the top 20.”

Also, the effort is hurt by persistent rumors that it is a Western plot to sterilize Muslim girls. The Afghan Taliban, under orders from their chief, Mullah Muhammad Omar, tolerate vaccination teams, but the Pakistani Taliban have killed some vaccinators.

Victory may have been closest in 2006, when only four countries that had never beaten polio were left: Nigeria, India, Pakistan and Afghanistan.

Those four have still not conquered it, although India and Nigeria are doing much better. Now four more — Angola, Chad, the Democratic Republic of Congo and Sudan — have had yearlong outbreaks, and another 13 have had recent ones: eight in Africa, along with Nepal, Kazakhstan, Tajikistan, Turkmenistan and Russia.

And polio migrates. In 2005, it briefly hit both an Amish community in Minnesota and Indonesia, the world’s fourth most populous country. Both outbreaks were stopped by vaccination.

Proponents of eradication argue that it would be terrible to waste the $9 billion already spent, and a new analysis concluded that eradication, if successful, would save up to $50 billion by 2035.

The United States is still committed.

“If we fail, we’ll be consigned to continuing expensive control measures for the indefinite future,” said Dr. Thomas R. Frieden, director of the Centers for Disease Control and Prevention, which leads the country’s effort. Dr. Ezekiel J. Emanuel, chief bioethicist for the National Institutes of Health, who is seen as a powerful influence within the Obama administration, said he had “not seen enough data to have a definitive opinion.”

“But my intuition is that eradication is probably worth it,” he added. “As with smallpox, the last mile is tough, but we’ve gotten huge benefits from it. But without the data, I defer to people who’ve really studied the issue, like Bill Gates.”

The W.H.O. recently created a panel of nine scientists meant to be independent of all sides in the debate to monitor progress through 2012 and make recommendations. Dr. David L. Heymann, a former W.H.O. chief of polio eradication, said he was still “very optimistic” that eradication could be achieved.

But if there is another big setback, he said — if, for example, cases surge again in India’s hot season — he might favor moving back the eradication goal again to spend more on fixing health systems until vaccination of infants for all diseases is better.

“When routine coverage is good, it’s no problem to get rid of polio,” he said.

Asked about that, Mr. Gates said, “We’re already doing that.”

http://www.nytimes.com/2011/02/01/health/01polio.html?_r=1&nl=health&emc=healthupdateema2


Andromeda’s Once and Future Stars

The Andromeda Galaxy is our nearest large galactic neighbour, containing several hundred billion stars. Combined, these images show all stages of the stellar life cycle. The infrared image from Herschel shows areas of cool dust that trace reservoirs of gas in which forming stars are embedded. The optical image shows adult stars. XMM-Newton’s X-ray image shows the violent endpoints of stellar evolution, in which individual stars explode or pairs of stars pull each other to pieces. (Credit: Infrared: ESA/Herschel/PACS/SPIRE/J. Fritz, U. Gent; X-ray: ESA/XMM-Newton/EPIC/W. Pietsch, MPE; optical: R. Gendler)

ScienceDaily (Jan. 7, 2011) — Two ESA observatories have combined forces to show the Andromeda Galaxy in a new light. Herschel sees rings of star formation in this, the most detailed image of the Andromeda Galaxy ever taken at infrared wavelengths, and XMM-Newton shows dying stars shining X-rays into space.

During Christmas 2010, ESA's Herschel and XMM-Newton space observatories targeted the nearest large spiral galaxy, M31. This is a galaxy similar to our own Milky Way -- both contain several hundred billion stars. This is the most detailed far-infrared image of the Andromeda Galaxy ever taken and shows clearly that more stars are on their way.

Sensitive to far-infrared light, Herschel sees clouds of cool dust and gas where stars can form. Inside these clouds are many dusty cocoons containing forming stars, each star pulling itself together in a slow gravitational process that can last for hundreds of millions of years. Once a star reaches a high enough density, it will begin to shine at optical wavelengths. It will emerge from its birth cloud and become visible to ordinary telescopes.

Many galaxies are spiral in shape but Andromeda is interesting because it shows a large ring of dust about 75 000 light-years across encircling the centre of the galaxy. Some astronomers speculate that this dust ring may have been formed in a recent collision with another galaxy. This new Herschel image reveals yet more intricate details, with at least five concentric rings of star-forming dust visible.


Superimposed on the infrared image is an X-ray view taken almost simultaneously by ESA's XMM-Newton observatory. Whereas the infrared shows the beginnings of star formation, X-rays usually show the endpoints of stellar evolution. XMM-Newton highlights hundreds of X-ray sources within Andromeda, many of them clustered around the centre, where the stars are naturally found to be more crowded together. Some of these are shockwaves and debris rolling through space from exploded stars, others are pairs of stars locked in a gravitational fight to the death. In these deadly embraces, one star has already died and is pulling gas from its still-living companion. As the gas falls through space, it heats up and gives off X-rays. The living star will eventually be greatly depleted, having much of its mass torn from it by the stronger gravity of its denser partner. As the stellar corpse wraps itself in this stolen gas, it could explode. Both the infrared and X-ray images show information that is impossible to collect from the ground because these wavelengths are absorbed by Earth's atmosphere. The twinkling starlight seen from Earth is indeed a beautiful sight but in reality contains less than half the story. Visible light shows us the adult stars, whereas infrared gives us the youngsters and X-rays show those in their death throes. To chart the lives of stars, we need to see it all and that is where Herschel and XMM-Newton contribute so much.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by European Space Agency. http://www.sciencedaily.com/releases/2011/01/110105071149.htm


Health Chip Gives Instant Diagnoses

SINTEF scientist Liv Furuberg believes that the chip will not be expensive, in spite of all the advanced technology it uses. (Credit: Image courtesy of SINTEF)

ScienceDaily (Jan. 7, 2011) — Soon, your family doctor will no longer have to send blood or cancer cell samples to the laboratory. A little chip will give her test results on the spot.

Today, a blood sample whose protein content, genes and so on are to be read needs to be submitted to a series of complex processes, such as centrifugation, heat treatment, mixing with enzymes and concentration of disease markers. This means that samples are sent to central laboratories for analysis, and weeks may pass before the results are returned. The same thing happens when women are checked for cervical cancer by taking a cell scrape from the cervix. The samples are then sent off and studied under the microscope. Diagnostic error rates can be high when abnormal cell appearance is determined by even experienced eyes.

Automated

The EU's MicroActive project has developed an integrated system, based on microtechnology and biotechnology, that will enable a number of conditions to be diagnosed automatically in the doctor's own office. The new "health chip" looks like a credit card and contains a complete laboratory. The EU project has used cells taken to diagnose cervical cancer as a case study, but in principle the chip can check out a number of different diseases caused by bacteria or viruses, as well as various types of cancer. SINTEF has coordinated the project, whose other members include universities, hospitals and research institutes from Germany and Ireland. The Norwegian NorChip company had the idea for the chip, and has carried out full-scale tests during the project.

Advanced "credit card"

The chip is engraved with a number of very narrow channels that contain chemicals and enzymes in the correct proportions for each individual analysis. When the patient's sample has been drawn into the channels, these reagents are mixed.


"The health chip can analyse your blood or cells for eight different diseases," say Liv Furuberg and Michal Mielnik of SINTEF. "What these diseases have in common is that they are identified by means of special biomarkers that are found in the blood sample. These "labels" may be proteins that either ought or ought not to be there, DNA fragments or enzymes. "This little chip is capable of carrying out the same processes as a large laboratory, and not only does it perform them faster, but the results are also far more accurate. The doctor simply inserts the card into a little machine, adds a few drop of the sample taken from the patient via a tube in the cardholder, and out come the results." Scientists at SINTEF's MiNaLaB have developed a number of techniques for interpreting the results when the biomarkers have been found. For example, they can read them off in a spectrophotometer, an optical instrument in which the RNA molecules in different markers emit specific fluorescent signals. "SINTEF's lab-on-a-chip projects have shown that it is possible to perform rapid, straightforward diagnostic analyses with the aid of microchips, and we are now working on several different types of chip, including a protein analysis chip for acute inflammations," says Liv Furuberg. Mass production NorChip has just started a new two-year EU project that aims to industrialise the diagnostic chip to the mass- production stage while the company will also evaluate market potential and industrial partners. Chief scientist Frank Karlsen in NorChip says that the ways in which the chip can be used can be extended to enable patients themselves to take samples at home, and he expects that such special sampling systems will be ready for testing within a few years.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by SINTEF, via AlphaGalileo. http://www.sciencedaily.com/releases/2010/12/101213071117.htm


Secondhand Television Exposure Linked to Eating Disorders

A house in a Fijian village. (Credit: Anne Becker)

ScienceDaily (Jan. 7, 2011) — For parents wanting to reduce the negative influence of TV on their children, the first step is normally to switch off the television set. But a new study suggests that might not be enough. It turns out indirect media exposure, i.e., having friends who watch a lot of TV, might be even more damaging to a teenager's body image.

Researchers from Harvard Medical School's Department of Global Health and Social Medicine examined the link between media consumption and eating disorders among adolescent girls in Fiji. What they found was surprising. The study's subjects did not even need to have a television at home to see raised risk levels of eating disorder symptoms. In fact, by far the biggest factor for eating disorders was how many of a subject's friends and schoolmates had access to TV.

By contrast, researchers found that direct forms of exposure, like personal or parental viewing, did not have an independent impact when factors like urban location, body shape and other influences were taken into account. It appeared that changing attitudes within a group that had been exposed to television were a more powerful factor than actually watching the programs themselves. In fact, higher peer media exposure was linked to a 60 percent increase in a girl's odds of having a high level of eating disorder symptoms, independently of her own viewing.

Lead author Anne Becker, vice chair of the Department of Global Health and Social Medicine at Harvard Medical School, said this was the first study to attempt to quantify the role of social networks in spreading the negative consequences of media consumption on eating disorders. "Our findings suggest that social network exposure is not just a minor influence on eating pathology here, but rather, IS the exposure of concern," she said.
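A note on the statistic above, since odds and probabilities are easily conflated: a "60 percent increase in odds" corresponds to an odds ratio of about 1.6,

\[ \mathrm{OR} = \frac{p_{\text{high peer exposure}}/(1 - p_{\text{high peer exposure}})}{p_{\text{low peer exposure}}/(1 - p_{\text{low peer exposure}})} \approx 1.6, \]

where p is the probability of a high level of eating disorder symptoms; this is close to a 60 percent increase in the probability itself only when that baseline probability is small.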


"If you are a parent and you are concerned about limiting cultural exposure, it simply isn't going to be enough to switch off the TV. If you are going to think about interventions, it would have to be at a community or peer-based level." Becker hopes the paper will encourage debate about responsible programming and the regulation of media content to prevent children from secondhand exposure. "Up until now, it has been very difficult to get people who produce media as entertainment to come to the table and think about how they might ensure that their products are not harmful to children," she said. This is Becker's second study of media's impact in Fiji, which is an ideal location for broadcast media research because of the recent arrival of television, in the 1990s, and the significant regional variations in exposure to TV, the Internet and print media. Some remote areas in the recent study still did not have electricity, cell phone reception, television or the Internet when the data were collected in 2007. Her first study found a rise in eating disorder symptoms among adolescent girls following the introduction of broadcast television to the island nation in 1995. What makes Fiji a particularly interesting case is that traditional culture prizes a robust body shape, in sharp contrast to the image presented by Western television shows such as Beverly Hills 90210, Seinfeld and Melrose Place, which were quite popular in Fiji when television debuted there in the 1990s. Girls would see actresses as role models, says Becker, and began noting how a slender body shape was often accompanied by success in those shows. This perception appears to have been one of the factors leading to a rise in eating pathology among the Fijian teenagers. But until now, it was not known how much of this effect came from an individual's social network. Nicholas Christakis, professor of medical sociology in the department of health care policy at Harvard Medical School, has studied the spread of health problems through social networks. "It shouldn't be that surprising to us, even though it is intriguing, that the indirect effects of media are greater," Christakis said. "Most people aren't paying attention to the media, but they are paying attention to what their friends say about what's in the media. It's a kind of filtration process that takes place by virtue of our social networks." Becker says that although the study focused on Fijian schoolgirls, remote from the US, it warrants concern and further investigation of the health impact on other populations. This research was funded by the National Institute of Mental Health, Harvard University and the Radcliffe Institute.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Harvard Medical School. The original article was written by Kit Chellel.

Journal Reference:

1. A. E. Becker, K. E. Fay, J. Agnew-Blais, A. N. Khan, R. H. Striegel-Moore, S. E. Gilman. Social network media exposure and adolescent eating pathology in Fiji. The British Journal of Psychiatry, 2011; 198 (1): 43 DOI: 10.1192/bjp.bp.110.078675

http://www.sciencedaily.com/releases/2011/01/110106144743.htm


Perception of Our Heartbeat Influences Our Body Image

How we experience the internal state of our body may also influence how we perceive our body from the outside, new research suggests. (Credit: iStockphoto)

ScienceDaily (Jan. 7, 2011) — A new study, led by Dr Manos Tsakiris from Royal Holloway, University of London, suggests that the way we experience the internal state of our body may also influence how we perceive our body from the outside, as for example in the mirror. The research appears in the Proceedings of the Royal Society B.

Psychologists measured how good people are at feeling their body from within by asking them to count their heartbeats over a few minutes. They then measured how good people are at perceiving their own body-image from the outside by using a procedure that tricks them into feeling that a fake, rubber hand is their own hand. Looking at a rubber hand being touched at the same time as one's own unseen hand creates the illusion that the rubber hand is part of one's body. The less accurate people were in monitoring their heartbeat, the more they were influenced by the illusion. The study shows for the first time that there may be a strong link between how we experience our body from within and how we perceive it from the outside.

Dr Manos Tsakiris from the Department of Psychology at Royal Holloway says: "We perceive our own bodies in many different ways. We can look at our bodies, feel touch on our bodies, and also feel our body from within, such as when we experience our hearts racing or butterflies in our stomachs. It seems that a stable perception of the body from the outside, what is known as 'body image,' is partly based on our ability to accurately perceive our body from within, such as our heartbeat."

The study, which was funded by the Economic and Social Research Council, UK, is important as it may shed new light on pathologies of body-perception; exploring how certain people feel about or perceive the internal states of their body may help us understand why they perceive their body-image in distorted ways, such as those who suffer from anorexia or body dysmorphia.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Royal Holloway, University of London, via AlphaGalileo.

Journal Reference:

1. Manos Tsakiris, Ana Tajadura-Jiménez, Marcello Costantini. Just a heartbeat away from one's body: interoceptive sensitivity predicts malleability of body-representations. Proceedings of the Royal Society B, 2011; DOI: 10.1098/rspb.2010.2547

http://www.sciencedaily.com/releases/2011/01/110105071151.htm


Tablet Splitting Is a Highly Inaccurate and Potentially Dangerous Practice, Says Drug Study

ScienceDaily (Jan. 7, 2011) — Medical experts have issued a warning about the common practice of tablet splitting, after a study found that nearly a third of the split fragments deviated from recommended dosages by 15 per cent or more.

Their study, published in the January issue of the Journal of Advanced Nursing, points out that the practice could have serious clinical consequences for tablets that have a narrow margin between therapeutic and toxic doses.

And they are calling on manufacturers to produce greater dose options and liquid alternatives to make the practice unnecessary.

Researchers from the Faculty of Pharmaceutical Sciences at Ghent University, Belgium, asked five volunteers to split eight different-sized tablets using three techniques commonly used in nursing homes. They found that 31 per cent of the tablet fragments deviated from their theoretical weight by more than 15 per cent and that 14 per cent deviated by more than 25 per cent. Even the most accurate method produced error margins of 21 per cent and eight per cent respectively.
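For readers following the numbers, the deviation reported here is the ordinary percent difference between a fragment's measured weight and its theoretical weight, which (assuming the study's design) is the intact tablet's weight divided by the number of fragments:

\[ \text{deviation} = \frac{\lvert m_{\text{fragment}} - m_{\text{theoretical}} \rvert}{m_{\text{theoretical}}} \times 100\%, \qquad m_{\text{theoretical}} = \frac{m_{\text{tablet}}}{\text{number of fragments}}. \]

A half-tablet weighing 230 mg cut from a 400 mg tablet, for example, deviates by 15 per cent.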

"Tablet-splitting is widespread in all healthcare sectors and a primary care study in Germany found that just under a quarter of all drugs were split" says study lead Dr Charlotte Verrue.

"It is done for a number of reasons: to increase dose flexibility, to make tablets easier to swallow and to save money for both patients and healthcare providers. However, the split tablets are often unequal sizes and a substantial amount of the tablet can be lost during splitting."

The five researchers comprised a pharmacy student, researcher and professor, an administrative worker and a laboratory technician, ranging from 21 to 55 years of age. With the exception of the technician, none of the other study participants had tablet-splitting experience. The authors believe this replicated nursing home conditions where splitting is not always performed by professional nurses.

Between them they split tablets into 3,600 separate quarters or halves using a splitting device, scissors and a kitchen knife. The eight different tablets were different shapes and sizes, three were unscored, three had one score line and the others had two.

The drugs were prescribed for a range of health conditions, including Parkinson's, congestive heart failure, thrombosis and arthritis.

After splitting, each fragment was weighed to see how much they deviated from the theoretical weight. Key results included:

• Using a splitting device was the most accurate method. It still produced a 15 to 25 per cent error margin in 13 per cent of cases, but this was lower than the 22 per cent for scissors and the 17 per cent for the knife.
• The splitting device produced a deviation of more than 25 per cent in eight per cent of cases, compared with 19 per cent for the scissors and 17 per cent for the knife.
• Some drugs were much easier to split accurately than others. The easiest to split produced an overall error margin (15 per cent deviation or more) of two per cent and the most difficult tablets produced an error margin of 19 per cent.

"Tablet splitting is daily practice in nursing homes" says Dr Verrue. "However, not all formulations are suitable for splitting and, even when they are, large dose deviations or weight losses can occur. This could

13 Infoteca’s E-Journal No. 144 February 2011

Sistema de Infotecas Centrales Universidad Autónoma de Coahuila

have serious clinical consequences for drugs where there is a small difference between therapeutic and toxic doses. "Based on our results, we recommend use of a splitting device when splitting cannot be avoided, for example when the prescribed dose is not commercially available or where there is no alternative formulation, such as a liquid. "Staff who are responsible for splitting tablets should receive training to enable them to split as accurately as possible. They should also be made aware of the possible clinical consequences of dose deviations. "We would also like to see manufacturers introduce a wider range of tablet doses or liquid formulations so that tablet splitting becomes increasingly unnecessary."

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Wiley - Blackwell, via AlphaGalileo.

Journal Reference:

1. Charlotte Verrue, Els Mehuys, Koen Boussery, Jean-Paul Remon, Mirko Petrovic. Tablet-splitting: a common yet not so innocent practice. Journal of Advanced Nursing, 2011; 67 (1): 26 DOI: 10.1111/j.1365-2648.2010.05477.x

http://www.sciencedaily.com/releases/2011/01/110105071143.htm


Solar Research Instrument 'Fills the Gap,' Views Sun's Innermost Corona

This photograph of the Sun, taken by the Atmospheric Imaging Assembly (AIA) instrument on NASA's Solar Dynamics Observatory, shows how image processing techniques developed at SAO can reveal the faint, inner corona. At the Sun's limb, prominences larger than the Earth arc into space. Bright active regions like the one on the Sun's face at lower center are often the source of huge eruptions known as coronal mass ejections. (Credit: NASA/LMSAL/SAO)

ScienceDaily (Jan. 7, 2011) — During a total eclipse of the Sun, skywatchers are awed by the shimmering corona -- a faint glow that surrounds the Sun like gossamer flower petals. This outer layer of the Sun's atmosphere is, paradoxically, hotter than the Sun's surface, but so tenuous that its light is overwhelmed by the much brighter solar disk. The corona becomes visible only when the Sun is blocked, which happens for just a few minutes during an eclipse.

Now, an instrument on board NASA's Solar Dynamics Observatory (SDO), developed by Smithsonian scientists, is giving unprecedented views of the innermost corona 24 hours a day, 7 days a week. "We can follow the corona all the way down to the Sun's surface," said Leon Golub of the Harvard-Smithsonian Center for Astrophysics (CfA).

Previously, solar astronomers could observe the corona by physically blocking the solar disk with a coronagraph, much like holding your hand in front of your face while driving into the setting Sun. However, a coronagraph also blocks the area immediately surrounding the Sun, leaving only the outer corona visible. The Atmospheric Imaging Assembly (AIA) instrument on SDO can "fill" this gap, allowing astronomers to study the corona all the way down to the Sun's surface. The resulting images highlight the ever-changing connections between gas captured by the Sun's magnetic field and gas escaping into interplanetary space.

The Sun's magnetic field molds and shapes the corona. Hot solar plasma streams outward in vast loops larger than Earth before plunging back onto the Sun's surface. Some of the loops expand and stretch bigger and bigger until they break, belching plasma outward.


"The AIA solar images, with better-than-HD quality views, show magnetic structures and dynamics that we've never seen before on the Sun," said CfA astronomer Steven Cranmer. "This is a whole new area of study that's just beginning." Cranmer and CfA colleague Alec Engell developed a computer program for processing the AIA images above the Sun's edge. These processed images imitate the blocking-out of the Sun that occurs during a total solar eclipse, revealing the highly dynamic nature of the inner corona. They will be used to study the initial eruption phase of coronal mass ejections (CMEs) as they leave the Sun and to test theories of solar wind acceleration based on magnetic reconnection. SDO is the first mission and crown jewel in a fleet of NASA missions to study our sun. The mission is the cornerstone of a NASA science program called Living with a Star, the goal of which is to develop the scientific understanding necessary to address those aspects of the sun-Earth system that directly affect our lives and society. Goddard Space Flight Center built, operates, and manages the SDO spacecraft for NASA's Science Mission Directorate in Washington.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Harvard-Smithsonian Center for Astrophysics.

http://www.sciencedaily.com/releases/2011/01/110104114313.htm


Using Cassava to Address Vitamin A Deficiency

White cassava roots (top) are low in micronutrients, whereas yellow-rooted plants can have twenty times as much provitamin A. (Credit: International Center for Tropical Agriculture (CIAT))

ScienceDaily (Jan. 7, 2011) — The roots of cassava (Manihot esculenta) serve as the primary source of carbohydrates in the diets of people in many arid regions of the world, including more than 250 million people in sub-Saharan Africa. Unfortunately, the roots of commercial cassava cultivars are quite low in micronutrients, and micronutrient deficiencies are widespread in these regions. In addition to programs designed to deliver vitamin supplements, there has been considerable effort aimed at biofortification; that is, increasing the amounts of available micronutrients in staple crops such as cassava.

An article published in The Plant Cell this week describes the results of a collaborative effort led by Professor Peter Beyer from the University of Freiburg in Germany, together with researchers at the International Center for Tropical Agriculture (CIAT) in Colombia. These researchers studied a naturally arising variant of cassava with yellow roots in order to understand the synthesis of provitamin A carotenoids, dietary precursors of vitamin A. Beyer was also co-creator of Golden Rice, a biofortified crop which provides precursors of vitamin A not usually present in the rice that people eat.

In this work, the scientists compared different cassava cultivars with white, cream, or yellow roots -- more yellow corresponding to more carotenoids -- in order to determine the underlying causes of the higher carotenoid levels found in the rare yellow-rooted cassava cultivar. They tracked the difference down to a single amino acid change in the enzyme phytoene synthase, which functions in the biochemical pathway that produces carotenoids. The authors went on to show that the analogous change in phytoene synthases from other species also results in increased carotenoid synthesis, suggesting that the research could have relevance to a number of different crop plants. Furthermore, they were able to turn a white-rooted cassava cultivar into a yellow-rooted plant that accumulates beta-carotene (provitamin A) using a transgenic approach that increased the enzyme phytoene synthase in the root.

This work beautifully combines genetics with biochemistry and molecular biology to deepen our understanding of carotenoid biosynthesis. "It paves the way for using transgenic or conventional breeding methods to generate commercial cassava cultivars containing high levels of provitamin A carotenoids, by the exchange of a single amino acid already present in cassava," says Beyer. Thus, it has the potential to be a big step in the battle against vitamin A deficiency, which is estimated to affect approximately one third of the world's preschool-age children.


This research was supported by the HarvestPlus research consortium, which received a grant from the Bill & Melinda Gates Foundation.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by American Society of Plant Biologists, via EurekAlert!, a service of AAAS.

Journal Reference:

1. R. Welsch, J. Arango, C. Bar, B. Salazar, S. Al-Babili, J. Beltran, P. Chavarriaga, H. Ceballos, J. Tohme, P. Beyer. Provitamin A Accumulation in Cassava (Manihot esculenta) Roots Driven by a Single Nucleotide Polymorphism in a Phytoene Synthase Gene. THE PLANT CELL ONLINE, 2010; DOI: 10.1105/tpc.110.077560

http://www.sciencedaily.com/releases/2010/10/101004130109.htm


Origin of Life on Earth: 'Natural' Asymmetry of Biological Molecules May Have Come from Space

The circularly polarized ultraviolet light (UV-CPL) beam on the DESIRS beamline at the SOLEIL synchrotron facility, shown as it passes through a gas filter filled with xenon. (Credit: © Thomas Lannes)

ScienceDaily (Jan. 7, 2011) — Certain molecules do exist in two forms which are symmetrical mirror images of each other: they are known as chiral molecules. On Earth, the chiral molecules of life, especially amino acids and sugars, exist in only one form, either left-handed or right-handed. Why did life initially choose one form over the other? A consortium bringing together several French teams led by Louis d'Hendecourt (1), CNRS senior researcher at the Institut d'astrophysique spatiale (Université Paris-Sud 11 / CNRS), has for the first time obtained an excess of left-handed molecules (and then an excess of right-handed ones) under conditions that reproduce those found in interstellar space. This result therefore supports the hypothesis that the asymmetry of biological molecules on Earth has a cosmic origin. The researchers also suggest that the solar nebula formed in a region of massive stars. This work has just been published online on the web site of The Astrophysical Journal Letters. The experiment was carried out at the SOLEIL synchrotron facility in collaboration with the Laboratoire de chimie des molécules bioactives et des arômes (Université de Nice/CNRS) and with the support of CNES.

Chiral molecules are molecules that can exist in two forms (enantiomers) which are symmetrical mirror images of each other, one left-handed and the other right-handed. For instance, our hands are chiral since they come in two forms, the left hand and the right hand, that are symmetrical with their mirror image but not superimposable on it. Biological molecules are mostly chiral, with some forms being favored over others. For instance, the amino acids that make up proteins only exist in one of their two enantiomeric forms, the left-handed (L) form. On the other hand, the sugars present in the DNA of living organisms are solely right-handed (D). This property that organic molecules have of existing in living organisms in only one of their two structural forms is called homochirality.

What is the origin of such asymmetry in biological material? There are two competing hypotheses. One postulates that life originated from a mixture containing 50% of one enantiomer and 50% of the other (known as a racemic mixture), and that homochirality progressively emerged during the course of evolution. The other hypothesis suggests that asymmetry leading to homochirality preceded the appearance of life and was of cosmic origin. This is supported by the detection of L excesses in certain amino acids extracted from primitive meteorites. According to this scenario, these amino acids were synthesized non-racemically in interstellar space and delivered to Earth by cometary grains and meteorites.


To lend more weight to this hypothesis, the researchers first reproduced analogs of interstellar and cometary ices in the laboratory (2). The novel aspect of their experiment was that, using the DESIRS beamline at the SOLEIL synchrotron facility, the ices were subjected to circularly polarized ultraviolet radiation (UV-CPL) (3), which is supposed to mimic the conditions encountered in some space environments. When the ices were warmed up, an organic residue was produced. A detailed analysis of this mixture revealed that it contained a significant enantiomeric excess in one chiral amino acid, alanine. The excess, which was over 1.3%, is comparable to that measured in primitive meteorites. The researchers thus succeeded in producing, under interstellar conditions, asymmetrical molecules of life from a mixture that did not contain chiral substances. This is the first time that a scenario explaining the origin of this asymmetry has been demonstrated with an experiment that reproduces an entirely natural synthesis.

This result reinforces the hypothesis that the origin of homochirality is prebiotic and cosmic, in other words genuinely interstellar. According to this scenario, the delivery of extraterrestrial organic material containing an enantiomeric excess synthesized by an asymmetrical astrophysical process (in this case, UV-CPL radiation) is the cause of the asymmetry of life's molecules on Earth. This material may even have formed outside the solar system. Finally, the solar nebula may have formed in regions of massive star formation. In such regions, infrared radiation circularly polarized in the same direction has been observed. These findings imply that the selection of a single enantiomer for the molecules of life observed on Earth is not the result of chance but rather of a deterministic physical mechanism.

Notes:
(1) 2003 CNRS Silver Medal.
(2) In the 1980s, Louis d'Hendecourt developed a technique to generate interstellar ice analogs in the laboratory.
(3) The Orion nebula produces circularly polarized light at levels of 17% in the infrared. It is calculated that it also radiates in the ultraviolet, radiation which is able to break the (strong) covalent bonds between the atoms of ice molecules.
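For reference, the enantiomeric excess reported above is the standard measure of chiral imbalance; for the two forms of alanine it is

\[ ee = \frac{[\mathrm{L}] - [\mathrm{D}]}{[\mathrm{L}] + [\mathrm{D}]} \times 100\%, \]

so an excess of just over 1.3% means the L form accounts for slightly more than 50.65% of the alanine in the residue.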

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by CNRS (Délégation Paris Michel-Ange), via AlphaGalileo.

Journal Reference:

1. Pierre de Marcellus, Cornelia Meinert, Michel Nuevo, Jean-Jacques Filippi, Grégoire Danger, Dominique Deboffle, Laurent Nahon, Louis Le Sergeant d'Hendecourt, Uwe J. Meierhenrich. Non-racemic amino acid production by ultraviolet irradiation of achiral interstellar ice analogs with circularly polarized light. The Astrophysical Journal, 2011; 727 (2): L27 DOI: 10.1088/2041-8205/727/2/L27

http://www.sciencedaily.com/releases/2011/01/110107145634.htm


Extinct Insect Completely Reconstructed in 3-D

Anaglyph image in 3-D: 3-D reconstruction of the fossilized twisted-wing parasite Mengea tertiaria (Strepsiptera) from approx. 42 million years ago (colon green, musculature red, nervous system yellow). Insect researchers at the University of Jena (Germany) have reconstructed a fossil conserved in a piece of amber with the help of micro-computer tomography. (Credit: Hans Pohl/University Jena)

ScienceDaily (Jan. 7, 2011) — Its stay on this planet was actually meant to be a very short one. Male twisted-wing parasites (Strepsiptera) usually have a life span of only a few hours. However, a specimen of Mengea tertiaria, about the size of an aphid, was accidentally preserved for 'eternity': during its wedding flight about 42 million years ago it was caught in a drop of tree resin and subsequently almost perfectly conserved in a piece of amber. PD Dr. Hans Pohl of Friedrich Schiller University Jena (Germany) calls this "a very exceptional stroke of luck."

Together with colleagues from Jena and New York, the insect researcher at the Institute of Systematic Zoology and Evolutionary Biology with Phyletic Museum has now 'resurrected' the fossil insect: using high-resolution micro-computer tomography (micro-CT), the anatomy of an extinct insect was completely reconstructed three-dimensionally for the first time.

The researchers did not only get a detailed and realistic impression of the external form of the extinct insect. "The micro-CT also allows us to look into the interior," Dr. Pohl stresses. Whereas the inner organs were destroyed during the process of petrification under high pressure, internal soft tissues are occasionally largely preserved in amber fossils. About 80 percent of the inner tissues of the fossilized twisted-wing parasite were exceptionally well preserved, as revealed by the recent evaluation of the micro-CT data. Musculature, nervous system, sense organs, digestive and reproductive systems were displayed to the Jena scientists like an open book. With 3-D glasses the insect can be viewed in three dimensions. Only a few mouse clicks are needed to turn it around or to produce virtual sections.

"This leads to important insights in the phylogeny and evolution of these insects," Professor Dr. Rolf Beutel of the University of Jena explains. Until now, the placement of Strepsiptera in the phylogenetic tree of insects has remained an enigma. "The females of these strange animals are almost always endoparasitic, i.e., live inside their hosts," Beutel continues. However, according to Beutel, the females of the analyzed species must have been free-living. This conclusion is based on the simple shape of the external genitalia of the Mengea male. The males of species with females parasitizing in winged insects always have an anchor-shaped penis. "This firmly connects the males with the females, which are embedded in fast-moving hosts such as plant hoppers or bees." This specific docking mechanism is clearly missing in Mengea. Moreover, the Jena research team could confirm the position of the extinct Mengea within the evolutionary tree. "These are ancestral predecessors of strepsipteran species existing today," says Dr. Pohl.

Finding females and copulating was the only mission of the males during their extremely short life span. "This is clearly reflected by their anatomy," says the insect researcher. Highly efficient antennal sense organs and 'raspberry eyes' help to track the female. The flight apparatus and the genitalia were particularly well developed. In contrast to this, the mouth parts and the digestive tract are distinctly reduced compared to other insects. "The males were not able to ingest food, at least not in solid form," Professor Beutel concludes. It is possible that the intestine was filled with air, which improves the flying capacity of these tiny insects.

The Jena researchers will scan more amber insects in the near future. "This method has an enormous potential," Dr. Pohl says confidently. It not only allows a very detailed study of external and internal structures but is also non-destructive, in contrast to other techniques. Both aspects combined will guarantee immense progress in the investigation of fossil and extant insects. Like Mengea, other fossils will be preserved for critical investigation and re-evaluation by future scientists.

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Friedrich-Schiller-Universitaet Jena. http://www.sciencedaily.com/releases/2011/01/110107094854.htm


Pesky Bacterial Slime Reveals Its Survival Secrets

Slimy bacterial coatings known as biofilms (Bacillus subtilis colony superimposed at center) exhibit an unmatched ability to repel a wide range of liquids and even vapors. An electron microscope image shows the biofilm surface's resilient meshwork made from proteins and polysaccharides and assembled into a multiscale, hierarchical structure. (Credit: Courtesy of the laboratory of Joanna Aizenberg, Harvard School of Engineering and Applied Sciences.) ScienceDaily (Jan. 7, 2011) — By rethinking what happens on the surface of things, engineers at Harvard University have discovered that Bacillus subtilis biofilm colonies exhibit an unmatched ability to repel a wide range of liquids -- and even vapors. Centimeters across yet only hundreds of microns thick, such slimy bacterial coatings cling to the surfaces of everything from pipes to teeth and are notoriously resistant to antimicrobial agents. The researchers now suspect they know the secret to a biofilm's resiliency. Published in the January 5th early edition of the Proceedings of the National Academy of Sciences (PNAS), the study holds promise for both creating bio-inspired non-wetting materials and developing better ways to eliminate harmful biofilms that can clog pipes, contaminate food production and water supply systems, and lead to infections. "By looking at biofilms from a materials perspective rather than a cellular or biochemical one, we discovered that they have a remarkable ability to resist wetting to an extent never seen before in nature," says lead author Alex Epstein, a graduate student at the Harvard School of Engineering and Applied Sciences (SEAS). "In fact the biofilm literally resisted our initial efforts to study it." The finding came about serendipitously, as the original intention of the researchers was to study the structure of the biofilm. To image the interior of the biofilm, the team had to soak it with liquids such as ethanol and acetone, which normally spread and seep easily into a surface. "But to our surprise, it was impossible. The liquids kept beading up on the surface and wouldn't infiltrate the colonies," says Epstein, who is a member of the laboratory of Joanna Aizenberg, Amy Smith Berylson Professor of Materials Science at SEAS; Susan S. and Kenneth L. Wallach Professor at the Radcliffe Institute; and a core member of the Wyss Institute for Biologically Inspired Engineering at Harvard. As the Aizenberg lab studies materials and wetting, the engineers immediately recognized the significance of what they were observing. It turns out that biofilm has an unprecedented liquid-repellent surface, thereby revealing a critical clue to what may be responsible for its broad antimicrobial resistance.


Nature offers numerous examples of water-resistant surfaces, such as the lotus leaf, a longstanding inspiration for creating synthetic materials. Until now, however, no model natural systems have been found for broadly repellent materials. While such surfaces can be manufactured, the top-down process is costly, labor intensive, and reliant on toxic chemicals and brittle structures. A biofilm, however, is living proof that only the simplest and most natural of components are required -- namely, a resilient meshwork made from proteins and polysaccharides assembled into a multi-scale, hierarchical structure. At the same time, the finding offers a completely new perspective on how biofilms are immune to so many different types of biocides. Even the most sophisticated biochemical strategy will be ineffective if a biocide cannot enter the slime to reach the bacteria. In short, the antimicrobial activity of alcohols and other solvents becomes compromised by the strongly non-wetting behavior at clinically relevant concentrations. The team expects that their newfound knowledge will help alert researchers to the need to consider this requirement when designing ways to destroy harmful biofilms. "Their notorious resistance to a broad range of biocide chemistries has remained a mysterious and pressing problem despite two decades of biofilm research," says Aizenberg, a pioneer in the field of biomimicry. "By looking at it as a macroscopic problem, we found an explanation that was just slightly out of view: antimicrobials can be ineffective simply by being a non-wetting liquid that cannot penetrate into the biofilm and access subsurface cells." Aizenberg and her colleagues speculate that such strong liquid repellence may have evolved in response to the bacteria's natural soil environment where water can leach heavy metals and other toxins. Moreover, the property may underlie the recent success of the use of biofilm as an eco-friendly form of biocontrol for agriculture, protecting plant roots from water-borne pathogens. Looking ahead, the Harvard team plans to investigate precisely how the biochemical components of biofilms give rise to their exceptional resistance and to test the properties of other bacterial species. "The applications are exciting, but we are equally thrilled that our findings have revealed a previously undocumented phenomenon about biofilms," says Aizenberg. "The research should be an inspiring reminder that we have only scratched the surface of how things really work." Just as with biofilm, she adds, "It has been a challenge to get deep into the core of the problem." Epstein and Aizenberg's co-authors included Boaz Pokroy, a former postdoctoral fellow in Aizenberg's group and now a faculty member at Technion (the Israel Institute of Technology), and Agnese Seminara, a postdoctoral fellow at SEAS and participant in the Kavli Institute for Bionanoscience and Technology at Harvard University. The research was funded by the BASF Advanced Research Initiative at Harvard University.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Harvard University, via EurekAlert!, a service of AAAS.

Journal Reference:

1. A. K. Epstein, B. Pokroy, A. Seminara, J. Aizenberg. Bacterial biofilm shows persistent resistance to liquid wetting and gas penetration. Proceedings of the National Academy of Sciences, 2010; DOI: 10.1073/pnas.1011033108

http://www.sciencedaily.com/releases/2011/01/110107101900.htm

What Carbon Cycle? College Students Lack Scientific Literacy, Study Finds

ScienceDaily (Jan. 7, 2011) — Most college students in the United States do not grasp the scientific basis of the carbon cycle -- an essential skill in understanding the causes and consequences of climate change, according to research published in the January issue of BioScience. The study, whose authors include several current and former researchers from Michigan State University, calls for a new way of teaching -- and, ultimately, comprehending -- fundamental scientific principles such as the conservation of matter. "Improving students' understanding of these biological principles could make them better prepared to deal with important environmental issues such as global climate change," said Charles "Andy" Anderson, MSU professor of teacher education and co-investigator on the project. The study was led by Laurel Hartley, assistant professor at the University of Colorado Denver who started the work as a postdoctoral researcher at MSU. Co-researchers include Anderson, Brook Wilke, Jonathon Schramm and Joyce Parker, all from MSU, and Charlene D'Avanzo from Hampshire College. The researchers assessed the fundamental science knowledge of more than 500 students at 13 U.S. colleges in courses ranging from introductory biology to advanced ecology. Most students did not truly understand the processes that transform carbon. They failed to apply principles such as the conservation of matter, which holds that when something changes chemically or physically, the amount of matter at the end of the process needs to equal the amount at the beginning. (Matter doesn't magically appear or disappear.) Students trying to explain weight loss, for example, could not trace matter once it leaves the body; instead they used informal reasoning based on their personal experiences (such as the fat "melted away" or was "burned off"). In reality, the atoms in fat molecules leave the body (mostly through breathing) and enter the atmosphere as carbon dioxide and water. Most students also incorrectly believe plants obtain their mass from the soil rather than primarily from carbon dioxide in the atmosphere. "When you see a tree growing," Anderson said, "it's a lot easier to believe that tree is somehow coming out of the soil rather than the scientific reality that it's coming out of the air." The researchers say biology textbooks and high-school and college science instructors need to do a better job of teaching the fundamentals -- particularly how matter transforms from gaseous to solid states and vice versa. It won't be easy, Anderson said, because students' beliefs about the carbon cycle are deeply ingrained (such as the misconception that plants get most of their nutrients from the soil). Instructors should help students understand that the use of such "everyday, informal reasoning" runs counter to true scientific literacy, he said. The implications are great for a generation of citizens who will grapple with complicated environmental issues such as clean energy and carbon sequestration more than any generation in history, Anderson said. "One of the things I'm interested in," he said, "is students' understanding of environmental problems. And probably the most important environmental problem is global climate change. And that's attributable to a buildup of carbon dioxide in the atmosphere. And understanding where that carbon dioxide is coming from and what you can do about it fundamentally involves understanding the scientific carbon cycle."
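As a concrete illustration of the mass-balance reasoning the article describes (this sketch is not part of the study itself), the short calculation below checks that atoms -- and therefore mass -- are conserved when a representative fat molecule is fully oxidized to carbon dioxide and water. Triolein (C57H104O6) and its combustion equation are assumptions chosen only for illustration.

```python
# Minimal sketch (not from the study): conservation of matter for fat oxidation.
# Assumed illustrative reaction: C57H104O6 + 80 O2 -> 57 CO2 + 52 H2O (triolein)

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molar_mass(formula):
    """Molar mass of a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

triolein = {"C": 57, "H": 104, "O": 6}
o2       = {"O": 2}
co2      = {"C": 1, "O": 2}
h2o      = {"H": 2, "O": 1}

reactants = molar_mass(triolein) + 80 * molar_mass(o2)
products  = 57 * molar_mass(co2) + 52 * molar_mass(h2o)

print(f"mass of reactants: {reactants:.1f} g/mol")
print(f"mass of products:  {products:.1f} g/mol")   # identical: matter is conserved

# The fat's own atoms leave the body, mostly through the lungs:
fat = molar_mass(triolein)
print(f"fraction of the fat's mass leaving as carbon (in CO2): "
      f"{57 * ATOMIC_MASS['C'] / fat:.0%}")
print(f"fraction leaving as hydrogen (in H2O):                 "
      f"{104 * ATOMIC_MASS['H'] / fat:.0%}")
```

Running the sketch shows the reactant and product masses are identical, the kind of accounting the students in the study failed to apply.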

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Michigan State University.

http://www.sciencedaily.com/releases/2011/01/110107094904.htm

Packaging That Knows When Food Is Going Bad

Professor Andrew Mills with food packaging incorporating the intelligent plastic indicator. (Credit: Image courtesy of University of Strathclyde) ScienceDaily (Jan. 7, 2011) — Packaging that alerts consumers to food which is starting to go bad is being developed by researchers at the University of Strathclyde in Glasgow. The project aims to improve food safety and cut unnecessary food waste by developing a new type of indicator, made of 'intelligent plastics' which give a warning, by changing colour, of when food is about to lose its freshness because it has broken or damaged packaging, has exceeded its 'best before' date or has been poorly refrigerated. An estimated 8.3 million tonnes of household food -- most of which could have been eaten -- is wasted in the UK each year. The indicator is to be used as part of a form of food packaging known as modified atmosphere packaging, which keeps food in specially-created conditions that prolong its shelf life. Freshness indicators typically take the form of labels inserted in a package but these come at a significant cost. Strathclyde researchers are looking to create a new type of indicator which is an integral part of the packaging, and so is far less expensive. The project has received £325,000 in support from the Scottish Enterprise Proof of Concept Programme. Professor Andrew Mills, who is currently leading the Strathclyde project, said: "At the moment, we throw out far too much food, which is environmentally and economically damaging. "Modified atmosphere packaging is being used increasingly to contain the growth of organisms which spoil food but the costs of the labels currently used with it are substantial. We are aiming to eliminate this cost with new plastics for the packaging industry. "We hope that this will reduce the risk of people eating food which is no longer fit for consumption and help prevent unnecessary waste of food. We also hope it will have a direct and positive impact on the meat and seafood industries." By giving a clear and unambiguous sign that food is beginning to perish, the indicators being developed at Strathclyde could resolve potential confusion about the different significances of 'best before' dates and 'sell-by' dates. They could also help to highlight the need for food to be stored in refrigerators which are properly sealed. Lisa Branter, acting head of the Proof of Concept Programme, said: "Through the Proof of Concept Programme, we are creating the opportunities to build high value, commercially viable spin-out companies from ground-breaking research ideas. What we want to achieve are more companies of scale created as a result of the Programme, and this project is a great example of an idea which offers real business opportunities."

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Strathclyde.

http://www.sciencedaily.com/releases/2011/01/110107083739.htm

Hidden Literary References Discovered in the Mona Lisa

It was common for Renaissance artists such as Leonardo da Vinci to take passages from literature and incorporate them into a painting. (Credit: Image courtesy of Queen's University) ScienceDaily (Jan. 6, 2011) — Queen's University Classics professor emeritus Ross Kilpatrick believes the Leonardo da Vinci masterpiece, the Mona Lisa, incorporates images inspired by the Roman poet Horace and Florentine poet Petrarch. The technique of taking a passage from literature and incorporating it into a work of art is known as 'invention' and was used by many Renaissance artists. "The composition of the Mona Lisa is striking. Why does Leonardo have an attractive woman sitting on a balcony, while in the background there is an entirely different world that is vast and barren?" says Dr. Kilpatrick. "What is the artist trying to say?" Dr. Kilpatrick believes Leonardo is alluding to Horace's Ode 1.22 (Integer vitae) and two sonnets by Petrarch (Canzoniere CXLV, CLIX). Like the Mona Lisa, those three poems celebrate a devotion to a smiling young woman, with vows to love and follow her anywhere in the world, from damp mountains to arid deserts. The regions mentioned by Horace and Petrarch are similar to the background of the Mona Lisa. Both poets were widely read at the time Leonardo painted the picture in the early 1500s. Leonardo was familiar with the works of Petrarch and Horace, and the bridge seen in the background of the Mona Lisa has been identified as the same one from Petrarch's hometown of Arezzo. "The Mona Lisa was made at a time when great literature was well known. It was quoted, referenced and celebrated," says Dr. Kilpatrick. Dr. Kilpatrick has been looking at literary references in art for the past 20 years. He has recently found references to the mythical wedding of Ariadne and the Greek god Dionysus in Gustav Klimt's famous painting The Kiss. Dr. Kilpatrick's Mona Lisa findings have now been published in the Italian journal MEDICEA.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Queen's University.

http://www.sciencedaily.com/releases/2011/01/110106153123.htm

Telescope Calibration May Help Explain Mystery of Universe's Expansion

NIST expertise helped calibrate the 1.4 billion pixels in the camera of the Pan-STARRS telescope in Hawaii, whose observations may reveal details about the expansion of the universe. (Credit: Rob Ratkowski, Copyright PS1SC) ScienceDaily (Jan. 6, 2011) — Is the expansion of the universe accelerating for some unknown reason? This is one of the mysteries plaguing astrophysics, and somewhere in distant galaxies are yet-unseen supernovae that may hold the key. Now, thanks to a telescope calibrated by scientists from the National Institute of Standards and Technology (NIST), Harvard University and the University of Hawaii, astrophysicists can be more certain of one day obtaining an accurate answer. The NIST scientists traveled to the summit of Haleakala volcano in Hawaii to fine-tune the operation of billions of light-collecting pixels in the Pan-STARRS telescope, which scans the heavens for Type Ia supernovae. These dying stars always shine with the same luminosity as other Type Ia supernovae, making them useful to observers as "standard candles" for judging distance in the universe. Any apparent shift in the supernova's spectrum gives a measure of how the universe has expanded (or contracted) as the light traveled from the supernova to Earth. Because Type Ia's are valuable as signposts, astrophysicists want to be sure that when they observe one of these faraway stellar cataclysms, they are getting a clear and accurate picture -- particularly important given the current mystery over why the rate of expansion of the universe appears to be increasing. For that, they need a telescope that will return consistent information about supernovae regardless of which of the roughly 1,400,000,000 pixels of its collector spots it. "That's where we came in," says NIST's John Woodward. "We specialize in measurement, and they needed to calibrate the telescope in a way that has never been done before." Ordinary calibrations involve a telescope's performance at many light wavelengths simultaneously, but Pan-STARRS needed to be calibrated at many individual wavelengths between 400 and 1,000 nanometers. For the job, Woodward and his colleagues used a special laser whose wavelength can be tuned to any value in that range, and spent three days testing the telescope's huge 1.4 gigapixel camera -- the largest in the world, Woodward says. "Pan-STARRS will scan the same areas of the sky repeatedly over many months," Woodward says. "It was designed to look for near-Earth objects like asteroids, and it also pulls double duty as a supernova hunter. But for both jobs, observers need to be sure they can usefully compare what they see from one image to the next."

Woodward says that because this is one of the first-ever such calibrations of a telescope, it is unclear just how much effect the team's work will have, and part of their future work will be determining how much they have reduced the uncertainties in Pan-STARRS's performance. They will use this information to calibrate a much larger telescope -- the Large Synoptic Survey Telescope, planned for construction in Chile.
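The basic idea of the measurement described above -- comparing the camera's response at each laser wavelength against a detector whose absolute response is already known -- can be summarized in a few lines. The sketch below is only a schematic illustration of that ratio measurement with made-up numbers (the wavelength grid, throughput curve, power level and noise are all hypothetical); it is not the NIST or Pan-STARRS analysis code.

```python
# Schematic sketch of a per-wavelength throughput calibration (illustrative only).
# At each tunable-laser wavelength, the telescope camera's counts are compared with
# the light level measured by a calibrated reference photodiode.
import numpy as np

rng = np.random.default_rng(0)

wavelengths_nm = np.arange(400, 1001, 25)            # scan 400-1000 nm, as in the article
true_throughput = np.exp(-((wavelengths_nm - 650) / 400.0) ** 2)  # hypothetical shape

# Hypothetical measurements: the reference photodiode reports absolute power (W),
# the camera reports counts proportional to power times the unknown throughput.
photodiode_power_w = 1e-6 * np.ones_like(wavelengths_nm, dtype=float)
camera_counts = 1e9 * true_throughput * photodiode_power_w
camera_counts *= 1.0 + 0.01 * rng.standard_normal(camera_counts.size)  # 1% noise

# Relative throughput = camera response per unit of reference-measured power,
# normalized to its peak so images taken at different wavelengths can be compared.
relative_throughput = camera_counts / photodiode_power_w
relative_throughput /= relative_throughput.max()

for wl, t in zip(wavelengths_nm[::4], relative_throughput[::4]):
    print(f"{wl:4d} nm  relative throughput {t:5.3f}")
```

The same ratio-against-a-reference logic would apply pixel by pixel across the camera; the sketch collapses that to a single response per wavelength for clarity.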

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by National Institute of Standards and Technology (NIST).

Journal Reference:

1. C.W. Stubbs, P. Doherty, C. Cramer, G. Narayan, Y.J. Brown, K.R. Lykke, J.T. Woodward and J.L. Tonry. Precise throughput determination of the Pan-STARRS telescope and the gigapixel imager using a calibrated silicon photodiode and a tunable laser: Initial results. Astrophysical Journal Supplement, Dec. 2010, Pages 376-388

http://www.sciencedaily.com/releases/2011/01/110106145305.htm

Functionally Graded Shape Memory Polymers Developed

SMPs are a class of "smart" materials that can switch between two shapes, from a fixed (temporary) shape to a predetermined permanent shape. (Credit: Image courtesy of Syracuse University) ScienceDaily (Jan. 6, 2011) — A team led by Patrick T. Mather, director of the Syracuse Biomaterials Institute (SBI) and Milton and Ann Stevenson professor of biomedical and chemical engineering in Syracuse University's L.C. Smith College of Engineering and Computer Science (LCS), has succeeded in applying the concept of functionally graded materials (FGMs) to shape memory polymers (SMPs). SMPs are a class of "smart" materials that can switch between two shapes, from a fixed (temporary) shape to a predetermined permanent shape. Shape memory polymers function as actuators: a heated article is first formed into a temporary shape and cooled; then, on a second stimulus (i.e., heat), the article springs back to its original shape. To date, SMPs have been limited to two-way and three-way shape configurations. Mather has successfully built a process where sections of one shape memory polymer independently react to different temperature stimuli. This work has been highlighted on the cover of the January 2011 issue of Soft Matter. Functionally graded materials are defined as synthetic materials where the composition, microstructure and other properties differ along sections of the material. The goal of Mather's research was to apply this theory to SMPs and create a material that could be fixed and recovered in one section without impacting the response of the other sections. Mather created a temperature gradient plate by applying heat at one end and using a cooling unit at the other end. The actual temperature gradient was verified by measuring the temperature at different positions along the plate. The SMP was cured on this plate to set the different transition temperatures. Mather first tested the graded SMP by using micro-indentation on the surface and then heating the polymer. When heated, each indentation recovered to the original smooth surface as its transition temperature was reached along the surface. The second test involved cutting the SMP and bending back the cut sections. This SMP was placed on a Peltier plate that uniformly heated the material. It was observed that as the plate warmed, each "finger" of the cut sheet independently recovered back to its unbent shape as the temperature of the plate reached its transition temperature. "We are very excited about this new approach to preparing shape memory polymers, which should enable new devices with complex mechanical articulations," says Mather. "The project demonstrated how enthusiastic and persistent undergraduate researchers could contribute substantively, even in the throes of their busy course schedules." There are numerous application opportunities for Mather's functionally graded SMPs, from low-cost temperature labels that could measure temperatures in areas not accessible by conventional methods or not amenable to continuous monitoring, to labels that indirectly indicate sterilization completion, to incorporation into product packaging (for the shipping industry or food storage) to indicate the maximum temperature to which a product has been exposed. The LCS team of researchers led by Mather included graduate student Xiaofan Luo and undergraduate student Andrew DiOrio.
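The graded behavior described above can be captured in a toy model: treat the cured sheet as a row of sections whose transition temperatures rise along the gradient, and report which sections have sprung back at a given plate temperature. All numbers in the sketch (section count, temperature range) are hypothetical, not values from the paper.

```python
# Toy model of a functionally graded shape memory polymer (illustrative numbers only).
# Each section of the sheet recovers its permanent shape once the plate temperature
# reaches that section's transition temperature, which varies along the gradient.
import numpy as np

n_sections = 10
# Hypothetical linear gradient of transition temperatures along the sheet (deg C).
transition_temps = np.linspace(35.0, 80.0, n_sections)

def recovered_sections(plate_temp_c):
    """Boolean mask: True where a section has sprung back to its permanent shape."""
    return plate_temp_c >= transition_temps

for plate_temp in (30.0, 50.0, 70.0, 90.0):
    mask = recovered_sections(plate_temp)
    state = "".join("R" if r else "-" for r in mask)   # R = recovered, - = still fixed
    print(f"plate at {plate_temp:4.1f} C  sections: {state}  "
          f"({mask.sum()}/{n_sections} recovered)")
```

This mirrors the "finger" experiment: as the uniformly heated plate warms, sections recover one after another in the order set by the gradient during curing.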

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Syracuse University.

Journal Reference:

1. Andrew M. DiOrio, Xiaofan Luo, Kyung Min Lee, Patrick T. Mather. A functionally graded shape memory polymer. Soft Matter, 2011; 7 (1): 68 DOI: 10.1039/c0sm00487a

http://www.sciencedaily.com/releases/2011/01/110105140405.htm

Household Sewage: Not Waste, but a Vast New Energy Resource

In a finding that gives new meaning to the adage, "waste not, want not," scientists are reporting that household sewage has far more potential as an alternative energy source than previously thought. They say the discovery, which increases the estimated potential energy in wastewater by almost 20 percent, could spur efforts to extract methane, hydrogen and other fuels from this vast and, as yet, untapped resource. (Credit: iStockphoto) ScienceDaily (Jan. 6, 2011) — In a finding that gives new meaning to the adage, "waste not, want not," scientists are reporting that household sewage has far more potential as an alternative energy source than previously thought. They say the discovery, which increases the estimated potential energy in wastewater by almost 20 percent, could spur efforts to extract methane, hydrogen and other fuels from this vast and, as yet, untapped resource. Their report appears in ACS' journal Environmental Science & Technology. Elizabeth S. Heidrich and colleagues note that sewage treatment plants in the United States use about 1.5 percent of the nation's electrical energy to treat 12.5 trillion gallons of wastewater a year. Instead of just processing and dumping this water, they suggest that in the future treatment facilities could convert its organic molecules into fuels, transforming their work from an energy drain to an energy source. Based on their research, they estimate that one gallon of wastewater contains enough energy to power a 100-watt light bulb for five minutes. Only one other study had been done on wastewater's energy potential, and Heidrich thought that the results were too low because some energy-rich compounds were lost to evaporation. In the new study, the scientists freeze-dried wastewater to conserve more of its energy-rich compounds. Using a standard device to measure energy content, they found that the wastewater they collected from a water treatment plant in Northeast England contained nearly 20 per cent more than reported previously. The authors acknowledge funding from the Engineering and Physical Sciences Research Council, the School of Chemical Engineering and Advanced Materials, Newcastle University, and Northumbrian Water Limited.
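The figure quoted above -- one gallon powering a 100-watt bulb for five minutes -- can be turned into a rough back-of-the-envelope estimate. The sketch below does only that arithmetic with the numbers given in the article; it says nothing about how much of this chemical energy could actually be recovered in practice.

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the article.
GALLON_L = 3.785                      # liters per US gallon

energy_per_gallon_j = 100 * 5 * 60    # 100 W for five minutes = 30,000 J
energy_per_liter_kj = energy_per_gallon_j / GALLON_L / 1000

us_wastewater_gal_per_year = 12.5e12  # 12.5 trillion gallons treated per year
total_energy_j = us_wastewater_gal_per_year * energy_per_gallon_j
total_energy_twh = total_energy_j / 3.6e15   # 1 TWh = 3.6e15 J

print(f"chemical energy density: ~{energy_per_liter_kj:.1f} kJ per liter")
print(f"energy in all treated US wastewater: ~{total_energy_twh:.0f} TWh per year")
# Note: this is the energy *contained* in the wastewater, not what a treatment
# plant could realistically extract as methane, hydrogen or electricity.
```

The result, on the order of 100 TWh per year of chemical energy, helps explain why the authors see treatment plants as a potential energy source rather than only an energy drain.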

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by American Chemical Society.

Journal Reference:

1. E. S. Heidrich, T. P. Curtis, J. Dolfing. Determination of the Internal Chemical Energy of Wastewater. Environmental Science & Technology, 2010; DOI: 10.1021/es103058w

http://www.sciencedaily.com/releases/2011/01/110105121131.htm

How to Soften a Diamond

Material removal mechanism during diamond polishing: a sharp-edged diamond particle “peels off” a dust particle from the glass-like phase at the surface of the diamond. At the same time, oxygen from the air reacts with the carbon chains at the surface to form carbon dioxide. (Credit: Image courtesy of Fraunhofer-Gesellschaft) ScienceDaily (Jan. 6, 2011) — After hundreds of years, researchers at the Fraunhofer IWM in Freiburg have managed to decode the atomic mechanism behind diamond grinding. Diamond is the hardest material in the world, and yet it can not only be used to cut other materials, but can itself be machined. The first diamonds were cut over 600 years ago, and the same technique is still used to transform precious stones into exquisite jewelry and unrivaled industrial tools. Dr. Lars Pastewka's and Prof. Michael Moseler's team at the Fraunhofer Institute for Mechanics of Materials IWM in Freiburg/Germany can now reveal the secret of why it is that diamonds can be machined. The team published its findings in the current online issue of Nature Materials. This work represents major progress in tribology -- the study of friction and wear. Despite its great significance for industry, the scientific basics of tribology are poorly understood. Diamonds have been ground by craftsmen for hundreds of years using cast iron wheels studded with fine diamond particles turning at around 30 meters per second at the outer rim. A highly tuned sense of sound and feeling enables an experienced diamond grinder to hold the rough diamond at just the right angle to achieve a smooth and polished surface. The fact that diamonds react directionally has been known for a long time, says Lars Pastewka. The physical phenomenon is known as anisotropy. The carbon atoms in the diamond lattice form lattice planes, some of which are easier to polish than others, depending on the angle at which the diamond is held. For hundreds of years, researchers have been looking for a logical way of explaining this empirical phenomenon, and have so far been unsuccessful. Equally, no one has been able to explain why it is possible that the hardest material in the world can be machined. The scientists in Freiburg have answered both these questions with the help of a newly developed calculation method. Michael Moseler explains the method in layman's terms: "The moment a diamond is ground, it is no longer a diamond." Due to the high-speed friction between the rough diamond and the diamond particles in the cast iron wheel, a completely different "glass-like carbon phase" is created on the surface of the precious stone in a mechanochemical process. The speed at which this material phase appears depends on the crystal orientation of the rough diamond. "This is where anisotropy comes in," explains Moseler. The new material on the surface of the diamond, adds Moseler, is then "peeled off" in two ways: the ploughing effect of the sharp-edged diamond particles in the wheel repeatedly scratches off tiny carbon dust particles from the surface -- this would not be possible in the original diamond state, which is too hard and in which the bond forces would be too great. The second, equally important impingement on the normally impenetrably hard crystal surface is due to oxygen (O) in the air. The O2 molecules bond with carbon atoms (C) within the unstable, long carbon chains that have formed on the surface of the glassy phase to produce the atmospheric gas CO2, carbon dioxide. And how was it possible to determine when and which atoms would detach from the crystalline surface? "We looked extremely closely at the quantum mechanics of how the bonds between atoms at the surface of the rough diamond break. We had to analyze the force field between the atoms in detail," explains Lars Pastewka. If one understands these forces well enough, one can precisely describe -- and model -- how to make and break bonds. "This provided the basis for investigations into the dynamics of the atoms at the friction surface between a diamond particle on the wheel and the rough diamond itself," adds Pastewka. He and his colleague Moseler have calculated the paths of around 10,000 diamond atoms and followed them on screen. Their calculations paid off: their model is able to explain all the processes involved in the dusty and long misunderstood method of diamond grinding. The newly developed model is not only a milestone in the field of diamond research: "It also proves that friction and wear processes can be described precisely with modern material simulation methods ranging from the atomic level to macroscopic objects," emphasizes Prof. Peter Gumbsch, director of the institute. He considers this just one example of the many questions on wear that industry needs answers to. These questions will be addressed in future by the Fraunhofer IWM within the newly founded MicroTribology Centre µTC under the motto "make tribology predictable."
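The Fraunhofer simulations used quantum-mechanically derived bond-order force fields for carbon; the sketch below is only a generic illustration of the molecular-dynamics bookkeeping such work relies on (positions, forces, velocity-Verlet time stepping), using a simple Lennard-Jones pair potential and made-up reduced units. It is emphatically not the team's method or potential.

```python
# Generic molecular-dynamics time stepping (velocity Verlet), illustrative only.
# A simple Lennard-Jones pair potential with made-up reduced units stands in here
# just to show how atomic trajectories are integrated; the Fraunhofer work used
# quantum-mechanically derived carbon bond-order potentials instead.
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small cluster of atoms (reduced units)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            # magnitude of -dU/dr for U = 4*eps*((sigma/r)^12 - (sigma/r)^6)
            f_mag = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
            forces[i] += f_mag * rij / r
            forces[j] -= f_mag * rij / r
    return forces

def velocity_verlet(pos, vel, dt=0.005, steps=200, mass=1.0):
    """Integrate Newton's equations of motion; returns final positions and velocities."""
    f = lj_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt ** 2
        f_new = lj_forces(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# Tiny hypothetical cluster of atoms near their equilibrium spacing.
positions = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.55, 0.95, 0.0]])
velocities = np.zeros_like(positions)
positions, velocities = velocity_verlet(positions, velocities)
print("final positions:\n", positions)
```

In a production study the force model, atom count (around 10,000 in the work described above) and the treatment of bond making and breaking are where the real physics lives; the integration loop itself looks much like this.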

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Fraunhofer-Gesellschaft.

Journal Reference:

1. Lars Pastewka, Stefan Moser, Peter Gumbsch, Michael Moseler. Anisotropic mechanical amorphization drives wear in diamond. Nature Materials, 2010; DOI: 10.1038/nmat2902

http://www.sciencedaily.com/releases/2010/11/101129111742.htm

Mother’s Milk Improves Physical Condition of Future Adolescents, Study Finds

Mother breastfeeding a newborn baby. (Credit: iStockphoto/Goldmund Lukic) ScienceDaily (Jan. 6, 2011) — Breastfeeding newborn babies has many advantages for them in both the short and the long term. A study has now confirmed a benefit that had not been researched until now: adolescents who were breastfed at birth have stronger leg muscles than those who received artificial milk. Enrique García Artero, the principal author of the study and a researcher at the University of Granada, pointed out: "Our objective was to analyse the relationship between the duration of breastfeeding babies and their physical condition in adolescence." "The results suggest further beneficial effects and provide support for breastfeeding as superior to any other type of feeding." The authors asked the parents of 2,567 adolescents about the type of feeding their children received at birth and the time this lasted. The adolescents also carried out physical tests in order to evaluate several abilities such as aerobic capacities and their muscular strength. The paper, which was published in the Journal of Nutrition, shows that the adolescents who were breastfed as babies had stronger leg muscles than those who were not breastfed. Moreover, muscular leg strength was greater in those who had been breastfed for a longer period of time. This type of feeding (exclusively or in combination with other types of food) is associated with a better performance in horizontal jumping by boys and girls regardless of morphological factors such as fat mass, height of the adolescent or the amount of muscle. Adolescents who were breastfed from three to five months, or for more than six months, had half the risk of low performance in the jump exercise when compared with those who had never been breastfed. García Artero stressed that, "Until now, no studies have examined the association between breastfeeding and future muscular aptitude." "However, our results concur with observations that other neonatal factors, such as weight at birth, are positively related to better muscular condition during adolescence." What importance does breastfeeding have? "If all children were exclusively breastfed from birth, it would be possible to save approximately 1.5 million lives." This was stated by UNICEF, which points out that breastfeeding is the "perfect feed" -- exclusive during the first six months of life and continued, alongside other foods, for two years or more.

As regards the newborn, the advantages in the first years of life include immunological protection against allergies, skin diseases, obesity and diabetes, as well as promoting the growth, development and intelligence of the baby. The benefits also substantially involve the mother: reduction of post-birth haemorrhage, anaemia, maternal mortality and the risk of breast and ovarian cancer, and a strengthening of the affective link between mother and child. "And let's not forget the money saved by not buying other types of milk and baby bottles," says García Artero.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Plataforma SINC, via AlphaGalileo.

Journal Reference:

1. E. G. Artero, F. B. Ortega, V. Espana-Romero, I. Labayen, I. Huybrechts, A. Papadaki, G. Rodriguez, B. Mauro, K. Widhalm, M. Kersting, Y. Manios, D. Molnar, L. A. Moreno, M. Sjostrom, F. Gottrand, M. J. Castillo, S. De Henauw. Longer Breastfeeding Is Associated with Increased Lower Body Explosive Strength during Adolescence. Journal of Nutrition, 2010; 140 (11): 1989 DOI: 10.3945/jn.110.123596

http://www.sciencedaily.com/releases/2011/01/110105071145.htm

Protective Properties of Green Tea Uncovered

Regularly drinking green tea could protect the brain against developing Alzheimer's and other forms of dementia, according to latest research by scientists at Newcastle University. (Credit: iStockphoto/Katarzyna Krawiec) ScienceDaily (Jan. 6, 2011) — Regularly drinking green tea could protect the brain against developing Alzheimer's and other forms of dementia, according to latest research by scientists at Newcastle University. The study, published in the academic journal Phytomedicine, also suggests this ancient Chinese remedy could play a vital role in protecting the body against cancer. Led by Dr Ed Okello, the Newcastle team wanted to know if the protective properties of green tea -- which have previously been shown to be present in the undigested, freshly brewed form of the drink -- were still active once the tea had been digested. Digestion is a vital process which provides our bodies with the nutrients we need to survive. But, says Dr Okello, it also means that just because the food we put into our mouths is generally accepted to contain health-boosting properties, we can't assume these compounds will ever be absorbed by the body. "What was really exciting about this study was that we found when green tea is digested by enzymes in the gut, the resulting chemicals are actually more effective against key triggers of Alzheimer's development than the undigested form of the tea," explains Dr Okello, based in the School of Agriculture, Food and Rural Development at Newcastle University. "In addition to this, we also found the digested compounds had anti-cancer properties, significantly slowing down the growth of the tumour cells which we were using in our experiments." As part of the research, the Newcastle team worked in collaboration with Dr Gordon McDougall of the Plant Products and Food Quality Group at the Scottish Crop Research Institute in Dundee, who developed technology which simulates the human digestive system. It is this which made it possible for the team to analyse the protective properties of the products of digestion. Two compounds are known to play a significant role in the development of Alzheimer's disease -- hydrogen peroxide and a protein known as beta-amyloid. Previous studies have shown that compounds known as polyphenols, present in black and green tea, possess neuroprotective properties, binding with the toxic compounds and protecting the brain cells. When ingested, the polyphenols are broken down to produce a mix of compounds and it was these the Newcastle team tested in their latest research.

"It's one of the reasons why we have to be so careful when we make claims about the health benefits of various foods and supplements," explains Dr Okello. "There are certain chemicals we know to be beneficial and we can identify foods which are rich in them but what happens during the digestion process is crucial to whether these foods are actually doing us any good." Carrying out the experiments in the lab using a tumour cell model, they exposed the cells to varying concentrations of the different toxins and the digested green tea compounds. Dr Okello explained: "The digested chemicals protected the cells, preventing the toxins from destroying the cells. "We also saw them affecting the cancer cells, significantly slowing down their growth. "Green tea has been used in Traditional Chinese medicine for centuries and what we have here provides the scientific evidence why it may be effective against some of the key diseases we face today." The next step is to discover whether the beneficial compounds are produced during digestion after healthy human volunteers consume tea polyphenols. The team has already received funding from the Biotechnology and Biological Sciences Research Council (BBSRC) to take this forward. Dr Okello adds: "There are obviously many factors which together have an influence on diseases such as cancer and dementia -- a good diet, plenty of exercise and a healthy lifestyle are all important."

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Newcastle University.

Journal Reference:

1. E.J. Okello, G.J. McDougall, S. Kumar, C.J. Seal. In vitro protective effects of colon-available extract of Camellia sinensis (tea) against hydrogen peroxide and beta-amyloid (Aβ(1–42)) induced cytotoxicity in differentiated PC12 cells. Phytomedicine, 2010; DOI: 10.1016/j.phymed.2010.11.004

http://www.sciencedaily.com/releases/2011/01/110105194844.htm

Malfunctioning Gene Associated With Lou Gehrig's Disease Leads to Nerve-Cell Death in Mice

Neurons expressing the ALS-associated human protein TDP-43 (green color) show an absence of normal TDP-43 protein (red). Virginia Lee and colleagues have shown that perturbation of normal TDP-43 expression is linked to neuron death. Edward Lee, MD, PhD; University of Pennsylvania School of Medicine. (Credit: Edward Lee) ScienceDaily (Jan. 6, 2011) — Lou Gehrig's disease, or amyotrophic lateral sclerosis (ALS), and frontotemporal lobar degeneration (FTLD) are characterized by protein clumps in brain and spinal-cord cells that include an RNA-binding protein called TDP-43. This protein is the major building block of the lesions formed by these clumps. In a study published in the Journal of Clinical Investigation, a team led by Virginia M.-Y. Lee, PhD, director of Penn's Center for Neurodegenerative Disease Research, describes the first direct evidence of how mutated TDP-43 can cause neurons to die. Although normally found in the nucleus where it regulates gene expression, TDP-43 was first discovered in 2006 to be the major disease protein in ALS and FTLD by the Penn team led by Lee and John Q. Trojanowski, MD, PhD, director of the Institute on Aging at Penn. This discovery has transformed research on ALS and FTLD by linking them to the same disease protein. "The discovery of TDP-43 as the pathological link between mechanisms of nervous system degeneration in both ALS and FTLD opened up new opportunities for drug discovery as well as biomarker development for these disorders," says Lee. "An animal model of TDP-43-mediated disease similar to ALS and FTLD will accelerate these efforts." In the case of TDP-43, neurons could die for two reasons: one, the clumps themselves are toxic to neurons; or two, when TDP-43 is bound up in clumps outside the nucleus, it depletes the cell of normally functioning TDP-43. Normally a cell regulates the exact amount of TDP-43 in itself -- too much is bad and too little is also bad. The loss of TDP-43 function is important in disease because the protein regulates gene expression. To determine the effects of misplaced TDP-43 on the viability of neurons, the researchers made transgenic mice expressing human mutated TDP-43 in the cytoplasm and compared them to mice expressing normal human TDP-43 in the nucleus of nerve cells. Expression of either human TDP-43 led to neuron loss in vulnerable forebrain regions; degeneration of part of the spinal cord tract; and muscle spasms in the mice. These effects recapitulate key aspects of FTLD and a subtype of ALS known as primary lateral sclerosis. The JCI study showed that a dramatic loss of function causes nerve-cell death because normal mouse TDP-43 is eliminated when human mutated TDP-43 genes are put into the mice. Since cells regulate the exact amount of TDP-43, over-expression of the human TDP-43 protein prevents the mouse TDP-43 from functioning normally. Lee and colleagues think this effect leads to neuron death rather than clumps of TDP-43 because these clumps were rare in the mouse cells observed in this study. Lee says that it is not yet clear why clumps were rare in this mouse model when they are so prevalent in human post-mortem brain tissue of ALS and FTLD patients.

Neurodegeneration in the mouse neurons expressing TDP-43 -- both the normal and mutated human versions -- was accompanied by a dramatic downregulation of the TDP-43 protein mice are born with. What's more, mice expressing the mutated human TDP-43 exhibited profound changes in gene expression in neurons of the brain's cortex. The findings suggest that disturbing the normal TDP-43 in the cell nucleus results in loss of normal TDP-43 function and gene regulatory pathways, culminating in degeneration of affected neurons. Next steps, say the researchers, will be to look for the specific genes that are regulated by TDP-43 and how mRNA splicing is involved so that the abnormal regulation of these genes can be corrected. At the same time, notes Lee, "We soon will launch studies of novel strategies to prevent TDP-43-mediated nervous system degeneration using this mouse model of ALS and FTLD." The study was funded in part by funds from the National Institutes of Health.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Pennsylvania School of Medicine.

Journal Reference:

1. Lionel M. Igaz, Linda K. Kwong, Edward B. Lee, Alice Chen-Plotkin, Eric Swanson, Travis Unger, Joe Malunda, Yan Xu, Matthew J. Winton, John Q. Trojanowski, Virginia M.-Y. Lee. Dysregulation of the ALS-associated gene TDP-43 leads to neuronal death and degeneration in mice. Journal of Clinical Investigation, 2011; DOI: 10.1172/JCI44867

http://www.sciencedaily.com/releases/2011/01/110105131755.htm

Helicopter Transport Increases Survival for Seriously Injured Patients, Study Finds

Paramedics unloading patient from a helicopter. (Credit: iStockphoto/Catherine Yeulet) ScienceDaily (Jan. 6, 2011) — Severely injured patients transported by helicopter from the scene of an accident are more likely to survive than patients brought to trauma centers by ground ambulance, according to a new study published in The Journal of Trauma: Injury, Infection, and Critical Care. The study is the first to examine the role of helicopter transport on a national level and includes the largest number of helicopter- transport patients in a single analysis. The finding that helicopter transport positively impacts patient survival comes amid an ongoing debate surrounding the role of helicopter transport in civilian trauma care in the United States, with advocates citing the benefits of fast transport times and critics pointing to safety, utilization and cost concerns. The new national data shows that patients selected for helicopter transport to trauma centers are more severely injured, come from greater distances and require more hospital resources, including admission to the intensive care unit, the use of a ventilator to assist breathing and urgent surgery, compared to patients transported by ground ambulance. Despite this, helicopter-transport patients are more likely than ground-transport patients to survive and be sent home following treatment. "On the national level, it appears as though helicopters are being used appropriately to transport injured patients to trauma centers," said Mark Gestring, M.D., lead study author and director of the Kessler Trauma Center at the University of Rochester Medical Center. "Air medical transport is a valuable resource which can make trauma center care more accessible to patients who would not otherwise be able to reach such centers." Gestring serves as a volunteer board member for Mercy Flight Central Inc., a Canandaigua, New York-based air medical services company. Previous studies on the use of helicopters to transport injured patients report mixed results, but are limited by small patient populations from single institutions or specific regions. Some smaller studies propose helicopters are overused, transporting patients with relatively minor injuries who would likely fare as well if transported by ground. However, the new national data does not reveal such a trend. "The goal is always to get the sickest people to the trauma center as fast as possible, and our data suggest that's exactly what's happening. We're not seeing helicopters being used to transport trivial cases, which is undoubtedly a poor use of resources," noted Gestring. The study included patients transported from the scene of an injury to a trauma center by helicopter or ground transportation in 2007. Gestring and his team used the National Trauma Databank to identify 258,387 patients -- 16 percent were transported by helicopter and 84 percent were transported by ground. The helicopter-transport patients were younger, more likely to be male and more likely to be victims of motor vehicle crashes or falls, compared to ground-transport patients. Overall, almost half of the helicopter-transport
patients were admitted to the intensive care unit, 20 percent required assistance breathing for an average of one week and close to 20 percent needed an operation. Even though they arrived at the hospital in worse condition, they ultimately fared better than those transported by ground. While the study shows that air transport does make a difference in patient outcomes, there is no data available to explain why patients transported by helicopter do better than those transported by ground. Study authors assume that speed of transport -- helicopters are capable of higher speeds over longer distances regardless of terrain -- and the ability of air-medical crews to provide therapies and utilize technologies that are not universally available to ground unit crews, are the main drivers of positive patient outcomes. Helicopter transport has been an integral component of trauma care in the United States since the 1970s, due in large part to the military's experience transporting sick or injured soldiers during war time. The availability of helicopters in the civilian setting has been credited with improving trauma center access for a significant percentage of the population. According to Gestring, the study has some limitations. It is not possible to evaluate the multitude of factors that drive the individual decisions to transport a patient by helicopter in each and every case. In addition, the general nature of the dataset limits specific conclusions that may be drawn or applied to any individual trauma system. The Kessler Trauma Center at the University of Rochester is Western New York's largest trauma center, serving Rochester and the nearly 2 million people in the 17 counties which surround the Finger Lakes Region. The Center is a Level-1 trauma center, providing 24-hour access to comprehensive emergency services. Physicians treat more than 3,000 traumatic injury patients a year. In addition to Gestring, Joshua Brown, B.A., Nicole Stassen, M.D., Paul Bankey, M.D., Ph.D., Ayodele Sangosanya, M.D., and Julius Cheng, M.D., M.P.H., from the University of Rochester Medical Center participated in the research. The study was conducted and funded by the University of Rochester.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Rochester Medical Center.

Journal Reference:

1. Joshua B. Brown, Nicole A. Stassen, Paul E. Bankey, Ayodele T. Sangosanya, Julius D. Cheng, Mark L. Gestring. Helicopters and the Civilian Trauma System: National Utilization Patterns Demonstrate Improved Outcomes After Traumatic Injury. The Journal of Trauma: Injury, Infection, and Critical Care, 2010; 69 (5): 1030 DOI: 10.1097/TA.0b013e3181f6f450

http://www.sciencedaily.com/releases/2011/01/110105102723.htm

Male Pattern Balding May Be Due to Stem Cell Inactivation

Top panels show progenitor cells marked in green (left) and brown (right) in cross section of a hair follicle. Bottom panel shows side view of hair follicle with stem-cell- and progenitor-cell-rich areas. (Credit: George Cotsarelis, MD, University of Pennsylvania School of Medicine) ScienceDaily (Jan. 6, 2011) — Given the amount of angst over male pattern balding, surprisingly little is known about its cause at the cellular level. In a new study, published in the Journal of Clinical Investigation, a team led by George Cotsarelis, MD, chair of the Department of Dermatology at the University of Pennsylvania School of Medicine, has found that stem cells play an unexpected role in explaining what happens in bald scalp. Using cell samples from men undergoing hair transplants, the team compared follicles from bald scalp and non-bald scalp, and found that bald areas had the same number of stem cells as normal scalp in the same person. However, they did find that another, more mature cell type called a progenitor cell was markedly depleted in the follicles of bald scalp. The researchers surmised that balding may arise from a problem with stem-cell activation rather than the numbers of stem cells in follicles. In male pattern balding, hair follicles actually shrink; they don't disappear. The hairs are essentially microscopic on the bald part of the scalp compared to other spots. "We asked: 'Are stem cells depleted in bald scalp?'" says Cotsarelis. "We were surprised to find the number of stem cells was the same in the bald part of the scalp compared with other places, but did find a difference in the abundance of a specific type of cell, thought to be a progenitor cell," he says. "This implies that there is a problem in the activation of stem cells converting to progenitor cells in bald scalp." At this point, the researchers don't know why there is a breakdown in this conversion. "However, the fact that there are normal numbers of stem cells in bald scalp gives us hope for reactivating those stem cells," notes Cotsarelis. In 2007, the Cotsarelis lab found that hair follicles in adult mice regenerate by re-awakening genes once active only in developing embryos. The team determined that wound healing in a mouse model created an "embryonic window" of opportunity to manipulate the number of new hair follicles that form. By activating dormant embryonic molecular pathways, stem cells were coaxed into forming new hair follicles. In the JCI study, the group also found a progenitor cell population in mice that is analogous to the human cells; these cells were able to make hair follicles and grow hair when injected into immunodeficient mice.

The researchers say their next steps will be to study the stem and progenitor populations in other types of hair loss, including female pattern hair loss. The information may assist in developing cell-based treatments for male pattern balding by isolating stem cells and expanding them to add back to the scalp directly. They will also focus on identifying factors that could be used topically to convert stem cells to progenitor cells to generate normal large hairs. First author Luis Garza, MD, PhD, a dermatologist and former postdoctoral fellow in the Cotsarelis lab, performed much of the work and is now an assistant professor of Dermatology at Johns Hopkins University. The research was funded in part by the National Institute of Arthritis and Musculoskeletal and Skin Diseases; the Pennsylvania Department of Health; the Fannie Gray Hall Center for Human Appearance; and L'Oreal.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Pennsylvania School of Medicine.

Journal Reference:

1. Luis A. Garza, Chao-Chun Yang, Tailun Zhao, Hanz B. Blatt, Michelle Lee, Helen He, David C. Stanton, Lee Carrasco, Jeffrey H. Spiegel, John W. Tobias, George Cotsarelis. Bald scalp in men with androgenetic alopecia retains hair follicle stem cells but lacks CD200-rich and CD34-positive hair follicle progenitor cells. Journal of Clinical Investigation, 2011; DOI: 10.1172/JCI44478

http://www.sciencedaily.com/releases/2011/01/110104133905.htm

Films for Façades: New Building Material Offers New Design Options

The plastic film ETFE is experiencing a boom these days because it gives architects completely new design options. Using a new gluing technique many different shapes can be made. (Credit: Copyright Fraunhofer Gesellschaft IFAM) ScienceDaily (Jan. 6, 2011) — The plastic film ETFE is experiencing a boom these days because it gives architects completely new design options. You can use this material not only for futuristic sports stadiums, but also to insulate buildings and control their heating. The researchers at Fraunhofer are unveiling new ways to process the film façades at the BAU 2011 trade fair in Munich from January 17 to 22 (stand 131 in Hall C2). Films instead of walls. This is an idea that fascinates architects all over the world. The Eden Project in Southern England, the National Aquatics Center built for swimming events at the Olympics in Beijing and the Allianz Arena in Munich are only three examples of what you can make from plastic sheets. Ethylene tetrafluoroethylene (ETFE), a transparent membrane, is especially popular because it enables buildings that shine in all colors, as in Munich and Beijing. But we are not just talking about colors. You can use this new foil for an intelligent improvement of existing buildings -- by regulating heat, coolness and light precisely according to needs. Experts see film construction as a market poised for the future. Whether this market develops, and if so how quickly, is not a question of taste but of technical possibilities and cost. Films will have to be low-cost, easy to process and free of health hazards for them to have a chance in the international construction business. This is the target that six Fraunhofer institutes are working jointly toward in the Multifunctional Membrane Cushion Construction project. Engineers have been able to use coatings to change the properties of ETFE foils specifically. For example, membrane cushions with an inner coating of tungsten trioxide turn blue when they come into contact with hydrogen and lose their color if the cushions are filled with oxygen. Thus the passage of light can easily be regulated. Project coordinator Andreas Kaufmann of the Fraunhofer Institute for Building Physics (IBP) states: "You could use a foil such as this to cover the entire façade of a house and have light pass depending upon sunlight conditions." The researchers were also able to solve another problem. To date, ETFE membranes have hardly been able to create a heat barrier, but a coat of paper-thin (and therefore transparent) layers of aluminum and paint ensures that heat radiation is effectively reflected. Kaufmann explains that "the challenge was overcoming the anti-adhesive properties of the membrane. ETFE is related to the anti-stick substance Teflon and hardly reacts with other substances chemically. This is why the surface of the foil first has to be pretreated chemically before coating." In the meantime, the researchers have not only come up with heat-insulating, but also antibacterial layers that inhibit the growth of mold and yeasts that form ugly black coverings. As Robert Hodann, the CEO at film manufacturer Nowofol and industrial partner of the research project, puts it, "We believe that ETFE will emerge as a strong market of its own. The captivating thing about ETFE foil is its transparency combined with its great strength -- no other plastic membrane can compete." For instance, it will be possible to make LED façades with ETFE foil behind which thousands of light-emitting diodes can be installed. This would be an easy way to transform façades into gigantic illuminated screens.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Fraunhofer-Gesellschaft.

http://www.sciencedaily.com/releases/2010/12/101216095023.htm

Freshwater Methane Release Changes Greenhouse Gas Equation

Greenhouse gas uptake by continents is less than previously thought because of methane emissions from freshwater areas. (Credit: Copyright Michele Hogan) ScienceDaily (Jan. 6, 2011) — An international team of scientists has released data indicating that greenhouse gas uptake by continents is less than previously thought because of methane emissions from freshwater areas. John Downing, an Iowa State University professor in the ecology, evolution and organismal biology department, is part of an international team that concluded that methane release from inland waters is higher than previous estimates. The study, published in the journal Science, indicates that methane gas release from freshwater areas changes the net absorption of greenhouse gases by natural continental environments, such as forests, by at least 25 percent. Past analyses of carbon and greenhouse gas exchanges on continents failed to account for the methane gas that is naturally released from lakes and running water. Downing, a laboratory limnologist at Iowa State, has also conducted research measuring the amount of carbon sequestered in lake and pond sediment. This new study gives scientists a better understanding of the balance between carbon sequestration and greenhouse gas releases from fresh water bodies. "Methane is a greenhouse gas that is more potent than carbon dioxide in the global change scenario," Downing said. "The bottom line is that we have uncovered an important accounting error in the global carbon budget. Acre for acre, lakes, ponds, rivers and streams are many times more active in carbon processing than seas or land surfaces, so they need to be included in global carbon budgets." Methane emissions from lakes and running water occur naturally, but have been difficult to assess. David Bastviken, principal author and professor in the department of water and environmental studies, at Linköping University in Sweden, said small methane emissions from the surfaces of water bodies occur continuously. "Greater emissions occur suddenly and with irregular timing, when methane bubbles from the sediment reach the atmosphere, and such fluxes have been difficult to measure," Bastviken said.


The greenhouse effect is caused by human emission of gases that act like a blanket and trap heat inside the Earth's atmosphere, according to the Intergovernmental Panel on Climate Change. Some ecosystems, such as forests, can absorb and store greenhouse gases. The balance between emissions and uptake determines how climate will change. The role of freshwater environments has been unclear in previous budgets, Downing said. The researchers studied methane fluxes from 474 freshwater areas and calculated emissions based on new estimates of the global area covered by inland waters. The international team also included Lars Tranvik, Uppsala University; Patrick Crill, Stockholm University; and Alex Enrich-Prast, Federal University of Rio de Janeiro.
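A back-of-the-envelope illustration of the accounting point made above (an editorial sketch with placeholder numbers, not the values from the Science paper): because each tonne of methane traps far more heat over a century than a tonne of carbon dioxide, even a modest freshwater methane flux can offset a sizeable share of the continental carbon sink once it is converted into CO2 equivalents.

```python
# Illustrative sketch of the "accounting error": convert a freshwater methane flux
# into CO2 equivalents and compare it with a continental carbon sink.
# The sink and flux values below are placeholders, NOT figures from the study.

GWP_CH4 = 25.0                  # 100-year global warming potential of methane (IPCC AR4 value)

continental_sink_PgCO2 = 10.0   # hypothetical net continental CO2 uptake, Pg CO2 per year
freshwater_ch4_Pg = 0.1         # hypothetical freshwater CH4 emission, Pg CH4 per year

offset_PgCO2eq = freshwater_ch4_Pg * GWP_CH4
offset_fraction = offset_PgCO2eq / continental_sink_PgCO2

print(f"CH4 emissions correspond to {offset_PgCO2eq:.1f} Pg CO2-eq per year,")
print(f"offsetting {offset_fraction:.0%} of the assumed continental sink")
```

With these placeholder figures the offset comes out at 25 per cent, the size of the correction the study reports; the actual calculation uses the team's measured fluxes and the new inland-water area estimates.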

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Iowa State University.

Journal Reference:

1. D. Bastviken, L. J. Tranvik, J. A. Downing, P. M. Crill, A. Enrich-Prast. Freshwater Methane Emissions Offset the Continental Carbon Sink. Science, 2011; 331 (6013): 50 DOI: 10.1126/science.1196808

http://www.sciencedaily.com/releases/2011/01/110106153119.htm


Surprising Flares in Crab Nebula

Fermi's Large Area Telescope has recently detected two short-duration gamma-ray pulses coming from the Crab Nebula, which was previously believed to emit radiation at very steady rate. The pulses were fueled by the most energetic particles ever traced to a discrete astronomical object. (Credit: NASA/ESA) ScienceDaily (Jan. 6, 2011) — The Crab Nebula, one of our best-known and most stable neighbors in the winter sky, is shocking scientists with a propensity for fireworks -- gamma-ray flares set off by the most energetic particles ever traced to a specific astronomical object. The discovery, reported by scientists working with two orbiting telescopes, is leading researchers to rethink their ideas of how cosmic particles are accelerated. "We were dumbfounded," said Roger Blandford, who directs the Kavli Institute for Particle Astrophysics and Cosmology, jointly located at the Department of Energy's SLAC National Accelerator Laboratory and Stanford University. "It's an emblematic object," he said; also known as M1, the Crab Nebula was the first astronomical object catalogued in 1771 by Charles Messier. "It's a big deal historically, and we're making an amazing discovery about it." Blandford was part of a KIPAC team led by scientists Rolf Buehler and Stefan Funk that used observations from the Large Area Telescope, one of two primary instruments aboard NASA's Fermi Gamma-ray Space Telescope, to confirm one flare and discover another. Their report was posted online January 6 in Science Express alongside a report from the Italian orbiting telescope Astro-rivelatore Gamma a Immagini LEggero, or AGILE, which also detected gamma-ray flares in the Crab Nebula. The Crab Nebula, and the rapidly spinning neutron star that powers it, are the remnants of a supernova explosion documented by Chinese and Middle Eastern astronomers in 1054. After shedding much of its outer gases and dust, the dying star collapsed into a pulsar, a super-dense, rapidly spinning ball of neutrons that emits a pulse of radiation every 33 milliseconds, like clockwork. Though it's only 10 miles across, the amount of energy the pulsar releases is enormous, lighting up the Crab Nebula until it shines 75,000 times more brightly than the sun. Most of this energy is contained in a particle wind of energetic electrons and positrons traveling close to the speed of light. These electrons and positrons interact with magnetic fields and low-energy photons to produce the famous glowing tendrils of dust and gas Messier mistook for a comet over 300 years ago. The particles are even forceful enough to produce the gamma rays the LAT normally observes during its regular surveys of the sky. But those particles did not cause the dramatic flares. Each of the two flares the LAT observed lasted mere days before the Crab Nebula's gamma-ray output returned to more normal levels. According to Funk, the short duration of the flares points to synchrotron radiation, or radiation emitted by electrons accelerating in the magnetic field of the nebula, as the cause. And
not just any accelerated electrons: the flares were caused by super-charged electrons of up to 10 peta-electron volts, or 10 quadrillion electron volts, 1,000 times more energetic than anything the world's most powerful man-made particle accelerator, the Large Hadron Collider in Europe, can produce, and more than 15 orders of magnitude more energetic than photons of visible light. "The strength of the gamma-ray flares shows us they were emitted by the highest-energy particles we can associate with any discrete astrophysical object," Funk said. Not only are the electrons surprisingly energetic, added Buehler, but, "the fact that the intensity is varying so rapidly means the acceleration has to happen extremely fast." This challenges current theories about the way cosmic particles are accelerated, which cannot easily account for the extreme energies of the electrons or the speed with which they're accelerated. The discovery of the Crab Nebula's gamma-ray flares raises one obvious question: how can the nebula do that? Obvious question, but no obvious answers. The KIPAC scientists all agree they need a closer look at higher resolutions and in a variety of wavelengths before they can make any definitive statements. The next time the Crab Nebula flares the Fermi LAT team will not be the only team gathering data, but they'll need all the contributions they can get to decipher the nebula's mysteries. "We thought we knew the essential ingredients of the Crab Nebula," Funk said, "but that's no longer true. It's still surprising us."
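As a quick arithmetic check of the two comparisons quoted above (an editorial illustration; the LHC figure is taken as its 7 TeV design energy per proton):

```python
import math

eV, TeV, PeV = 1.0, 1e12, 1e15

flare_electron = 10 * PeV      # flare electrons of up to 10 peta-electronvolts
lhc_proton = 7 * TeV           # LHC design energy per proton
visible_photon = 2 * eV        # a typical photon of visible light

print(f"{flare_electron / lhc_proton:.0f}x the LHC")   # ~1400, i.e. roughly a thousand times
print(f"{math.log10(flare_electron / visible_photon):.1f} orders of magnitude above visible light")  # ~15.7
```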

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/SLAC National Accelerator Laboratory.

Journal Reference:

1. A. A. Abdo et al. Gamma-Ray Flares from the Crab Nebula. Science, 2011; DOI: 10.1126/science.1199705

http://www.sciencedaily.com/releases/2011/01/110106145432.htm


Deep Interior of Moon Resembles Earth's Core

Patty Lin, a postdoctoral candidate in ASU's School of Earth and Space Exploration, holds Northwest Africa 5000, a lunar meteorite in the collection of the ASU Center for Meteorite Studies. Lin and her adviser professor Ed Garnero use seismology to study inaccessible regions of Earth's interior, a technique they are now applying on the moon to learn more about Earth's natural satellite. (Credit: Tom Story/ASU) ScienceDaily (Jan. 6, 2011) — The Moon, Earth's closest neighbor, has long been studied to help us better understand our own planet. Of particular interest is the lunar interior, which could hold clues to its ancient origins. In an attempt to extract information on the very deep interior of the Moon, a team of NASA-led researchers applied new technology to old data. Apollo seismic data was reanalyzed using modern methodologies and detected what many scientists have predicted: the Moon has a core. According to the team's findings, published Jan. 6 in the online edition of Science, the Moon possesses an iron-rich core with a solid inner ball nearly 150 miles in radius, and a 55-mile thick outer fluid shell. "The Moon's deepest interior, especially whether or not it has a core, has been a blind spot for seismologists," says Ed Garnero, a professor at the School of Earth and Space Exploration in ASU's College of Liberal Arts and Sciences. "The seismic data from the old Apollo missions were too noisy to image the Moon with any confidence. Other types of information have inferred the presence of a lunar core, but the details on its size and composition were not well constrained." Sensitive seismographs scattered across Earth make studying our planet's interior possible. After earthquakes these instruments record waves that travel through the interior of the planet, which help to determine the structure and composition of Earth's layers. Just as geoscientists study earthquakes to learn about the structure of Earth, seismic waves of "moonquakes" (seismic events on the Moon) can be analyzed to probe the lunar interior. When Garnero and his graduate student Peiying (Patty) Lin heard about research being done to hunt for the core of the Moon by lead author Renee Weber at NASA's Marshall Space Flight Center, they suggested that array processing might be an effective approach, a method where seismic recordings are added together in a special way and studied in concert. The multiple recordings processed together allow researchers to extract very faint signals. The depth of layers that reflect seismic energy can be identified, ultimately signifying the composition and state of matter at varying depths. "Array processing methods can enhance faint, hard-to-detect seismic signals by adding seismograms together. If seismic wave energy goes down and bounces off of some deep interface at a particular depth, like the Moon's core-mantle boundary, then that signal "echo" should be present in all the recordings, even if below
the background noise level. But when we add the signals together, that core reflection amplitude becomes visible, which lets us map the deep Moon," explains Lin, who is also one of the paper's authors. The team found the deepest interior of the moon to have considerable structural similarities with the Earth. Their work suggests that the lunar core contains a small percentage of light elements such as sulfur, similar to light elements in Earth's core -- sulfur, oxygen and others. "There are a lot of exciting things happening with the Moon, like Professor Mark Robinson's LRO mission producing hi-res photos of amazing phenomena. However, just as with Earth, there is much we don't know about the lunar interior, and that information is key to deciphering the origin and evolution of the Moon, including the very early Earth," explains Garnero.
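The array-processing idea described above (adding many seismograms together so that a reflection arriving at a fixed delay grows relative to the random noise) can be sketched in a few lines. This is a toy illustration of the principle, not the processing pipeline used by Weber and colleagues.

```python
import numpy as np

rng = np.random.default_rng(0)
n_records, n_samples = 500, 1000
echo_sample, echo_amp = 600, 0.3          # a faint "core reflection" at a fixed delay

# Each moonquake record is mostly noise, with the same weak echo buried in it.
records = rng.normal(0.0, 1.0, size=(n_records, n_samples))
records[:, echo_sample] += echo_amp

stack = records.mean(axis=0)              # add the aligned records together

# Averaging N records shrinks the random noise by a factor sqrt(N), so the echo emerges.
print(f"echo-to-noise in one record: {echo_amp / 1.0:.2f}")
print(f"echo-to-noise after stacking: {echo_amp * np.sqrt(n_records):.1f}")
print(f"largest stacked amplitude at sample {np.argmax(np.abs(stack))}")
```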

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Arizona State University, via EurekAlert!, a service of AAAS.

Journal Reference:

1. Renee C. Weber, Pei-Ying Lin, Edward J. Garnero, Quentin Williams, and Philippe Lognonne. Seismic Detection of the Lunar Core. Science, 6 January 2011 DOI: 10.1126/science.1199375

http://www.sciencedaily.com/releases/2011/01/110106144751.htm


Cancer in a Single Catastrophe: Chromosome Crisis Common in Cancer Causation

Evidence of chromothripsis in small cell lung cancer. (Credit: Stephens PJ et al. (2011) Massive genomic rearrangement acquired in a single catastrophic event during cancer development. Cell) ScienceDaily (Jan. 6, 2011) — Remarkable new research overthrows the conventional view that cancer always develops in a steady, stepwise progression. It shows that in some cancers, the genome can be shattered into hundreds of fragments in a single cellular catastrophe, wreaking mutation on a massive scale. The scars of this chromosomal crisis are seen in cases from across all the common cancer types, accounting for at least one in forty of all cancers. The phenomenon is particularly common in bone cancers, where the distinctively ravaged genome is seen in up to one in four cases. The team looked at structural changes in the genomes of cancer samples using advanced DNA sequencing technologies. In some cases, they found dramatic structural changes affecting highly localised regions of one or a handful of chromosomes that could not be explained using standard models of DNA damage. "The results astounded us," says Dr Peter Campbell, from the Cancer Genome Project at the Wellcome Trust Sanger Institute and senior author on the paper. "It seems that in a single cell in a single event, one or more chromosomes basically explode -- literally into hundreds of fragments. "In some instances -- the cancer cases -- our DNA repair machinery tries to stick the chromosomes back together but gets it disastrously wrong. Out of the hundreds of mutations that result, several promote the development of cancer." Cancer is typically viewed as a gradual evolution, taking years to accumulate the multiple mutations required to drive the cancer's aggressive growth. Many cancers go through phases of abnormal tissue growth before eventually developing into malignant tumours. The new results add an important new insight, a new process that must be included in our consideration of cancer genome biology. In some cancers, a chromosomal crisis can generate multiple cancer-causing mutations in a single event. "We suspect catastrophes such as this might happen occasionally in the cells of our body," says Dr Andy Futreal, Head of Cancer Genetics and Genomics at the Wellcome Trust Sanger Institute and an author on the paper. "The cells have to make a decision -- to repair or to give up the ghost. "Most often, the cell gives up, but sometimes the repair machinery sticks bits of genome back together as best it can. This produces a fractured genome riddled with mutations which may well have taken a considerable leap along the road to cancer." The new genome explosions caused 239 rearrangements on a single chromosome in one case of colorectal cancer.


The damage was particularly common in bone cancers, where it affected five of twenty samples. In one of these samples the team found three cancer genes that they believe were mutated in a single event: all three are genes that normally suppress cancer development and when deleted or mutated can lead to increased cancer development. "The evidence suggests that a single cellular crisis shatters a chromosome or chromosomes," says Professor Mike Stratton, Director of the Wellcome Trust Sanger Institute and an author on the paper, "and that the DNA repair machinery pastes them back together in a highly erroneous order. "It is remarkable that, not only can a cell survive this crisis, it can emerge with a genomic landscape that confers a selective advantage on the clone, promoting evolution towards cancer." The team propose two possible causes of the damage they see. First, they suggest it might occur during cell division, when chromosomes are packed into a condensed form. Ionizing radiation can cause breaks like those seen. The second proposition is that attrition of telomeres -- the specialized genome sequences at the tips of chromosomes -- causes genome instability at cell division. This work was supported by the Wellcome Trust and the Chordoma Foundation.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Wellcome Trust Sanger Institute.

Journal Reference:

1. Philip J. Stephens, Chris D. Greenman, Beiyuan Fu, Fengtang Yang, Graham R. Bignell, Laura J. Mudie, Erin D. Pleasance, King Wai Lau, David Beare, Lucy A. Stebbings, Stuart McLaren, Meng-Lay Lin, David J. McBride, Ignacio Varela, Serena Nik-Zainal, Catherine Leroy, Mingming Jia, Andrew Menzies, Adam P. Butler, Jon W. Teague, Michael A. Quail, John Burton, Harold Swerdlow, Nigel P. Carter, Laura A. Morsberger, Christine Iacobuzio-Donahue, George A. Follows, Anthony R. Green, Adrienne M. Flanagan, Michael R. Stratton, P. Andrew Futreal, Peter J. Campbell. Massive Genomic Rearrangement Acquired in a Single Catastrophic Event during Cancer Development. Cell, 2011; 144 (1): 27-40 DOI: 10.1016/j.cell.2010.11.055

http://www.sciencedaily.com/releases/2011/01/110106144735.htm


Brain training for tinnitus reverses ringing in ears

• 12 January 2011 • Magazine issue 2795.

Repairing the damage (Image: Emily Shur/Getty) THERE may be hope for people with tinnitus. Rats with the condition showed reduced symptoms after a new treatment. People with tinnitus hear noise, such as ringing, in the absence of a corresponding external sound. The condition can cause loss of balance, depression and insomnia. Tinnitus can occur for many reasons, including exposure to long periods of loud noise, normal ageing and infection. It is thought to be caused by a reorganisation of the brain's auditory cortex, so too many neurons respond to particular sound frequencies. Navzer Engineer of the University of Texas at Dallas and his colleagues reasoned that reorganising these auditory areas might have an effect on tinnitus. The team played tones in a range of frequencies, except those that caused tinnitus, to seven rats with the condition while stimulating their vagus nerve, a cranial nerve known to affect brain plasticity. They repeated the process 300 times a day. After 18 days of this, the rats showed a significant reduction in tinnitus-related behaviour, which lasted three weeks. No change was seen in those given a sham treatment (Nature, DOI: 10.1038/nature09656). Engineer is now working on a device for use in humans. http://www.newscientist.com/article/mg20927954.100-brain-training-for-tinnitus-reverses-ringing-in- ears.html?full=true&print=true


Ethereal quantum state stored in solid crystal

• 12 January 2011 by Stephen Battersby • Magazine issue 2795.

ETHEREAL quantum entanglement has been captured in solid crystals, showing that it is more robust than once assumed. These entanglement traps could make quantum computing and communication more practical. In the quantum world, two or more objects can be entangled so that measuring one affects the outcome of measuring the others, no matter how far apart the objects are. This property is central to quantum cryptography, where it allows two people to be sure a secret key they shared was not intercepted, and to quantum computing, as entangled bits occupy a superposition of two or more states at once and so can be used to solve some problems much faster than conventional computers. A missing element is memory, which is needed to do complex calculations and to transmit quantum states over large distances. "Photons travel at the speed of light, which is bad for storage," says Wolfgang Tittel at the University of Calgary in Alberta, Canada. Chilled clouds of atoms can act as quantum memory, but this requires bulky equipment and trained physicists. Now Tittel's group, and a separate team led by Nicolas Gisin at the University of Geneva in Switzerland, have created practical, solid-state quantum-memory devices. Tittel's team started by channelling one photon of an entangled pair into a crystal of lithium niobate doped with ions of thulium. This sends the crystal into a quantum superposition, in which many thulium ions absorb the photon at once and vibrate at different frequencies. The frequencies aren't random, however. Beforehand, the team strategically removed some thulium ions, leaving only those that absorb a particular sequence of frequencies. This tuning ensures that after about 7 nanoseconds the oscillations all come back into sync to recreate a copy of the original photon. The researchers verified that such photons were still entangled by comparing measurements of the stored photon with measurements of its partner (Nature, DOI: 10.1038/nature09719). Gisin's device is similar but used a different type of crystal (Nature, DOI: 10.1038/nature09662). "My professors used to say that entanglement is like a dream: as soon as you think about it, it is gone," says Gisin. "Now we can show that it is pretty robust." For some applications, 7 nanoseconds might be useful, and for now this storage time is fixed. In future it could be extended to seconds, perhaps by using electric fields to alter the tuning of the ions. http://www.newscientist.com/article/mg20927953.300-ethereal-quantum-state-stored-in-solid-crystal.html
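The delayed re-emission described above is the signature of an "atomic frequency comb" memory: absorbers are left only at frequencies spaced by a fixed interval, so their phases drift apart and then realign after a time equal to one over that spacing. A minimal numerical sketch (an editorial illustration, not the teams' model), with the spacing chosen to give roughly the 7-nanosecond storage time mentioned above:

```python
import numpy as np

spacing = 1 / 7e-9                       # comb tooth spacing in Hz, so 1/spacing = 7 ns
detunings = spacing * np.arange(40)      # ion groups at 0, 1x, 2x, ... the spacing
t = np.linspace(0, 10e-9, 2001)          # 0 to 10 ns

# Each group oscillates at its own detuning after absorbing the photon; the
# collective re-emission amplitude is the (normalised) sum of their phase factors.
phases = np.exp(1j * 2 * np.pi * np.outer(t, detunings))
amplitude = np.abs(phases.sum(axis=1)) / len(detunings)

revival = 1 + np.argmax(amplitude[1:])   # skip t = 0, where everything is trivially in phase
print(f"collective amplitude revives at about {t[revival] * 1e9:.1f} ns")
```

Extending the storage time then amounts to being able to change the tuning of the ions after the photon has been absorbed, which is the route to longer memories mentioned above.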


Scorn over claim of teleported DNA

• 12 January 2011 by Andy Coghlan • Magazine issue 2795.

How could it leave its mark? (Image: Pasieka/SPL)

A Nobel prizewinner is reporting that DNA can be generated from its teleported "quantum imprint" A STORM of scepticism has greeted experimental results emerging from the lab of a Nobel laureate which, if confirmed, would shake the foundations of several fields of science. "If the results are correct," says theoretical chemist Jeff Reimers of the University of Sydney, Australia, "these would be the most significant experiments performed in the past 90 years, demanding re-evaluation of the whole conceptual framework of modern chemistry." Luc Montagnier, who shared the Nobel prize for medicine in 2008 for his part in establishing that HIV causes AIDS, says he has evidence that DNA can send spooky electromagnetic imprints of itself into distant cells and fluids. If that wasn't heretical enough, he also suggests that enzymes can mistake the ghostly imprints for real DNA, and faithfully copy them to produce the real thing. In effect this would amount to a kind of quantum teleportation of the DNA. Many researchers contacted for comment by New Scientist reacted with disbelief. Gary Schuster, who studies DNA conductance effects at Georgia Institute of Technology in Atlanta, compared it to "pathological science". Jacqueline Barton, who does similar work at the California Institute of Technology in Pasadena, was equally sceptical. "There aren't a lot of data given, and I don't buy the explanation," she says. One blogger has suggested Montagnier should be awarded an IgNobel prize. Yet the results can't be dismissed out of hand. "The experimental methods used appear comprehensive," says Reimers. So what have Montagnier and his team actually found? Full details of the experiments are not yet available, but the basic set-up is as follows. Two adjacent but physically separate test tubes were placed within a copper coil and subjected to a very weak extremely low frequency electromagnetic field of 7 hertz. The apparatus was isolated from Earth's natural magnetic field to stop it interfering with the experiment. One tube contained a fragment of DNA around 100 bases long; the second tube contained pure water. After 16 to 18 hours, both samples were independently subjected to the polymerase chain reaction (PCR), a method routinely used to amplify traces of DNA by using enzymes to make many copies of the original material. The gene fragment was apparently recovered from both tubes, even though one should have contained just water (see diagram). DNA was only recovered if the original solution of DNA - whose concentration has not been revealed - had been subjected to several dilution cycles before being placed in the magnetic field. In each cycle it was diluted 10-fold, and "ghost" DNA was only recovered after between seven and 12 dilutions of the original. It was not found at the ultra-high dilutions used in homeopathy.


Physicists in Montagnier's team suggest that DNA emits low-frequency electromagnetic waves which imprint the structure of the molecule onto the water. This structure, they claim, is preserved and amplified through quantum coherence effects, and because it mimics the shape of the original DNA, the enzymes in the PCR process mistake it for DNA itself, and somehow use it as a template to make DNA matching that which "sent" the signal (arxiv.org/abs/1012.5166). "The biological experiments do seem intriguing, and I wouldn't dismiss them," says Greg Scholes of the University of Toronto in Canada, who last year demonstrated that quantum effects occur in plants. Yet according to Klaus Gerwert, who studies interactions between water and biomolecules at the Ruhr University in Bochum, Germany, "It is hard to understand how the information can be stored within water over a timescale longer than picoseconds." "The structure would be destroyed instantly," agrees Felix Franks, a retired academic chemist in London who has studied water for many years. Franks was involved as a peer reviewer in the debunking of a controversial study in 1988 which claimed that water had a memory (see "How 'ghost molecules' were exorcised"). "Water has no 'memory'," he says now. "You can't make an imprint in it and recover it later." Despite the scepticism over Montagnier's explanation, the consensus was that the results deserve to be investigated further. Montagnier's colleague, theoretical physicist Giuseppe Vitiello of the University of Salerno in Italy, is confident that the result is reliable. "I would exclude that it's contamination," he says. "It's very important that other groups repeat it." In a paper last year (Interdisciplinary Sciences: Computational Life Sciences, DOI: 10.1007/s12539-009-0036-7), Montagnier described how he discovered the apparent ability of DNA fragments and entire bacteria both to produce weak electromagnetic fields and to "regenerate" themselves in previously uninfected cells. Montagnier strained a solution of the bacterium Mycoplasma pirum through a filter with pores small enough to prevent the bacteria penetrating. The filtered water emitted the same frequency of electromagnetic signal as the bacteria themselves. He says he has evidence that many species of bacteria and many viruses give out the electromagnetic signals, as do some diseased human cells. Montagnier says that the full details of his latest experiments will not be disclosed until the paper is accepted for publication. "Surely you are aware that investigators do not reveal the detailed content of their experimental work before its first appearance in peer-reviewed journals," he says.
How 'ghost molecules' were exorcised
The latest findings by Luc Montagnier evoke long-discredited work by the French researcher Jacques Benveniste. In a paper in Nature (vol 333, p 816) in 1988 he claimed to show that water had a "memory", and that the activity of human antibodies was retained in solutions so dilute that they couldn't possibly contain any antibody molecules (New Scientist, 14 July 1988, p 39). Faced with widespread scepticism over the paper, including from the chemist Felix Franks who had advised against publication, Nature recruited magician James Randi and chemist and "fraudbuster" Walter Stewart of the US National Institutes of Health in Bethesda, Maryland, to investigate Benveniste's methods.
They found his result to be "a delusion", based on a flawed design. In 1991, Benveniste repeated his experiment under double-blind conditions, but not to the satisfaction of referees at Nature and Science. Two years later came the final indignity when he was suspended for damaging the image of his institute. He died in October 2004. That's not to say that quantum effects must be absent from biological systems. Quantum effects have been proposed in both plants and birds. Montagnier and his colleagues are hoping that their paper won't suffer the same fate as Benveniste's. http://www.newscientist.com/article/mg20927952.900-scorn-over-claim-of-teleported-dna.html


Spinning cosmic dust motes set speed record

• 15:18 12 January 2011 by David Shiga

Fast-spinning hotspot in the cosmic microwave background (Image: ESA/Planck Collaboration) Some dust grains out in deep space are spinning at mind-boggling rates. According to data from the European Space Agency's Planck space telescope, these interstellar particles are turning on their axis tens of billions of times every second. The measurements, which set a record for spinning objects, could lead to a more accurate map of the cosmic microwave background, the afterglow of the big bang, by allowing scientists to better account for distortions caused by microwaves emitted by the grains. The smallest grains, only 100 atoms or so wide, are so light that they can be set spinning by collisions with photons and fast-moving atoms. "Atoms or photons that come along knock the hell out of them," says Charles Lawrence of NASA's Jet Propulsion Laboratory, a member of the Planck science team. Molecular cloud Because these spinning grains carry an electric charge, they can emit electromagnetic radiation. Observations of clouds of gas and dust in our galaxy have previously turned up excess emission at around 15 to 20 gigahertz, though whether spinning dust was responsible wasn't clear. Vibrations of dust grains can cause microwave radiation at frequencies similar to those due to fast rotation, as can collisions between speeding electrons and protons. Now a combination of observations from Planck and the ground-based Cosmosomas instrument at the Teide observatory in the Canary Islands has produced the most detailed energy spectra yet of excess emission from interstellar dust. Both instruments looked at a molecular cloud in the constellation Perseus in particular, which showed an excess between 10 and 100 gigahertz that peaks around 20 to 40 gigahertz. Record smashed


The size, shape and peak of the spectrum closely match predictions from spinning dust grains, confirming that it is the likely source of the excess radiation. "It takes away any wiggle room that was there," Lawrence told astronomers on Tuesday at a meeting of the American Astronomical Society in Seattle, Washington. The grains spin much faster than other noted fast-spinning objects like graphene flakes that spin at 1 million rotations per second in the laboratory, and pulsars, the fast-spinning remnants of dead stars with a theoretical upper limit of about 3000 rotations per second. The Planck observations are also turning up new regions where there is likely to be spinning dust, says science team member Gregory Dobler of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. "It's pretty ubiquitous in the galaxy."
Cold cores
Further measurements by Planck should provide a better understanding of the spinning dust emission in our galaxy, Dobler says. That in turn should help efforts to correct for its distorting effects when building an accurate map of the cosmic microwave background, which is Planck's main goal. The study has been posted online and submitted to the journal Astronomy and Astrophysics. Planck scientists also announced on Tuesday the discovery of 10,000 clumps of gas and dust on the verge of forming stars in our galaxy, called cold cores, and the identification of 189 massive clusters of galaxies. http://www.newscientist.com/article/dn19955-spinning-cosmic-dust-motes-set-speed-record.html?full=true&print=true
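The "tens of billions of rotations per second" figure follows from the emission itself: a small grain carrying an electric dipole and spinning at some frequency radiates at roughly that frequency, so a 20-40 gigahertz emission peak implies 2-4 x 10^10 rotations per second. A quick comparison with the other spinners mentioned above:

```python
graphene_flakes = 1e6      # lab-spun graphene flakes, rotations per second (figure quoted above)
pulsar_limit = 3e3         # theoretical upper limit for pulsars, rotations per second

for peak_hz in (20e9, 40e9):               # the observed emission peak maps onto the spin rate
    print(f"{peak_hz:.0e} rot/s: {peak_hz / graphene_flakes:.0e}x graphene, "
          f"{peak_hz / pulsar_limit:.0e}x the pulsar limit")
```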


Fledgling space firm will use old Soviet gear

• 10:35 12 January 2011 by Rachel Courtland

From Russia with love: space modules reach the Isle of Man (Image: EA Ltd) It is a second chance for two relics of the cold war. Shipped from Russia, a pair of Soviet space station modules arrived in the Isle of Man last week, home to the fledgling private space firm Excalibur Almaz. The firm eventually plans to use the modules to provide extra room and supplies for the tourists and researchers it hopes to ferry into space. Using decades-old equipment may seem like an odd move for a brand new space company but former astronaut Leroy Chiao, who oversees technical operations for the firm, says the spacecraft are more valuable than new vehicles because they have been extensively tested. "It's not the nice shiny stuff, but as much as possible we want to use things that have been tested and proven," he says. Parachute preserved The modules, which have never been launched, were built as part of Almaz, a Soviet military programme that sent astronauts into orbit to take reconnaissance photographs of Earth. But Excalibur has also bought four reusable Almaz spacecraft, including one that was flown twice, which might be used much sooner. Indeed, the firm's immediate goal is finding ways to get passengers into orbit. Before this is possible the spaceships will need to be refurbished and modernised. However, Excalibur will attempt to preserve many of their "workhorse" components, including the heat shield, parachute system, solid rocket motors, and an escape system that can jettison a crew to safety if a rocket malfunctions. Excalibur aims to offer its flights to a range of customers, including NASA, which will retire the space shuttle this year. To fill the void, the agency will pay for seats on Russian Soyuz capsules, but private companies may eventually be hired to take over these trips. Excalibur could also get funding from NASA. It is set to issue awards this year to private firms working on ways to transport astronauts to and from low-Earth orbit. With funding Excalibur could have a vehicle flying within three years, Chiao says. http://www.newscientist.com/article/dn19949-fledgling-space-firm-will-use-old-soviet-gear.html


Most detailed image of night sky unveiled

• 18:30 11 January 2011 by David Shiga, Seattle

Images of the northern and southern hemispheres of our galaxy (bottom) reveal "walls" of galaxies that are the largest known structures in the universe. Zooming in on a patch of sky in the southern hemisphere reveals the spiral galaxy M33 (top left). Zooming in further (top centre) reveals a region of intense star formation known as NGC 604 (green swirls, top right) (Image: M. Blanton and SDSS-III) Enlarge image It would take 500,000 high-definition TVs to view it in its full glory. Astronomers have released the largest digital image of the night sky ever made, to be mined for future discoveries. It is actually a collection of millions of images taken since 1998 with a 2.5-metre telescope at Apache Point Observatory in New Mexico. The project, called the Sloan Digital Sky Survey, is now in its third phase, called SDSS-III. Altogether, the images in the newly released collection contain more than a trillion pixels of data, covering a third of the sky in great detail. "This is one of the biggest bounties in the history of science," says SDSS team member Mike Blanton of New York University in New York . "This data will be a legacy for the ages." Biggest 3D map Data released previously by the survey has already led to many advances, including the discovery of tiny, dim galaxies orbiting the Milky Way, and maps of the large-scale structure of the universe. The third phase of the survey started in 2008. It is now focusing on measuring the light spectra of objects seen in the huge image. One project within the survey aims to measure spectra for more than a million galaxies. These spectra reveal how far away the galaxies are, and by measuring so many of them, astronomers will create the biggest 3D map of the universe yet. Analysing the map will allow them to probe the nature of the mysterious dark energy that is thought to be accelerating the expansion of space. Another SDSS-III project will take spectra of thousands of stars to discover planets and brown dwarfs in orbit around them. You can browse images from SDSS here. The results were announced on Tuesday at a meeting of the American Astronomical Society in Seattle, Washington. http://www.newscientist.com/article/dn19948-most-detailed-image-of-night-sky- unveiled.html?full=true&print=true
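The "500,000 high-definition TVs" figure is consistent with the trillion-pixel total quoted above (a quick editorial check, assuming a standard 1080p panel):

```python
total_pixels = 1e12           # "more than a trillion pixels of data"
hd_panel = 1920 * 1080        # pixels in a 1080p screen

print(f"{total_pixels / hd_panel:,.0f} screens")   # ~482,000 -- roughly the 500,000 quoted
```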


Thunderstorms caught making antimatter

• Updated 16:44 12 January 2011 by David Shiga

Thunderstorms have been caught producing one of the most mysterious substances in the universe: antimatter. The discovery could further our understanding of the murky physics of lightning production. NASA's Fermi spacecraft seems to have been hit by the antimatter counterpart to electrons – positrons – emanating from thunderstorms on Earth. Thunderstorms emit gamma rays, known as terrestrial gamma ray flashes (TGFs), although what causes them is still a mystery. While observing these flashes, Fermi also detected a separate set of gamma rays with an energy of 511 kiloelectronvolts. These rays were produced when a barrage of positrons struck the spacecraft's detectors and were annihilated by making contact with electrons there. "These signals are the first direct evidence that thunderstorms make antimatter particle beams," said Michael Briggs, a member of the Fermi team at the University of Alabama in Huntsville. Antimatter source Where does the antimatter come from? The Fermi scientists believe that gamma rays produced in thunderstorms may spawn positrons and electrons when they hit atoms in Earth's atmosphere. Lightning and TGFs are both thought to be connected to strong electric fields in storm clouds, but the exact processes that trigger these phenomena are not well understood, nor is their relationship with each other. "It's a little bit premature to say exactly what the implications of this [discovery] are going to be going forward, but I'm very confident that it's an important piece of the puzzle," says Steven Cummer of Duke University in Durham, North Carolina, who was not involved in the work. Briggs presented the discovery yesterday at a meeting of the American Astronomical Society in Seattle, Washington. The research will be published in Geophysical Research Letters. When this article was first posted, the second sentence of the fifth paragraph read: "The Fermi scientists believe that TGFs produced in thunderstorms may convert atoms in Earth's atmosphere into positrons and electrons." http://www.newscientist.com/article/dn19943-thunderstorms-caught-making-antimatter.html
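The 511 kiloelectronvolt figure is the fingerprint of electron-positron annihilation: each annihilation photon carries away one electron rest mass of energy, m_e c^2. A one-line check with standard constants:

```python
m_e = 9.1093837e-31     # electron mass, kg
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

print(f"{m_e * c**2 / eV / 1e3:.0f} keV")   # ~511 keV, the line Fermi's detectors recorded
```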


Smallest planet is nearly Earth-sized

• 22:37 10 January 2011 by David Shiga, Seattle

Rocky Kepler-10b (at bottom in this illustration) is too close to its host star to support life as we know it (Image: NASA)

Astronomers have found the smallest planet outside our solar system yet. The alien world is just 1.4 times as wide as Earth, but it is far too hot to host life as we know it. NASA's Kepler space telescope detected the planet, called Kepler-10b, indirectly by observing how it regularly dimmed its parent star when it passed between the star and Earth. The amount of dimming indicates that the planet is 1.4 times the width of Earth, making it smaller than the previous record holder, COROT-7b, which is about 1.7 times Earth's size. The team used ground-based telescopes to observe the wobble that the planet gravitationally induces in its parent star. That revealed that the planet is 4.6 times as massive as Earth.
Rocky world
Its density is about 8.8 times that of water, meaning it must be made mostly of rock and metal, like Earth. COROT-7b may well be a rocky world, too. It has the same density as Earth – about 5.5 times that of water. But that measurement is less certain, so COROT-7b could instead be much less dense, with up to 50 per cent of its mass made up by water ice, says Kepler deputy science team leader Natalie Batalha of NASA's Ames Research Center in Moffett Field, California. There is less wiggle room in the density of Kepler-10b, she says.


The planet's star is very similar in size and mass to the sun. But the planet orbits it at less than 4 per cent of Mercury's distance from the sun, making the planet's surface way too hot to support life as we know it. Intense radiation from the star probably also prevents the planet from holding onto an atmosphere.

Molten oceans
One side of the planet is likely permanently facing the star, heating it to around 1400 °C, "far in excess of the temperature of typical lava flows on Earth", says Batalha. "Along that day side, we expect to find oceans of molten material." Planet hunter Geoff Marcy of the University of California in Berkeley, who was not involved in the discovery, said it is "the first definitive rocky planet ever found" outside the solar system. Because it is a major milestone on the way to finding planets like Earth that could host life, Marcy said Kepler-10b "will be marked as among the most profound scientific discoveries in human history". Batalha announced the result at a press conference in Seattle, Washington, at a meeting of the American Astronomical Society. http://www.newscientist.com/article/dn19937-smallest-planet-is-nearly-earthsized.html?full=true&print=true
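The density quoted above follows from the two measurements described in the story: the transit depth gives the radius (1.4 Earth radii), the stellar wobble gives the mass (4.6 Earth masses), and density scales as mass over radius cubed. A rough check using round numbers (an editorial illustration):

```python
earth_density = 5.51      # mean density of Earth in g/cm^3, about 5.5x water

mass = 4.6                # Kepler-10b, in Earth masses
radius = 1.4              # Kepler-10b, in Earth radii

density = earth_density * mass / radius**3
print(f"about {density:.1f} g/cm^3, i.e. roughly {density:.0f}x the density of water")
```

The round numbers give roughly nine times the density of water, close to the 8.8 figure quoted above from the Kepler team's more precise fit.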


Glasses-free 3D TV tries to broaden out its appeal

• 12 January 2011 by Jeff Hecht • Magazine issue 2794.

Nintendo launch the 3DS, but just one gamer at a time can see the 3D effect (Image: EPA/Mike Nelson/Corbis) TELEVISION makers have shifted their sights from HD to 3D. In 2010, the first 3D TVs from major manufacturers went on sale, and a spate of 3D channels launched around the world. However, many viewers dislike the special glasses that existing 3D TVs require. Over half of the people asked to watch 30 minutes of 3D TV found the glasses "a hassle", according to a recent report by the Cable and Telecommunications Association for Marketing, a non-profit cable TV industry body in National Harbor, Maryland. So the emphasis in 2011 is likely to be on "autostereoscopic" displays. These aim separate images at the viewer's right and left eye, with no need for special glasses. Unlike the 3D TVs already on the market, many of the first glasses-free devices deliver 3D images to just one viewer at a time. That's the case with the Nintendo 3DS, a hand-held gaming unit due for release in Japan in February. It works by interlacing vertical strips of the images for the left and right eye. To do this it has an array of slits - known as a parallax barrier - in front of the screen to ensure that each eye sees only the strips it is meant to, as long as the user stays within a narrow viewing area. That's not a drawback for a hand-held device, but it might be for the first glasses-free 3D TVs, which Toshiba released in Japan in December. They are available with 30-centimetre (12-inch) and 51-centimetre screens, but neither model produces a 3D effect outside a small viewing area. iPont International, based in Budapest, Hungary, has a different solution. At the Consumer Electronics Show in Las Vegas, Nevada, this week, it demonstrated a 140-centimetre glasses-free 3D display developed by Tridelity in Jersey City, New Jersey. Tridelity's screens have multiple parallax barriers and so can send light from pairs of images in five directions at once, considerably widening the viewing area so that at least five people can enjoy the 3D experience simultaneously.


iPont has already begun deploying the screens in cinema lobbies, says Glen Harper, the company's business development director. "Far back from the screen, it looks like 2D," he says, "but when people get close enough to see the pop-out effect, they say it's cool." Tridelity's technology has its own drawbacks, however. Parallax barriers reduce the brightness of the display since they block some of the light, says Doug Lanman of the Massachusetts Institute of Technology Media Lab. Dividing the light five ways, as Tridelity's screens do, will only compound that issue, so that viewers will need to be in dark surroundings to see the 3D effect. Lanman and his team are attempting to minimise this problem with a parallax barrier that adapts to each video frame to block only the minimum amount of light to achieve a 3D effect. For instance, if the edge of an object is lying horizontally, the left and right-eye images are essentially identical, so the barrier in this part of the image can be temporarily switched off. The same applies to any featureless areas of the image. With this approach, Lanman says they can build a display three to five times as bright as other parallax barrier displays. Even so, there are still questions to be answered before glasses-free 3D displays for multiple viewers are ready for market. Perhaps the most important challenge is to make it easy for viewers to find and keep to the sweet spots where the 3D effect springs into life. In the intervening zones, images get sent to the wrong eyes, making the picture look garbled, warns 3D consultant John Merritt of Williamsburg, Massachusetts. Despite the buzz around autostereoscopic displays, he sounds a pessimistic note. "Right now, I know of no good substitute for glasses." http://www.newscientist.com/article/mg20927943.900-glassesfree-3d-tv-tries-to-broaden-out-its-appeal.html
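The column interlacing that parallax-barrier screens such as the 3DS rely on is easy to sketch: alternate vertical strips of the frame come from the left- and right-eye images, and the barrier's slits ensure each eye sees only its own set of columns from the sweet spot. A minimal illustration (an editorial sketch, not any manufacturer's pipeline):

```python
import numpy as np

def interlace_columns(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two rendered views column by column: even columns carry the
    left-eye image, odd columns the right-eye image."""
    assert left.shape == right.shape
    frame = left.copy()
    frame[:, 1::2] = right[:, 1::2]
    return frame

left = np.zeros((4, 8), dtype=int)     # stand-ins for the two rendered views
right = np.ones((4, 8), dtype=int)
print(interlace_columns(left, right))  # columns alternate 0, 1, 0, 1, ... across the frame
```

From the sweet spot the barrier hides the odd columns from the left eye and the even columns from the right eye; outside that zone the strips reach the wrong eyes, which is why the viewing area described above is so narrow.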


Mimic-bots learn to do as we do

• 11 January 2011 by Helen Knight • Magazine issue 2794

Now follow me (Image: David Hecker/AFP/Getty) A robot inspired by human mirror neurons can interpret human gestures to learn how it should behave A HUMAN and a robot face each other across the room. The human picks up a ball, tosses it towards the robot, and then pushes a toy car in the same direction. Confused by two objects coming towards it at the same time, the robot flashes a question mark on a screen. Without speaking, the human makes a throwing gesture. The robot turns its attention to the ball and decides to throw it back. In this case the robot's actions were represented by software commands, but it will be only a small step to adapt the system to enable a real robot to infer a human's wishes from their gestures. Developed by Ji-Hyeong Han and Jong-Hwan Kim at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, the system is designed to respond to the actions of the person confronting it in the same way that our own brains do. The human brain contains specialised cells, called mirror neurons, that appear to fire in the same way when we watch an action being performed by others as they do when we perform the action ourselves. It is thought that this helps us to recognise or predict their intentions. To perform the same feat, the robot observes what the person is doing, breaks the action down into a simple verbal description, and stores it in its memory. It compares the action it observes with a database of its own actions, and generates a simulation based on the closest match. The robot also builds up a set of intentions or goals associated with an action. For example, a throwing gesture indicates that the human wants the robot to throw something back. The robot then connects the action "throw" with the object "ball" and adds this to its store of knowledge. When the memory bank contains two possible intentions that fit the available information, the robot considers them both and determines which results in the most positive feedback from the human - a smile or a nod, for example. If the robot is confused by conflicting information, it can request another gesture from the human. It
also remembers details of each interaction, allowing it to respond more quickly when it finds itself in a situation it has encountered before. The system should allow robots to interact more effectively with humans, using the same visual cues we use. "Of course, robots can recognise human intentions by understanding speech, but humans would have to make constant, explicit commands to the robot," says Han. "That would be pretty uncomfortable." Socially intelligent robots that can communicate with us through gesture and expression will need to develop a mental model of the person they are dealing with in order to understand their needs, says Chris Melhuish, director of the Bristol Robotics Laboratory in the UK. Using mirror neurons and humans' unique mimicking ability as an inspiration for building such robots could be quite interesting, he says. Han now plans to test the system on a robot equipped with visual and other sensors to detect people's gestures. He presented his work at the Robio conference in Tianjin, China, in December.
Human-machine teamwork
As the population of many countries ages, elderly people may share more of their workload with robotic helpers or colleagues. In an effort to make such interactions as easy as possible, Chris Melhuish and colleagues at the Bristol Robotics Laboratory in the UK are leading a Europe-wide collaboration called Cooperative Human Robotic Interaction Systems that is equipping robots with software that recognises an object they are picking up before they hand it to a person. They also have eye-tracking technology that they use to monitor what humans are paying attention to. The goal is to develop robots that can learn to safely perform shared tasks with people, such as stirring a cake mixture as a human adds milk. http://www.newscientist.com/article/mg20927943.700-mimicbots-learn-to-do-as-we-do.html
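The interaction loop described above (observe a gesture, match it against remembered actions and intentions, prefer the interpretation with the best past feedback, and ask for another gesture when nothing fits) can be sketched in a few lines. The class and names below are invented for illustration; this is not KAIST's software.

```python
class MimicBot:
    """Toy sketch of the gesture-to-intention loop described in the article."""

    def __init__(self):
        # memory maps an observed gesture to candidate intentions and a feedback score
        self.memory = {"throw": {"throw ball back": 1.0},
                       "push":  {"push car back": 1.0}}

    def infer_intention(self, gesture, ask_human):
        candidates = self.memory.get(gesture, {})
        if not candidates:
            # confused: request another gesture, as the robot in the demo does
            return self.infer_intention(ask_human("?"), ask_human)
        # when several intentions fit, prefer the one with the best past feedback
        return max(candidates, key=candidates.get)

    def update(self, gesture, intention, feedback):
        # positive feedback (a smile or a nod) reinforces that interpretation
        self.memory.setdefault(gesture, {})[intention] = feedback


bot = MimicBot()
print(bot.infer_intention("wave", ask_human=lambda prompt: "throw"))  # -> "throw ball back"
```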


Why social networks are sucking up more of your time

• 14:18 11 January 2011 by Jamie Condliffe

Think Facebook sucks up your time? Well, it's only going to get worse, according to a study that suggests users become more active in online social networks the bigger they become. Researchers at City University of Hong Kong considered behaviour in two online networks: the Chinese blogging site Sina and a peer-to-peer file-sharing system called Tianwang. By comparing the growth of these networks with user activity, they were able to settle a long-standing dispute. When it comes to the growth of online social networks, there are two competing schools of thought. One assumes that networks grow in a linear fashion: in other words, the activity of each user doesn't change much and so network activity grows in proportion with the number of users. The second theory suggests that network growth is non-linear: as a network grows in size, users also use it more, causing total network activity to increase far more quickly than the linear model would predict. The team found that their results overwhelmingly support the non-linear hypothesis. They observed that both blogging and peer-to-peer file-sharing sites see users becoming far more active on the networks the larger they get. In the case of the peer-to-peer network, the team found that if the number of users doubled, their activity rose by a factor of 3.16. They also discovered that the bigger the inequality in activity between users, the quicker the network will grow. So much to do, so little time "It makes sense that the total activity in a social system would be non-linear, because in a social system, the more people there are, the more things there are to do," agrees Mike Thelwall, head of the Statistical Cybermetrics Research Group at the University of Wolverhampton, UK. "This would probably transfer to Facebook and Twitter, too. You might start to rely on Facebook or Twitter if a high proportion of your friends were on them." This effect isn't just down to a small number of very active users, either. The team has found that the increased activity follows a pattern known as a time-invariant power law – one of the upshots of which is that all users tend to become more active in the network. So there's no point deluding yourself: Facebook definitely is sucking up more of your time than ever before. These results don't just explain why many of us feel increasingly compelled to check our Twitter timelines: they're of practical benefit, too. Website designers rely on accurate predictions of network growth to understand the demands placed on their infrastructures, and work such as this can help them make more reliable forecasts. Journal reference: arxiv.org/abs/1101.1042 http://www.newscientist.com/article/dn19939-why-social-networks-are-sucking-up-more-of-your- time.html?full=true&print=true
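The doubling figure quoted above pins down the exponent of the power law: if total activity grows as N to the power beta for N users, then doubling N multiplies activity by 2 to the power beta, so 2^beta = 3.16 gives beta of about 1.66 (a strictly linear network would have beta = 1). A quick check:

```python
import math

growth_when_users_double = 3.16
beta = math.log(growth_when_users_double, 2)
print(f"beta = {beta:.2f}")     # ~1.66: activity grows as N**1.66, i.e. faster than linear

# a tenfold increase in users would then imply roughly a 46-fold increase in activity
print(f"{10 ** beta:.0f}x")
```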


New Google Goggles app that solves Sudoku puzzles

16:00 11 January 2011 Technology Niall Firth, technology editor

(Image: Google) You could say it rather defeats the game's purpose. But Google has unveiled a new version of its Goggles app that it says can solve any Sudoku puzzles just by snapping a picture of it. The internet giant has produced a short video which it has posted on its blog and YouTube which shows how the puzzle-solving application works. The firm sent one of its software engineers, simply equipped with a Google Nexus S smartphone, to take on 2009 Sudoku champion Tammy McLeod. Even with a decent headstart, the human competitor was well and truly trounced by the Sudoku novice equipped with the app. The user simply takes a photo of the puzzle with the smartphone's camera and text recognition software recognises the numbers and works out the answer within seconds, before filling in the grid's squares. Unsurprisingly, Google Goggles is banned from Sudoku competitions. The new 1.3 version of Google Goggles can also scan barcodes 'almost instantly', the firm's blog claims. After opening Goggles and hovering over the barcode the phone vibrates when it has recognised the product and will bring back web reviews and price comparison sites. It is also capable of recognising some adverts in magazines. The user takes a photo of the advert and the phone's web browser will bring back search results about the product or brand, the firm's blog says. While it might appeal to those who are short of patience, or who cannot bear being stuck on one particularly fiendish grid, the new Googles app shows how smartphone cameras and text recognition software are being used to create a new generation of 'augmented reality' apps. Last month an application called WordLens hit the headlines for its claim that it could instantly translate any signs from English into Spanish, and vice versa. While the application was slightly hit-and-miss in practice, its ability to recognise text, process it and then paste it over the subject in the phone's camera was still a remarkable step forward for handheld computer vision applications.

http://www.newscientist.com/blogs/shortsharpscience/2011/01/google-goggles-app-that-solves.html
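Once the digits have been read off the grid, the "works out the answer within seconds" step is routine constraint solving. Google has not published Goggles' solver, but a plain backtracking search like the sketch below handles typical puzzles in well under a second.

```python
def solve(grid):
    """Solve a 9x9 Sudoku in place; empty cells are 0. Returns True when solved."""
    for i in range(81):
        r, c = divmod(i, 9)
        if grid[r][c]:
            continue
        used = ({grid[r][k] for k in range(9)} |
                {grid[k][c] for k in range(9)} |
                {grid[r - r % 3 + dr][c - c % 3 + dc] for dr in range(3) for dc in range(3)})
        for digit in range(1, 10):
            if digit not in used:
                grid[r][c] = digit
                if solve(grid):
                    return True
        grid[r][c] = 0      # nothing fits in this cell: undo and backtrack
        return False
    return True             # no empty cell remains
```

The grid is passed as nine lists of nine integers, with 0 marking the blank cells read from the photo.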


Smart contact lenses for health and head-up displays

• 10 January 2011 by Duncan Graham-Rowe • Magazine issue 2794.

Eye strain? Triggerfish will know (Image: Sensimed)

Lenses that monitor eye health are on the way, and in-eye 3D image displays are being developed too – welcome to the world of augmented vision THE next time you gaze deep into someone's eyes, you might be shocked at what you see: tiny circuits ringing their irises, their pupils dancing with pinpricks of light. These smart contact lenses aren't intended to improve vision. Instead, they will monitor blood sugar levels in people with diabetes or look for signs of glaucoma. The lenses could also map images directly onto the field of view, creating head-up displays for the ultimate augmented reality experience, without wearing glasses or a headset. To produce such lenses, researchers are merging transparent, eye-friendly materials with microelectronics. In 2008, as a proof of concept, Babak Parviz at the University of Washington in Seattle created a prototype contact lens containing a single red LED. Using the same technology, he has now created a lens capable of monitoring glucose levels in people with diabetes. It works because glucose levels in tear fluid correspond directly to those found in the blood, making continuous measurement possible without the need for thumb pricks, he says. Parviz's design calls for the contact lens to send this information wirelessly to a portable device worn by diabetics, allowing them to manage their diet and medication more accurately. Lenses that also contain arrays of tiny LEDs may allow this or other types of digital information to be displayed directly to the wearer through the lens. This kind of augmented reality has already taken off in cellphones, with countless software apps superimposing digital data onto images of our surroundings, effectively blending the physical and online worlds. Making it work on a contact lens won't be easy, but the technology has begun to take shape. Last September, Sensimed, a Swiss spin-off from the Swiss Federal Institute of Technology in Lausanne, launched the very first commercial smart contact lens, designed to improve treatment for people with glaucoma. The disease puts pressure on the optic nerve through fluid build-up, and can irreversibly damage vision if not properly treated. Highly sensitive platinum strain gauges embedded in Sensimed's Triggerfish lens record changes in the curvature of the cornea, which correspond directly to the pressure inside the eye, says CEO Jean-Marc Wismer. The lens transmits this information wirelessly at regular intervals to a portable recording device worn by the patient, he says.


Like an RFID tag or London's Oyster travel cards, the lens gets its power from a nearby loop antenna - in this case taped to the patient's face. The powered antenna transmits electricity to the contact lens, which is used to interrogate the sensors, process the signals and transmit the readings back. Each disposable contact lens is designed to be worn just once for 24 hours, and the patient repeats the process once or twice a year. This allows researchers to look for peaks in eye pressure which vary from patient to patient during the course of a day. This information is then used to schedule the timings of medication. "The timing of these drugs is important," Wismer says.
Parviz, however, has taken a different approach. His glucose sensor uses sets of electrodes to run tiny currents through the tear fluid and measures them to detect very small quantities of dissolved sugar. These electrodes, along with a computer chip that contains a radio frequency antenna, are fabricated on a flat substrate made of polyethylene terephthalate (PET), a transparent polymer commonly found in plastic bottles. This is then moulded into the shape of a contact lens to fit the eye. Parviz plans to use a higher-powered antenna to get a better range, allowing patients to carry a single external device in their breast pocket or on their belt. Preliminary tests show that his sensors can accurately detect even very low glucose levels. Parviz is due to present his results later this month at the IEEE MEMS 2011 conference in Cancún, Mexico. "There's still a lot more testing we have to do," says Parviz.
In the meantime, his lab has made progress with contact lens displays. They have developed both red and blue miniature LEDs - leaving only green for full colour - and have separately built lenses with 3D optics that resemble the head-up visors used to view movies in 3D. Parviz has yet to combine both the optics and the LEDs in the same contact lens, but he is confident that even images so close to the eye can be brought into focus. "You won't necessarily have to shift your focus to see the image generated by the contact lens," says Parviz. It will just appear in front of you, he says. The LEDs will be arranged in a grid pattern, and should not interfere with normal vision when the display is off. For Sensimed, the circuitry is entirely around the edge of the lens (see photo). However, both have yet to address the fact that wearing these lenses might make you look like the robots in the Terminator movies. False irises could eventually solve this problem, says Parviz. "But that's not something at the top of our priority list," he says.
http://www.newscientist.com/article/mg20927943.800-smart-contact-lenses-for-health-and-headup-displays.html?full=true&print=true
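The Triggerfish workflow described above comes down to logging a pressure-related signal over 24 hours and then picking out each patient's pressure peaks so medication can be timed around them. The sketch below illustrates only that final peak-finding step; the sampling interval, the readings and the peak criterion are invented for illustration and are not Sensimed's actual data or processing.

# Illustrative sketch only: find peaks in a 24-hour intraocular-pressure proxy
# recorded at regular intervals, as a stand-in for the analysis described above.

def find_peaks(samples, min_prominence=2.0):
    """Return indices of local maxima that rise at least `min_prominence`
    above the lower of their two neighbouring values."""
    peaks = []
    for i in range(1, len(samples) - 1):
        left, here, right = samples[i - 1], samples[i], samples[i + 1]
        if here > left and here >= right and here - min(left, right) >= min_prominence:
            peaks.append(i)
    return peaks

if __name__ == "__main__":
    # Hypothetical readings, one every 30 minutes over 24 hours (48 samples),
    # in arbitrary sensor units rather than mmHg.
    readings = [14, 14, 15, 15, 16, 18, 21, 19, 16, 15, 15, 14,
                14, 15, 15, 16, 16, 17, 17, 16, 15, 15, 14, 14,
                14, 14, 15, 16, 18, 22, 20, 17, 15, 14, 14, 14,
                14, 14, 15, 15, 16, 16, 15, 15, 14, 14, 14, 14]
    for i in find_peaks(readings):
        hours = i * 0.5
        print(f"pressure peak around {hours:04.1f} h, value {readings[i]}")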


A fat tummy shrivels your brain

• 08 January 2011 • Magazine issue 2794.

HAVING a larger waistline may shrink your brain. Obesity is linked to an increased risk of type 2 diabetes, which is known to be associated with cognitive impairment. So Antonio Convit at the New York University School of Medicine wanted to see what impact obesity had on the physical structure of the brain. He used magnetic resonance imaging to compare the brains of 44 obese individuals with those of 19 lean people of similar age and background. He found that obese individuals had more water in the amygdala - a part of the brain involved in eating behaviour. He also saw that obese individuals had smaller orbitofrontal cortices, a region important for impulse control and also involved in feeding behaviour (Brain Research, in press). "It could mean that there are less neurons, or that those neurons are shrunken," says Convit. Eric Stice at Oregon Research Institute, Eugene, thinks that the findings strengthen the "slippery slope" theory of obesity. "If you overeat, it appears to result in neural changes that increase the risk for future overeating," he says. Obesity is associated with a constant, low-level inflammation, which Convit thinks explains the change in brain size.

http://www.newscientist.com/article/mg20927943.000-a-fat-tummy-shrivels-your-brain.html?full=true&print=true


We can feed 9 billion people in 2050

• 01:00 12 January 2011 by Debora Mackenzie

The 9 billion people projected to inhabit the Earth by 2050 need not starve in order to preserve the environment, says a major report on sustainability out this week. Agrimonde describes the findings of a huge five-year modelling exercise by the French national agricultural and development research agencies, INRA and CIRAD. It is the second report on sustainability launched this week to provide a healthy dose of good news. The French team began with a goal – 3000 calories per day for everyone, including 500 from animal sources – then ran a global food model repeatedly, with and without environmental limits on farming. The aim was to see how the calorie goal could be achieved. "We found three main conditions," says Hervé Guyomard of INRA. "The biggest surprise was that some regions will depend even more on imports", even as their production rises. This, he says, means that we will need to find ways to counter excessive fluctuations in world prices so that imports are not hindered.
Waste not
In addition, says Guyomard, "the rich must stop consuming so much". He points out that food amounting to 800 calories is lost per person each day as waste in richer nations. The model suggested that realistic yield increases could feed everyone, even as farms take measures to protect the environment, such as preserving forests or cutting down on the use of fossil fuels. The key will be to tailor detailed solutions to different regions. These are the main challenges for research, says Guyomard. For example, high-yield farming typically means large expanses of one crop, which encourages crop diseases and requires more pesticides. Instead, researchers could find ways for farmers to raise yields while maintaining biodiversity. Guyomard says food scientists will need to organise globally, as climate scientists have done.
http://www.newscientist.com/article/dn19947-we-can-feed-9-billion-people-in-2050.html
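To put the article's figures in perspective, a back-of-envelope calculation is easy to write down. The 3000-calorie target, the 500 animal-source calories and the 800 calories wasted per person per day in richer nations come from the text; the size of the "rich" population is an assumption made purely for illustration.

# Back-of-envelope arithmetic using figures quoted in the article, plus one
# assumption: the number of people living in richer nations (not given in the text).

POPULATION_2050 = 9e9     # people (from the article)
TARGET_KCAL = 3000        # kcal per person per day (from the article)
ANIMAL_KCAL = 500         # of which animal-source kcal (from the article)
WASTE_KCAL_RICH = 800     # kcal wasted per person per day in rich nations (from the article)
RICH_POPULATION = 1.5e9   # assumed, for illustration only

total_need = POPULATION_2050 * TARGET_KCAL    # kcal/day the world must supply
animal_need = POPULATION_2050 * ANIMAL_KCAL   # kcal/day from animal sources
waste = RICH_POPULATION * WASTE_KCAL_RICH     # kcal/day currently lost as waste

print(f"Total supply needed : {total_need:.2e} kcal/day")
print(f"  of which animal   : {animal_need:.2e} kcal/day")
print(f"Rich-world waste    : {waste:.2e} kcal/day "
      f"({100 * waste / total_need:.1f}% of the 2050 target)")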


Did magma rain on the early Earth?

• 08 January 2011 • Magazine issue 2794.

IF MOLTEN rock once rained onto Earth's surface, it might explain puzzling differences between the chemical compositions of our planet and the moon. After a Mars-sized body struck the infant Earth, a common atmosphere of rock vapour enveloped the resulting magma-covered Earth and its disc of orbiting magma. This atmosphere should have thoroughly mixed material from Earth and the disc, which later formed the moon. But while the Earth and the moon have the same relative abundance of different oxygen isotopes, some measurements suggest that moon rocks are iron-rich and magnesium-poor compared with those on Earth. Magma rain could explain the discrepancy. A team led by Kaveh Pahlevan at Yale University calculates that as rock vapour rose from Earth's hot, roiling surface, the magnesium oxide it contained would have condensed into droplets and rained back to the surface more readily than the more volatile iron oxide, which could then have mixed into the moon-forming disc (Earth and Planetary Science Letters, DOI: 10.1016/j.epsl.2010.10.036).
http://www.newscientist.com/article/mg20927943.500-did-magma-rain-on-the-early-earth.html?full=true&print=true


Foxes zero in on prey via Earth's magnetic field

• 00:01 12 January 2011 by Michael Marshall • Magazine issue 2795.

Magnetised towards prey (Image: Judy Wantulok/Getty) It sounds like something a guided missile would do. Foxes seem to zero in on prey using Earth's magnetic field. They are the first animal thought to use the field to judge distance rather than just direction. Hynek Burda of the University of Duisburg-Essen in Essen, Germany, noticed that the foxes he was watching in the Czech Republic almost always jumped on their prey in a north-easterly direction. Given that cows position themselves using Earth's magnetic field, he wondered if something similar was at work. Foxes jump high into the air before dropping onto prey. Burda's team found that when the foxes could see their prey they jumped from any direction, but when prey were hidden, they almost always jumped north-east. Such attacks were successful 72 per cent of the time, compared with 18 per cent of attacks in other directions. All observers saw the same thing, but Burda remained baffled until he spoke to John Phillips at Virginia Tech in Blacksburg. Phillips has suggested that animals might use Earth's magnetic field to measure distance. The pair think a fox hunts best if it can jump the same distance every time. Burda suggests that it sees a ring of "shadow" on its retina that is darkest towards magnetic north, and just like a normal shadow, always appears to be the same distance ahead. The fox moves forward until the shadow lines up with where the prey's sounds are coming from, at which point it is a set distance away. The idea is "highly speculative but not implausible", says Wolfgang Wiltschko of the University of Frankfurt, Germany.
Journal reference: Biology Letters, DOI: 10.1098/rsbl.2010.1145
http://www.newscientist.com/article/dn19945-foxes-zero-in-on-prey-via-earths-magnetic-field.html?full=true&print=true


Would a placebo work for you?

• 22:00 11 January 2011 by Jessica Hamzelou • Magazine issue 2795.

I need real drugs, please (Image: Michael Hitoshi/Getty) Could you be tricked into believing a sugar pill will ease your pain? A brain scan could reveal whether you would respond to a placebo or not. Tor Wager at the University of Colorado, Boulder, and colleagues took another look at two studies that involved scanning the brains of people given a painful stimulus. Each consisted of two trials where volunteers were given an ineffective cream to ease the pain. In one trial they were told it was a fake, in the other an analgesic. When comparing brain responses from each trial, the group identified several brain structures that were more or less active before and during the painful stimulus in those who experienced a placebo effect. In placebo responders, activity dropped in areas processing pain, but increased in areas involved in emotion. This suggests that, rather than blocking pain signals into the brain, the placebo is changing the interpretation of pain.
The meaning of pain
In responders, "a lot of the action happens when people are expecting pain", Wager says. "What makes a placebo responder is the ability to re-evaluate the meaning of pain before it happens." His team created a map of relevant brain areas using information from 35 of the 47 participants. With this map they were better able to predict how much the placebo would diminish pain in the remaining participants. The map could be useful for working out how much of a drug's effect is due to a placebo response in clinical trials and for identifying good candidates for placebo therapy. "It's difficult for experimental drugs to prove their superiority to placebo treatment and predicting placebo responders may help to deal with this challenge," says Luana Colloca at the National Institutes of Health in Bethesda, Maryland.
Journal reference: Journal of Neuroscience, DOI: 10.1523/jneurosci.3420-10.2011
http://www.newscientist.com/article/dn19944-would-a-placebo-work-for-you.html?full=true&print=true
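The study's logic - build a predictive map from 35 participants, then test it on the 12 who were held out - is a standard held-out validation. The sketch below illustrates that logic generically with synthetic data and ordinary least squares; it is not the authors' analysis pipeline, and the region count and noise level are arbitrary assumptions.

# Illustrative sketch of the held-out-prediction logic described above:
# fit a linear map from (synthetic) brain-region activity to placebo analgesia
# on 35 participants, then predict for the 12 held-out participants.
# This is NOT the authors' analysis pipeline; data here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, n_regions = 35, 12, 6          # 47 participants in total
true_weights = rng.normal(size=n_regions)       # pretend ground-truth relationship

def simulate(n):
    """Synthetic region activities and noisy analgesia scores."""
    activity = rng.normal(size=(n, n_regions))
    analgesia = activity @ true_weights + rng.normal(scale=0.5, size=n)
    return activity, analgesia

X_train, y_train = simulate(n_train)
X_test, y_test = simulate(n_test)

# Ordinary least squares on the training group (with an intercept column).
A_train = np.column_stack([np.ones(n_train), X_train])
coef, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

# Predict analgesia for the held-out participants and check the correlation.
A_test = np.column_stack([np.ones(n_test), X_test])
y_pred = A_test @ coef
r = np.corrcoef(y_test, y_pred)[0, 1]
print(f"held-out correlation between predicted and observed analgesia: r = {r:.2f}")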


Chess grandmasters use twice the brain

• 14:31 11 January 2011 by Nora Schultz • Magazine issue 2795.

Brain booster (Image: Ronnie Kaufman/Getty) It may take years of hard work to become a chess grandmaster, but it gives a real boost to the brain – for working out chess problems, at least. It seems expert chess players use both sides of their brain to process chess tasks, rather than just one. Merim Bilalic at the University of Tübingen in Germany used fMRI to scan the brains of eight international chess players and eight novices while they identified either geometrical shapes or whether the pieces on a chess board were in a check situation. The expert players were quicker at solving the chess problem, activating areas on both sides of their brains as they did so. The novices used just the left side. Bilalic had expected the expert players to use a faster version of the processing mechanism used by novices. "But once the usual brain structures were engaged, the experts utilised additional complementary structures in the other half, to execute processes in parallel," he says. This parallel processing didn't occur when the expert players carried out the geometry task, suggesting that it is limited to practised skills. "It shows that there really is no short cut to expertise," says Bilalic.
Journal reference: PLoS One, in press
http://www.newscientist.com/article/dn19940-chess-grandmasters-use-twice-the-brain.html?full=true&print=true


Mind gym: Putting meditation to the test

• 11 January 2011 by Michael Bond • Magazine issue 2794.

Time spent meditating is time well spent (Image: Comstock/Getty) Mystics will tell you that meditation transforms the mind and soothes the soul. But what does science have to say? MANY people see meditation as an exotic form of daydreaming, or a quick fix for a stressed-out mind. My advice to them is, try it. It's difficult, at least to begin with. On my first attempt, instead of concentrating on my breathing and letting go of anything that came to mind as instructed by my cheery Tibetan teacher, I got distracted by a string of troubled thoughts and then fell asleep. Apparently this is normal for first-timers. Experienced meditators will assure you that it is worth persisting, however. "Training allows us to transform the mind, to overcome destructive emotions and to dispel suffering," says Buddhist monk Matthieu Ricard. "The numerous and profound methods that Buddhism has developed over the centuries can be used and incorporated by anyone. What is needed is enthusiasm and perseverance." It all sounds very rewarding, but what does science have to say on the subject? Stories abound in the media about the transformative potential of meditative practice, but it is only in recent years that empirical evidence has emerged. In the past decade, researchers have used functional magnetic resonance imaging (fMRI) to look at the brains of experienced meditators such as Ricard as well as beginners, and tested the effects of different meditative practices on cognition, behaviour, physical and emotional health and brain plasticity. A real scientific picture of meditation is now coming together. It suggests that meditation can indeed change aspects of your psychology, temperament and physical health in dramatic ways. The studies are even starting to throw light on how meditation works. "Time spent earnestly investigating the nature of your mind is bound to be helpful," says Clifford Saron at the Center for Mind and Brain at the University of California, Davis. And you don't need a Buddhist or spiritualist world view to profit from meditation. "One can be an empiricist [in meditation], just by working with the nature of your experience." Saron should know - he is leading the Shamatha project, one of the most comprehensive scientific studies of meditation ever. In 2007, Saron and a team of neuroscientists and psychologists followed 60 experienced meditators over an intensive three-month meditation retreat in the Colorado Rockies, watching for changes in their mental abilities, psychological health and physiology. The participants practised for at least five hours a day using a method known as focused attention meditation, which involves directing attention on the tactile sensation of breathing (see "How to meditate"). The first paper from the project was published in June 2010 (Psychological Science, vol 21, p 829). Headed by Katherine MacLean at Johns Hopkins University School of Medicine in Baltimore, the study measured the volunteers' attention skills by showing them a succession of vertical lines flashed up on a computer screen. They then had to indicate, by clicking a mouse, whenever there was a line shorter than the
rest. As the retreat progressed, MacLean and her colleagues noted that the volunteers became progressively more accurate and found it increasingly easy to stay focused on the task for long periods.
Other researchers have also linked meditation with improved attention. Last year a team led by Antoine Lutz at the Waisman Laboratory for Brain Imaging and Behavior, which is part of the University of Wisconsin-Madison, reported that after three months of training in focused attention meditation, volunteers were quicker at picking out different tones among a succession of similar ones, implying their powers of sustained concentration had improved (Journal of Neuroscience, vol 29, p 13418). In 2007, Lutz's colleague Heleen Slagter, now at the University of Amsterdam in the Netherlands, published results from a study involving a combination of focused attention and "open monitoring" or mindfulness meditation - which involves the constant monitoring of moment-by-moment experience. After three months of meditation for between 10 and 12 hours a day her subjects showed a decreased "attentional blink", the cognitive processing delay, usually lasting about half a second, that causes people to miss a stimulus such as a number on a screen when it follows rapidly after another (PLoS Biology, vol 5, p e128).
The suggestion that meditation can improve attention is worth considering, given that focus is crucial to so much in life, from the learning and application of skills to everyday judgement and decision-making, or simply concentrating on your computer screen at work without thinking about what you will be eating for dinner. But how does dwelling on your breath for a period each day lead to such a pronounced cognitive change? One possibility is that it involves working memory, the capacity to hold in mind information needed for short-term reasoning and comprehension. The link with meditation was established recently by Amishi Jha at the University of Miami in Coral Gables. She trained a group of American marines to focus their attention using mindfulness meditation and found that this increased their working memory (Emotion, vol 10, p 54). MacLean points out that meditation is partly about observing how our sensory experiences change from moment to moment, which requires us to hold information about decaying sensory traces in working memory.
MacLean and others also believe that meditation training enhances some central cognitive faculty - as yet unknown - that is used in all basic perception tasks. "It's like a muscle that can be used in lots of different ways," she says. Then, once perception becomes less effortful, the brain can direct more of its limited resources to concentration. Backing up this idea, Slagter's measurements of electrical activity in the brain during the attentional blink task revealed that as meditation training progressed, volunteers used fewer resources when processing the first stimulus, meaning they were less likely to get "stuck" on it and miss the second stimulus.
Feeling better
Along with enhancing cognitive performance, meditation seems to have an effect on emotional well-being. A second study from researchers with the Shamatha project, to appear in the journal Emotion, concluded that meditation improves general social and emotional functioning, making study participants less anxious, and more aware of and better able to manage their emotions.
A clue about how this might work comes from the finding that the volunteers also got better at a task in which they had to look at a screen and click a mouse whenever a long line appeared but resist the urge to click at the appearance of shorter lines. This is harder than it sounds, especially as the shorter lines appear infrequently. Lead author Baljinder Sahdra, at the University of California, Davis, reasons that meditation training teaches people to "withhold impulsive reactions to a lot of internal stimuli, some of which can be emotionally intense in nature", adding that this kind of restraint seems to be a key feature of healthy emotion regulation. The notion that by practising meditation people become less emotionally reactive is also reinforced by brain imaging work. A team led by Julie Brefczynski-Lewis at West Virginia University in Morgantown used fMRI to study meditators "in action" and found that the amygdala - which plays a crucial role in processing emotions and emotional memories - was far less active in expert meditators than in novices (Proceedings of the National Academy of Sciences, vol 104, p 11483). The ability to manage one's emotions could also be key to why meditation can improve physical health. Studies have shown it to be an effective treatment for eating disorders, substance abuse, psoriasis and in particular for recurrent depression and chronic pain. Last year, psychologist Fadel Zeidan, at Wake Forest University School of Medicine in Winston-Salem, reported that his volunteers noticed a decreased sensitivity to pain after just a few sessions of mindfulness meditation (Journal of Pain, vol 11, p 199). He believes meditation doesn't remove the sensation of pain so much as teach sufferers to control their emotional reaction
to it and reduce the stress response. He is now using fMRI in an attempt to understand why that helps. "There's something very empowering about knowing you can alleviate some of these things yourself," he says.
The positive effect of meditation on psychological well-being could also explain recent findings from the Shamatha project that regular meditation practice can lead to a significant increase in the activity of telomerase, an enzyme that protects against cellular ageing and which is suppressed in response to psychological stress. The work will appear in Psychoneuroendocrinology.
Emotions may also be at the heart of another benefit of meditation. One of the hottest areas in meditation research is whether the practice can enhance feelings towards others. This arose partly because fMRI studies by Lutz and his team showed that brain circuits linked to empathy and the sharing of emotions - such as the insula and the anterior cingulate cortex - are much more active in long-term meditators than in novices (NeuroImage, vol 47, p 1038). Compassion is a complicated construct that probably involves a host of emotional skills, according to Margaret Kemeny at the University of California, San Francisco. "To be compassionate with someone, first you have to recognise that they are experiencing a negative reaction. Then you have to consider what a beneficial response might be. Then you have to have the motivation to do something about it." In other words, you are unlikely to increase someone's capacity for compassion without improving their emotional balance.
A gym for your mind
In 2009, an institute dedicated to studying the neurobiological roots of empathy and compassion opened at Stanford University in California. The Center for Compassion and Altruism Research and Education, which is funded by a range of interest groups including neuroscientists, Silicon Valley entrepreneurs and the Dalai Lama, has already instigated a clutch of studies. They aim to discover how a special kind of meditation training in which the practitioner focuses on enhancing their altruistic love for others affects the brain, and the extent to which it can cultivate empathic and compassionate feelings and behaviour.
The suggestion that people can become more empathic and compassionate through meditation practice has prompted psychologist Paul Ekman and Alan Wallace, a Buddhist teacher and president of the Santa Barbara Institute for Consciousness Studies, to float the idea of mental training "gymnasiums". Like physical exercise gyms, but for the mind, these would allow people to drop in and learn to improve their emotional balance, develop their capacity for compassion and even measure their stress levels. Others have suggested that meditation could become an alternative to medication. Although this seems like a good idea, Saron is dubious. He worries that thinking of meditation as a quick fix will smother some of the subtleties that are integral to successful practice. "When you are returning your mind to the object in hand, you have to do it with a sense of gentleness and authority, rather than develop a sense of failure when your mind wanders." But the great thing about meditation is that anyone can practise it anywhere. What's more, you don't have to be an expert or spend five hours a day at it to reap the benefits.
The novices in Zeidan's pain experiment reported improvements after meditating for just 20 minutes a day for three days. In a second experiment he found that similarly brief sessions can improve cognitive performance on tasks that demand continuous attention, such as remembering and reciting a series of digits (Consciousness and Cognition, vol 19, p 597). "It is possible to produce substantial changes in brain function through short-term practice of meditation," says Richard Davidson, director of the Waisman Laboratory. He says data from a new unpublished study by his lab shows "demonstrable changes in brain function" in novice meditators after just two weeks of training for 30 minutes a day. "Even small amounts of practice can make a discernible difference." That is good news for beginners like me. Still, it does seem that the more you meditate, the greater the impact on your brain. Research by Brefczynski-Lewis, for example, revealed changes in brain activity indicating that expert meditators require minimal cognitive effort to stay focused. But this particular effect was only evident in people who had spent around 44,000 hours meditating - that's the equivalent of working for 25 years at a full-time job. Most of us will probably never achieve that level of transcendence but it's certainly something to aim for.
Bibliography


1. Mind in the Balance by Alan Wallace (Columbia University Press, 2009)
2. The Art of Meditation by Matthieu Ricard (Atlantic Books, 2010)

How to meditate
There are numerous meditation styles, but the two most commonly studied by researchers are focused attention meditation, in which the aim is to stay focused on a chosen thing such as an icon, a mantra or the breath, and mindfulness or open monitoring meditation, where practitioners try to become aware of everything that comes into their moment-by-moment experience without reacting to it. For focused attention meditation, start by sitting on a cushion or chair with your back straight and your hands in your lap and eyes closed. Then concentrate your mind on your chosen object - say your breathing, or more particularly the sensation of your breath leaving your mouth or nostrils. Try to keep it there. Probably your mind will quickly wander away, to an itch on your leg, perhaps, or to thoughts of what you will be doing later. Keep bringing it back to the breath. In time this will train the mind in three essential skills: to watch out for distractions, to "let go" of them once the mind has wandered, and to re-engage with the object of meditation. With practice, you should find it becomes increasingly easy to stay focused.
In mindfulness meditation the aim is to monitor all the various experiences of your mind - thoughts, emotions, bodily sensations - and simply observe them, rather than trying to focus on any one of them. Instead of grasping at whatever comes to mind, which is what most of us do most of the time, the idea is to maintain a detached awareness. Those who develop this skill find it easier to manage emotions in day-to-day life. The more you practise, the deeper the changes will be. As Buddhist teacher Alan Wallace puts it: "You have now set out on one of the greatest expeditions as you explore the hidden recesses of your mind."
Michael Bond is a consultant for New Scientist
http://www.newscientist.com/article/mg20927940.200-mind-gym-putting-meditation-to-the-test.html?full=true&print=true


V. S. Ramachandran: Mind, metaphor and mirror neurons

• 10 January 2011 by Helen Thomson • Magazine issue 2794.

Music is an important hobby for Ramachandran in his rare moments of downtime (Image: Angela Wylie/Fairfax Photos) From phantom limbs and sick brains, through mirror neurons, synaesthesia, metaphor and abstract art, the ability of Vilayanur S. Ramachandran to generate new ideas about the human brain has made him a superstar. Just talking to him, Helen Thomson found out, puts your brain through a strenuous workout
"HE knows his leg belongs to him, he's not crazy. He just doesn't want it anymore," says Vilayanur Ramachandran. He calls this odd state of affairs "spooky". Downright terrifying seems more like it. Another of his patients believes he is, rather inconveniently, dead. "He says he can smell decaying flesh but doesn't bother committing suicide because what's the point? In his mind he's already dead."
Ramachandran is regaling me with these disturbing anecdotes at a recent Society for Neuroscience conference in San Diego, California. He is one of the most prolific neuroscientists of our time, and everyone wants a piece of him. I sneak a glance at the other reporters looking on as I whisk him out of the press room. He seems endearingly unaware of his popularity, shaking hands with "fans" at every turn. Then again, investigating strange neurological conditions and asking what they can tell us about the human mind has allowed him to develop special insights into the qualities of human uniqueness, something he is eager to share not only with his peers at the conference but with a wider audience through his books and lectures.
In person, Ramachandran sparkles. His hands shake with a slight, odd quiver, but his smile suggests he is on the verge of either telling you something very wise - or very silly. But today there is no silliness: we talk about the complex topic of metaphor. He describes his theory that several areas of the brain developed in tandem, which ultimately resulted in the uniquely human ability to link dissimilar concepts. "People don't like
saying we're special because it smacks of creationism, but there are areas of the brain that, when developing, simultaneously and fortuitously combined to create something wonderful - this huge explosion of abilities that characterise the human brain."
Ramachandran is particularly interested in metaphor because it ties in neatly with his previous work on synaesthesia - a kind of sensory hijack, where, for example, people see numbers as colours or taste words. "Metaphor is our ability to link seemingly unrelated ideas, just like synaesthesia links the senses," he says. After spending years working with people who have synaesthesia, he believes "pruning genes" are responsible. In the fetal brain, all parts of the brain are interconnected, but as we age, the connections are pruned. If the pruning genes get it wrong, the connections are off. "If you think of ideas as being enshrined in neural populations in the brain, if you get greater cross-connectivity you're going to create a propensity towards metaphorical thinking," he says.
I don't have synaesthesia, and neither does Ramachandran, but he points out to me the strangeness of asking why, say, the cheddar cheese in your sandwich is "sharp". It's true, cheese isn't sharp, it's soft, so why do I use a tactile adjective to describe a gustatory sensation? "It means our brains are already replete with synaesthetic metaphors," he says. "Your loud shirt isn't making any noise; it's because the same genes that can predispose you to synaesthesia also predispose you to make links between seemingly unrelated ideas, which is the basis of creativity." This ability to link ideas allowed us to swing through the trees, he explains. "Making the connection between the angle of a branch and the angle of your hand is a form of abstraction. Once this fundamental ability to abstract was in place we could start to do more complicated abstractions," he says. "If a cat sees a rat, it's just a long thing that's good to eat. For you, a rat evokes associations with the plague. And for me, that quality is unique to humans."
It's not just words that get him going these days, but drawings, too. Ramachandran came to art after hearing a lecture on Auguste Rodin - and it's now both a hobby and a fundamental part of his work. As if to prove it, he grabs a notepad and draws a picture. It bears a pretty good likeness to a seagull. "This is brilliant," he grins, not boasting about his artistic abilities but preparing me for his next tale. "A newborn gull chick begs for food from its mother by pecking at a red spot on its mother's beak," he explains. "The mother then regurgitates food into the chick's mouth. You can just hold a beak without the mother and the chick will still peck at it. But here's the best part," he says, gearing up for the denouement like a motorised bunny. "Put three red stripes onto a stick and the chick goes crazy, completely berserk, and totally ignores the real beak." At this point, he sounds passionate: "Maybe the neurons responsible have a rule that the more red the better, so by putting three stripes on it you're stimulating these neurons more optimally and this sends a jolt to the limbic system which says 'wow what a sexy beak!' " But how does this relate to art, I ask. "It's what a great abstract artist has discovered by intuition, genius or accident - a Picasso is merely a stick with three stripes for the human brain!"
Ramachandran is just about to give me the intriguing neurological explanation for why people prefer to glimpse a nude behind a shower curtain rather than full frontal in a magazine when his phone beeps and our interview is briefly put on hold. As I wait, I remember reading that Ramachandran could never remember his children's or his wife's birthdays. "It doesn't mean I don't love you," he had told them. After listening to his voicemail, he says, sheepishly: "I forgot to turn up to another interview." It's funny and intriguing that the man with such a privileged insight into the brain is himself so often absent-minded. "Where were we?" he asks. But it doesn't seem to matter. Despite his forgetfulness, Ramachandran speeds from one fascinating topic to the next, with an excitement that's somewhere between a young researcher on the verge of a great discovery and a small child on a sugar rush.
Eventually he returns, as I knew he would at some point, to his pet subject: mirror neurons. These neurons are thought to act in the same way when you perform an action or watch someone else performing the same action, giving a rich internal reflection of their actions. For Ramachandran, this system is likely to have played a key role in making us unique. "It probably developed from being able to understand another's actions and then turned in on itself. Suddenly you're taking an allocentric rather than egocentric view of yourself. That's the dawn of self-awareness."


And it's impossible to curb his enthusiasm. "Mirror neurons make us all alike, they're acting in the same way whether you or I make the action. If you remove my skin, I dissolve into you," he explains. It's a fascinating concept and the mainstay of many eastern religious traditions. But he's a little guarded when I ask whether he is religious, claiming no obvious allegiance to any particular religion, nor to atheism, but accepting that what he has just said gives some credence to talk of us all being interconnected - as long as you don't take it too literally. "There's no real difference between you and other people," he says. "Through our mirror neurons we're all hooked up together."
That's why interviewing Ramachandran is such a treat: you turn up expecting to find out more about him, and in some spooky mixing of minds end up finding out a whole lot more about yourself.
Profile
Vilayanur S. Ramachandran directs the Center for Brain and Cognition, University of California, San Diego, and is adjunct professor at the Salk Institute, La Jolla. He trained as a doctor, then neuroscientist. Among his books are Phantoms in the Brain, and, published this month, The Tell-Tale Brain (William Heinemann)
http://www.newscientist.com/article/mg20927945.300-v-s-ramachandran-mind-metaphor-and-mirror-neurons.html


Uncertainty principle: How evolution hedges its bets

• 10 January 2011 by Henry Nicholls • Magazine issue 2794.

Quantum evolution (Image: Shonagh Rae) Variety is the key to survival in a changeable world – and evolution may have come up with an extraordinary way of generating more variety
A man walks into a bar. "I have a new way of looking at evolution," he announces. "Do you have something I could write it down on?" The barman produces a piece of paper and a pen without so much as a smile. But then, the man wasn't joking. The man in question is Andrew Feinberg, a leading geneticist at Johns Hopkins University in Baltimore; the bar is The Hung, Drawn and Quartered, a pub within the shadow of the Tower of London; and what's written on the piece of paper could fundamentally alter the way we think about epigenetics, evolution and common diseases.
Before setting foot in the pub, Feinberg had taken a turn on the London Eye, climbed Big Ben and wandered into Westminster Abbey. There, as you might expect, he sought out the resting place of Isaac Newton and Charles Darwin. He was struck by the contrast between the lavish marble sculpture of a youthful Newton, reclining regally beneath a gold-leafed globe, and Darwin's minimalist floor stone. As he looked round, Feinberg's eyes came to rest on a nearby plaque commemorating physicist Paul Dirac. This set him thinking about quantum theory and evolution, which led him to the idea that epigenetic changes - heritable changes that don't involve modifications to DNA sequences - might inject a Heisenberg-like uncertainty into the expression of genes, which would boost the chances of species surviving. That, more or less, is what he wrote on the piece of paper.
Put simply, Feinberg's idea is that life has a kind of built-in randomness generator which allows it to hedge its bets. For example, a characteristic such as piling on the fat could be very successful when famine is frequent, but a drawback in times of plenty. If the good times last for many generations, however, natural selection could eliminate the gene variant for piling on fat from a population. Then, when famine does eventually come, the population could be wiped out. But if there is some uncertainty about the effect of genes, some individuals might still pile on the fat, even though they have the same genes as everyone else. Such individuals might die young in good times, but if famine strikes they might be the only ones to survive. In an uncertain world, uncertainty could be crucial for the long-term survival of populations.
The implications of this idea are profound. We already know there is a genetic lottery - every fertilised human egg contains hundreds of new mutations. Most of these have no effect whatsoever, but a few can be beneficial or harmful. If Feinberg is right, there is also an epigenetic lottery: some people are more (or less) likely to
develop cancer, drop dead of a heart attack or suffer from mental health problems than others with exactly the same DNA. To grasp the significance of Feinberg's idea, we have to briefly rewind to the early 19th century, when the French zoologist Jean-Baptiste Lamarck articulated the idea - already commonly held - that "acquired characteristics" can be passed from parent to offspring. If a giraffe kept trying to stretch to reach leaves, he believed, its neck would get longer, and its offspring would inherit longer necks.
Darwin the Lamarckist
Contrary to what many texts claim, Darwin believed something similar, that the conditions an organism experiences can lead to modifications that are inherited. According to Darwin's hypothesis of pangenesis, these acquired changes could be harmful as well as beneficial - such as sons getting gout because their fathers drank too much. Natural selection would favour the beneficial and weed out the harmful. In fact, Darwin believed acquired changes provided the variation essential for evolution by natural selection.
Pangenesis was never accepted, not even during Darwin's lifetime. In the 20th century it became clear that DNA is the basis of inheritance, and that mutations that alter DNA sequences are the source of the variation on which natural selection acts. Environmental factors such as radiation can cause mutations that are passed down to offspring, but their effect is random. Biologists rejected the idea that adaptations acquired during the life of an organism can be passed down. Even during the last century, though, examples kept cropping up of traits passed down in a way that did not fit with the idea that inheritance was all about DNA. When pregnant rats are injected with the fungicide vinclozolin, for instance, the fertility of their male descendants is lowered for at least two generations, even though the fungicide does not alter the males' DNA.
No one now doubts that environmental factors can produce changes in the offspring of animals even when there is no change in DNA. Many different epigenetic mechanisms have been discovered, from the addition of temporary "tags" to DNA or the proteins around which DNA is wrapped, to the presence of certain molecules in sperm or eggs. What provokes fierce argument is the role that epigenetic changes play in evolution. A few biologists, most prominently Eva Jablonka of Tel Aviv University in Israel, think that inherited epigenetic changes triggered by the environment are adaptations. They describe these changes as "neo-Lamarckian", and some even claim that such processes necessitate a major rethink of evolutionary theory.
While such views have received a lot of attention, most biologists are far from convinced. They say the trouble with the idea that adaptive changes in parents can be passed down to offspring via epigenetic mechanisms is that, like genetic mutations, most inherited epigenetic changes acquired as a result of environmental factors have random and often harmful effects. At most, the inheritance of acquired changes could be seen as a source of variation that is then acted on by natural selection - a view much closer to Darwin's idea of pangenesis than Lamarck's claim that the intent of an animal could shape the bodies of its offspring. But even this idea is problematic, because it is very rare for acquired changes to last longer than a generation (Annual Review of Genomics and Human Genetics, vol 9, p 233).
While epigenetic changes can be passed down from cell to cell during the lifetime of an organism, they do not normally get passed down to the next generation. "The process of producing germ cells usually wipes out epigenetic marks," says Feinberg. "You get a clean slate epigenetically." And if epigenetic marks do not usually last long, it's hard to see how they can have a significant role in evolution - unless it is not their stability but their instability that counts. Rather than being another way to code for specific characteristics, as biologists like Jablonka believe, Feinberg's "new way of looking at evolution" sees epigenetic marks as introducing a degree of randomness into patterns of gene expression. In fluctuating environments, he suggests, lineages able to generate offspring with variable patterns of gene expression are most likely to last the evolutionary course. Is this "uncertainty hypothesis" right? There is evidence that epigenetic changes, as opposed to genetic mutations or environmental factors, are responsible for a lot of variation in the characteristics of organisms. The marbled crayfish, for instance, shows a surprising variation in coloration, growth, lifespan, behaviour and other traits even when genetically identical animals are reared in identical conditions. And a study last year found substantial epigenetic differences between genetically identical human twins. On the basis of their findings, the researchers speculated that random epigenetic variations are actually "much more important"
than environmental factors when it comes to explaining the differences between twins (Nature Genetics, vol 41, p 240). More evidence comes from the work of Feinberg and his colleague Rafael Irizarry, a biostatistician at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. One of the main epigenetic mechanisms is the addition of methyl groups (with the chemical formula CH3) to DNA, and Feinberg and Irizarry have been studying patterns of DNA methylation in mice. "The mice were from the same parents, from the same litter, eating the same food and water and living in the same cage," Feinberg says.
Stunning finding
Despite this, he and Irizarry were able to identify hundreds of sites across the genome where the methylation patterns within a given tissue differed hugely from one individual to the next. Interestingly, these variable regions appear to be present in humans too (Proceedings of the National Academy of Sciences, vol 107, p 1757). "Methylation can vary across individuals, across cell types, across cells within the same cell type and across time within the same cell," says Irizarry. It fell to Irizarry to produce a list of genes associated with each region that could, in theory at least, be affected by the variation in methylation. What he found blew him away. The genes that show a high degree of epigenetic plasticity are very much those that regulate basic development and body plan formation. "It's a counter-intuitive and stunning thing because you would not expect there to be that kind of variation in these very important patterning genes," says Feinberg.
The results back the idea that epigenetic changes to DNA might blur the relationship between genotype (an organism's genetic make-up) and phenotype (its form and behaviour). "It could help explain why there is so much variation in gene expression during development," says Günter Wagner, an evolutionary biologist at Yale University. But that does not necessarily mean epigenetic changes are adaptive, he says. "There has not been enough work on specifying the conditions under which this kind of mechanism might evolve."
When he began exploring the idea with Feinberg, Irizarry constructed a computer simulation to help him get his head round it. At first, he modelled what would happen in a fixed environment where being tall is an advantage. "The taller people survive more often, have more children and eventually everyone's tall," he says. Then, he modelled what would happen in a changeable environment where, at different times, it is advantageous to be tall or short. "If you are a tall person that only has tall kids, then your family is going to go extinct." In the long run, the only winners in this kind of scenario are those that produce offspring of variable height.
This result is not controversial. "We know from theory that goes some way back that mechanisms that induce 'random' phenotypic variation may be selected over those that produce a single phenotype," says Tobias Uller, a developmental biologist at the University of Oxford. But showing that something is theoretically plausible is a long way from showing that the variability in methylation evolved because it boosts survival. Jerry Coyne, an evolutionary geneticist at the University of Chicago, is blunter. "There is not a shred of evidence that variation in methylation is adaptive, either within or between species," he says. "I know epigenetics is an interesting phenomenon, but it has been extended willy-nilly to evolution.
We're nowhere near getting to grips with what epigenetics is all about. This might be a part of it, but if it is it's going to be a small part." To Susan Lindquist of the Massachusetts Institute of Technology, however, it is an exciting idea that makes perfect sense. "It's not just that epigenetics influences traits, but that epigenetics creates greater variance in the traits and that creates greater phenotypic diversity," she says. And greater phenotypic diversity means a population has a better chance of surviving whatever life throws at it.
Lindquist studies prions, proteins that can not only flip between two states but pass on their state to other prions. While they are best known for causing diseases such as Creutzfeldt-Jakob disease, Lindquist thinks they provide another epigenetic mechanism for evolutionary "bet-hedging". Take Sup35, a protein involved in the protein-making machinery of cells. In yeast, Sup35 has a tendency to flip into a state in which it clumps together, spontaneously or in response to environmental stress, which in turn can alter the proteins that cells make, Lindquist says. Some of these changes will be harmful, but she and her colleagues have shown that they can allow yeast cells to survive conditions that would normally mean death.
While Jablonka remains convinced that epigenetic marks play an important role in evolution through "neo-Lamarckian" inheritance, she welcomes Feinberg and Irizarry's work. "It would be worth homing in on
species that live in highly changeable environments," she suggests. "You would expect more methylation, more variability, and inheritance of variability from one generation to the next."
As surprising as Feinberg's idea is, it does not challenge the mainstream view of evolution. "It's straight population genetics," says Coyne. Favourable mutations will still win out, even if there is a bit of fuzziness in their expression. And if Feinberg is right, what evolution has selected for is not epigenetic traits, but a genetically encoded mechanism for producing epigenetic variation. This might produce variation completely randomly or in response to environmental factors, or both.
Feinberg predicts that if the epigenetic variation produced by this mechanism is involved in disease, it will most likely be found in conditions like obesity and diabetes, where lineages with a mechanism for surviving environmental fluctuation would win out in the evolutionary long run. He, Irizarry and other colleagues recently studied DNA methylation in white blood cells collected in 1991 and 2002 from the same individuals in Iceland. From this, they were able to identify more than 200 variably methylated regions. To see if these variable regions have something to do with human disease, they looked for a link between methylation density and body mass index. There was a correlation at four of these sites, each of them located either within or near genes known to regulate body mass or diabetes.
Feinberg sees this in a positive light. If random epigenetic variation does play a significant role in determining people's risk of getting common diseases, he says, untangling the causes may be simpler than we thought. The key is to combine genetic analyses with epigenetic measurements. Feinberg is the first to admit that his idea could be wrong. But he's excited enough to put it to the test. Perhaps, he suggests, it could be the missing link in understanding the relationship between evolution, development and common disease. "It could turn out to be really quite important."
Henry Nicholls is a science writer based in London. His latest book is The Way of the Panda (Profile, 2010)
http://www.newscientist.com/article/mg20927940.100-uncertainty-principle-how-evolution-hedges-its-bets.html?full=true&print=true
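Irizarry's tall-versus-short thought experiment is straightforward to reproduce in miniature. The toy simulation below compares a lineage that always produces 'tall' offspring with a bet-hedging lineage whose offspring vary at random, in an environment that flips between favouring tall and short; the survival probabilities, population cap and switching schedule are illustrative assumptions, not Irizarry's actual model.

# Toy bet-hedging simulation in the spirit of Irizarry's thought experiment.
# Parameters and the environment schedule are illustrative, not from the study.
import random

random.seed(1)

GENERATIONS = 200
FLIP_EVERY = 25          # environment switches between favouring tall and short

def survival_prob(phenotype, env):
    """Phenotypes matching the current environment survive more often."""
    return 0.8 if phenotype == env else 0.3

def run(strategy):
    """Track population size for a lineage with a fixed or variable phenotype."""
    population = 100
    for gen in range(GENERATIONS):
        env = "tall" if (gen // FLIP_EVERY) % 2 == 0 else "short"
        survivors = 0
        for _ in range(population):
            phenotype = strategy() if callable(strategy) else strategy
            if random.random() < survival_prob(phenotype, env):
                survivors += 1
        # Each survivor leaves two offspring; cap the population for tractability.
        population = min(2 * survivors, 10_000)
        if population == 0:
            return 0, gen            # extinct at this generation
    return population, GENERATIONS

fixed = run("tall")                                    # always-tall lineage
hedged = run(lambda: random.choice(["tall", "short"]))  # bet-hedging lineage

print("fixed 'tall' lineage (final size, generation reached):", fixed)
print("bet-hedging lineage  (final size, generation reached):", hedged)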


Pi's nemesis: Mathematics is better with tau

• 12 January 2011 by Jacob Aron • Magazine issue 2794

Time to kill off pi (Image: Michael Hartl) It's time to kill off pi, says physicist Michael Hartl, who believes that an alternative mathematical constant will do its job better
What's wrong with pi?
Of course, pi is not "wrong" in the sense of being incorrect, it's just a confusing and unnatural choice for the circle constant. Pi is the circumference of a circle divided by its diameter, and this definition leads to annoying factors of 2. Try explaining to a 12-year-old why the angle for an eighth of a pizza – one slice – is pi/4 instead of pi/8.
So what should replace pi?
In The Tau Manifesto, which I published last year, I suggest using the Greek letter tau which is equal to 2 pi, or 6.28318... instead. Tau is the ratio of a circle's circumference to its radius, and this number occurs with astonishing frequency throughout mathematics.
If this idea is so fundamental, why haven't we made the change before?
The paper that got the ball rolling on this, "Pi is Wrong!" by mathematician Robert Palais, traces the history of pi. It's only in the last 300 years that this convention was adopted, and I think it's just a mistake. It's one of those times in history when we chose the wrong convention.
And you are here to correct that mistake?
Yes. The aim of The Tau Manifesto was first and foremost to be fun, but it's true that I would like pi to be replaced by something more sensible.
Have you had much success?
It's definitely getting some traction among geeks. Eventually I think there is going to be a groundswell of support for it.


You are up against a formidable enemy, because pi is a popular constant...
It is: there are books written about it, and people care enough to memorise tens of thousands of its digits. Google even changed its logo to honour pi on 14 March 2010 - Pi Day.
Doesn't using tau ruin equations like the formula for circular area and Euler's identity?
Quite the opposite. As I show in The Tau Manifesto, using tau reveals underlying mathematical relationships that are obscured by using pi. In particular, the famous formula for circular area is the manifesto's coup de grâce.
Could tau exist alongside pi?
Yes. There is no need to rewrite the textbooks.
Has anyone successfully changed notation like this in the past?
In physics, there is an important quantity known as Planck's constant, h. As quantum mechanics developed, it became clear that h-bar, which is equal to h/2 pi, was more important - that's the factor of 2 that pops up everywhere! h-bar is now the standard notation, though both are used.
People celebrate pi by eating pies on Pi Day. How can people celebrate tau?
I'm planning a Tau Day party for 28 June, and if you think that the circular baked goods on Pi Day are tasty, just wait – Tau Day has twice as much pie!
Profile
Michael Hartl has a PhD in physics from the California Institute of Technology, and is the author of the web development book Ruby on Rails Tutorial. Having memorised 50 digits of pi, he is now memorising 52 digits of tau
http://www.newscientist.com/article/mg20927944.300-pis-nemesis-mathematics-is-better-with-tau.html
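For readers who want the substitution spelled out, the identities below follow directly from the definition tau = 2 pi; they are elementary algebra rather than anything specific to Hartl's manifesto.

% With tau defined as the circumference-to-radius ratio:
\[
  \tau \;=\; \frac{C}{r} \;=\; 2\pi \;\approx\; 6.28318\ldots
\]
\[
  C = 2\pi r = \tau r, \qquad
  A = \pi r^2 = \tfrac{1}{2}\tau r^2, \qquad
  e^{i\pi} = -1, \qquad e^{i\tau} = 1.
\]
% A quarter turn is tau/4 radians and an eighth of a turn (one pizza slice) is tau/8,
% so the angle matches the fraction of the circle directly.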


Sozzled superconductors sizzle with speed

• 03 January 2011 by Valerie Jamieson • Magazine issue 2792.

(Image: Evgeny Karandaev/Shutterstock/Getty)

Can't get your new material to lose all electrical resistance? Try mulling it in red wine WONDERING what to do with any booze left over from your Christmas party? Most people would be happy to pour the dregs down the sink and keep the untouched stuff for later. But Yoshihiko Takano has another idea: donate it to the search for superconductivity. Takano, a physicist at Japan's National Institute for Materials Science in Tsukuba, discovered a few months ago that alcoholic drinks can transform fairly ordinary materials into amazing ones. Unlikely as it sounds, booze could help to unlock one of the biggest mysteries in physics: superconductivity. Superconductors are revered because they conduct electricity with zero resistance. That makes them fascinating from a theoretical viewpoint, and also points to brilliant applications. If you could make overhead power lines from superconducting cables they would lose barely any of the electrical energy they carry, saving money and cutting carbon dioxide emissions. Superconductors also repel magnetic fields, which means they can levitate anything containing materials with the merest hint of magnetism - including trains. With such remarkable properties, you might expect to find superconductors in use everywhere. The reason you don't is that they need to be extremely cold to work properly. Most only superconduct at temperatures close to absolute zero. This has been a source of frustration ever since superconductors were discovered, in 1911, when Dutch physicist Heike Kamerlingh Onnes found that he could make mercury lose all electrical

resistance by chilling it to 4.2 kelvin (-269 °C) in liquid helium. Many more metallic superconductors have since been discovered, but none works above about 25 K, which is far too cold for anything but the most specialised applications. A big breakthrough came in 1986, when Alex Müller and Georg Bednorz at the IBM Zurich Research Laboratory in Switzerland discovered a ceramic that superconducted at around 35 K; within a year, related materials had been found that superconduct at the positively balmy temperature of 92 K. Their work triggered a frenzy of research into so-called "high-temperature" superconductors. "Everyone went mad," says Ted Forgan at the University of Birmingham in the UK. "There were lots of reports of 'unidentified superconducting objects' as people tried to push to higher temperatures." Unlike the original superconductors, these new materials are ceramics containing layers of copper and oxygen atoms. Crucially, they work at temperatures that can be reached using liquid nitrogen, which makes them much more practical. "Liquid nitrogen costs half the price of milk, and there is an infinite amount of it", Forgan explains. Today the world record for the warmest superconductor is held by a blend of mercury, barium and calcium plus the standard copper and oxygen layers. It superconducts at 135 K. Similar materials are being tested in power cables and for levitating trains. However, 135 K is still well below room temperature. To make further progress, researchers would love to know what makes high-temperature superconductors so special. Unfortunately, nobody has much of a clue. Although we understand metallic superconductors, there is no equivalent theory for high-temperature ones. "We've spent a quarter of a century and still don't understand them," says Forgan. It probably has something to do with the layers of copper and oxygen, which are common to all the high-temperature superconductors. That's where Takano's boozy discovery might help. In 2008, another group of researchers at the Tokyo Institute of Technology stumbled upon an entirely new type of superconductor. Made from iron, arsenic, oxygen and lanthanum, it sent the superconducting world wild (New Scientist, 16 August 2008, p 31). Tests soon showed that these iron-based superconductors had much in common with high-temperature ones, especially in structure, with layers of iron and arsenic that are similar to those of copper and oxygen. Though they don't work at especially high temperatures, the hope is that they will lead to new theoretical insights into high-temperature superconductivity. The craze for iron soon reached Takano's group, which began playing around with a simple recipe based on iron and tellurium. In one experiment they tried replacing some of the tellurium with sulphur. The resulting material was hopeless, utterly failing to superconduct. And that might have been the end of the story if one of Takano's students hadn't been dumped by his girlfriend. As he licked his wounds, the student began neglecting his duties. So when Takano asked him for fresh samples of iron tellurium sulphur to test, he had none to offer. Instead he brought along an old sample that had been left lying around in the open air for weeks. To everyone's amazement, it showed signs of life (Physical Review B, vol 81, p 214510). Why would a useless compound suddenly start superconducting? Suspecting that exposure to the air had something to do with it, Takano and his colleagues made a fresh batch, then exposed samples to pure nitrogen or pure oxygen. Others they kept in a vacuum, or submerged in water.
Only the samples soaked in water turned into superconductors. "We found that the coexistence of water and oxygen is important," says Takano. That set him wondering what else might have the same effect. Inspiration soon arrived in the form of Yoichi Kamihara, one of the discoverers of iron-based superconductors. He visited the institute in March 2010 to give a lecture. So Takano did what anyone would do under the circumstances: he organised a booze-up.

Soaked in alcohol

During the party, Takano had a brainwave. He instructed one of his students to spirit away some of the drinks - not for drinking, but for experiments. "I thought of it because I like alcohol very much," Takano says. They pilfered wine, beer, whisky, sake and shochu, a Japanese liquor. Later, in the lab, they heated the alcoholic drinks to 70 °C to speed up whatever reactions might be taking place, and soaked their samples in them for 24 hours. Then they tested for superconductivity. The results were striking. All the drinks worked, with red wine streets ahead of the rest (see graph).


The results aren't simply down to the amount of alcohol in the drinks. Whisky and shochu are much stronger than wine but did not work as well. Takano's team also tried using increasingly alcoholic mixtures of water and ethanol, but none worked as well as the beverages. Takano cannot yet explain why being mulled in red wine should turn iron tellurium sulphur into a superconductor. He suspects it is something to do with antioxidant molecules called polyphenols, which are abundant in red wine. Beyond that he is stumped. His team is now gearing up to study the crystal structure of iron tellurium sulphur before and after it is immersed in red wine. They hope to discover the mechanism that induces superconductivity and perhaps find some clues that could help us finally get to grips with high-temperature superconductors. Like a good claret, Takano's work is taking time to mature. "People thought it sounded a bit of a joke at first," says Forgan. "But I think it's a real effect." Takano himself has received lots of supportive messages on Twitter. "Please use good red wine" has been a popular reaction. It's anyone's guess what will happen to the leftovers... Valerie Jamieson will be mulling herself in red wine this Christmas in an attempt to become a human superconductor http://www.newscientist.com/article/mg20827926.400-sozzled-superconductors-sizzle-with-speed.html


Housing 9 billion won't take techno-magic

• 00:01 12 January 2011 by Fred Pearce • Magazine issue 2795.

Editorial: "How to engineer a better future"

A vision of the future built not around technical wizardry but democracy, with the engineering on tap but not on top? Not what you would expect from a group of engineers, but the UK's Institution of Mechanical Engineers has delivered just that. A report out this week shows how humanity already has at its disposal all the tools to make room for as many as 9 billion people. There are "no insurmountable technical issues in meeting the needs of 9 billion people... sustainable engineering solutions largely exist", the engineers write in Population: One Planet, Too Many People? Switching the world to low-carbon energy, for instance, does not require more research breakthroughs. We need instead to fix "market failures" that prevent widespread adoption of extant technologies, like concentrated solar energy and nuclear power.

Dams damned

The report even sidelines some traditional engineering solutions. Forget large dams, it says – increased water storage should come from recharging aquifers with treated waste water and flood waters. As for where to put the projected 9 billion, it notes that most of the extra population will be in urban areas, often slums. Better homes are a priority, but the report rejects outright the "engineering solution of decant-demolish-rebuild-return". Slums need help to improve, not a demolition ball. The report says the world should adopt a series of engineering goals to sit alongside the United Nations' existing millennium development goals. Another report out today also sets a high note, declaring that it is possible to feed the future 9 billion without trashing the planet. http://www.newscientist.com/article/dn19946-housing-9-billion-wont-take-technomagic.html?full=true&print=true


Musical Chills: Why They Give Us Thrills

Scientists have found that the pleasurable experience of listening to music releases dopamine, a neurotransmitter in the brain important for more tangible pleasures associated with rewards such as food, drugs and sex. (Credit: iStockphoto) ScienceDaily (Jan. 12, 2011) — Scientists have found that the pleasurable experience of listening to music releases dopamine, a neurotransmitter in the brain important for more tangible pleasures associated with rewards such as food, drugs and sex. The new study from The Montreal Neurological Institute and Hospital -- The Neuro at McGill University also reveals that even the anticipation of pleasurable music induces dopamine release [as is the case with food, drug, and sex cues]. Published in the journal Nature Neuroscience, the results suggest why music, which has no obvious survival value, is so significant across human society. The team at The Neuro measured dopamine release in response to music that elicited "chills," changes in skin conductance, heart rate, breathing, and temperature that were correlated with pleasurability ratings of the music. 'Chills' or 'musical frisson' is a well-established marker of peak emotional responses to music. A novel combination of PET and fMRI brain imaging techniques revealed that dopamine release is greater for pleasurable versus neutral music, and that levels of release are correlated with the extent of emotional arousal and pleasurability ratings. Dopamine is known to play a pivotal role in establishing and maintaining behavior that is biologically necessary. "These findings provide neurochemical evidence that intense emotional responses to music involve ancient reward circuitry in the brain," says Dr. Robert Zatorre, neuroscientist at The Neuro. "To our knowledge, this is the first demonstration that an abstract reward such as music can lead to dopamine release. Abstract rewards are largely cognitive in nature, and this study paves the way for future work to examine non-tangible rewards that humans consider rewarding for complex reasons." "Music is unique in the sense that we can measure all reward phases in real-time, as it progresses from baseline neutral to anticipation to peak pleasure all during scanning," says lead investigator Valorie Salimpoor, a graduate student in the Zatorre lab at The Neuro and McGill psychology program. "It is generally a great challenge to examine dopamine activity during both the anticipation and the consumption phase of a reward. Both phases are captured together online by the PET scanner, which, combined with the
temporal specificity of fMRI provides us with a unique assessment of the distinct contributions of each brain region at different time points." This innovative study, using a novel combination of imaging techniques, reveals that the anticipation and experience of listening to pleasurable music induces release of dopamine, a neurotransmitter vital for reinforcing behavior that is necessary for survival. The study also showed that two different brain circuits are involved in anticipation and experience, respectively: one linking to cognitive and motor systems, and hence prediction, the other to the limbic system, and hence the emotional part of the brain. These two phases also map onto related concepts in music, such as tension and resolution. This study was conducted at The Neuro and at the Centre for Interdisciplinary Research in Music, Media, and Technology (CIRMMT). The research was supported by the Canadian Institutes of Health Research, the Natural Science and Engineering Research Council, and the CIRMMT.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by McGill University.

Journal References:

1. Valorie N Salimpoor, Mitchel Benovoy, Kevin Larcher, Alain Dagher, Robert J Zatorre. Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience, 2011; DOI: 10.1038/nn.2726 2. Valorie N. Salimpoor, Mitchel Benovoy, Gregory Longo, Jeremy R. Cooperstock, Robert J. Zatorre. The Rewarding Aspects of Music Listening Are Related to Degree of Emotional Arousal. PLoS ONE, 2009; 4 (10): e7487 DOI: 10.1371/journal.pone.0007487

http://www.sciencedaily.com/releases/2011/01/110112111117.htm


Most Distant Galaxy Cluster Identified

Astronomers have discovered a massive cluster of young galaxies forming in the distant universe. (Credit: Subaru/NASA/JPL-Caltech) ScienceDaily (Jan. 12, 2011) — Astronomers have uncovered a burgeoning galactic metropolis, the most distant known in the early universe. This ancient collection of galaxies presumably grew into a modern galaxy cluster similar to the massive ones seen today. The developing cluster, named COSMOS-AzTEC3, was discovered and characterized by multi-wavelength telescopes, including NASA's Spitzer, Chandra and Hubble space telescopes, and the ground-based W.M. Keck Observatory and Japan's Subaru Telescope. "This exciting discovery showcases the exceptional science made possible through collaboration among NASA projects and our international partners," said Jon Morse, NASA's Astrophysics Division director at NASA Headquarters in Washington. Scientists refer to this growing lump of galaxies as a proto-cluster. COSMOS-AzTEC3 is the most distant massive proto-cluster known, and also one of the youngest, because it is being seen when the universe itself was young. The cluster is roughly 12.6 billion light-years away from Earth. Our universe is estimated to be 13.7 billion years old. Previously, more mature versions of these clusters had been spotted at 10 billion light-years away. The astronomers also found that this cluster is buzzing with extreme bursts of star formation and one enormous feeding black hole. "We think the starbursts and black holes are the seeds of the cluster," said Peter Capak of NASA's Spitzer Science Center at the California Institute of Technology in Pasadena. "These seeds will eventually grow into a giant, central galaxy that will dominate the cluster -- a trait found in modern-day galaxy clusters." Capak is first author of a paper appearing in the Jan. 13 issue of the journal Nature. Most galaxies in our universe are bound together into clusters that dot the cosmic landscape like urban sprawls, usually centered around one old, monstrous galaxy containing a massive black hole. Astronomers thought that primitive versions of these clusters, still forming and clumping together, should exist in the early universe. But locating one proved difficult -- until now. Capak and his colleagues first used the Chandra X-ray Observatory and the United Kingdom's James Clerk Maxwell Telescope on Mauna Kea, Hawaii, to search for the black holes and bursts of star formation needed to form the massive galaxies at the centers of modern galaxy cities. The astronomers then used the Hubble and Subaru telescopes to estimate the distances to these objects, and look for higher densities of galaxies around them. Finally, the Keck telescope was used to confirm that these galaxies were at the same distance and part of the same galactic sprawl. Once the scientists found this lumping of galaxies, they measured the combined mass with the help of Spitzer. At this distance, the optical light from stars is shifted, or stretched, to infrared wavelengths that can only be observed in outer space by Spitzer. The lump sum of the mass turned out to be a minimum of 400 billion suns -- enough to indicate that the astronomers had indeed uncovered a massive proto-cluster. The Spitzer observations also helped confirm that a massive galaxy at the center of the cluster was forming stars at an impressive rate. Chandra X-ray observations were used to find and characterize the whopping black hole with a mass of more than 30 million suns. Massive black holes are common in present-day galaxy clusters, but this is the first time a feeding black hole of this heft has been linked to a cluster that is so young. Finally, the Institut de Radioastronomie Millimétrique's interferometer telescope in France and 30-meter (about 100-foot) telescope in Spain, along with the National Radio Astronomy Observatory's Very Large Array telescope in New Mexico, measured the amount of gas, or fuel for future star formation, in the cluster. The results indicate the cluster will keep growing into a modern city of galaxies. "It really did take a village of telescopes to nail this cluster," said Capak. "Observations across the electromagnetic spectrum, from X-ray to millimeter wavelengths, were all critical in providing a comprehensive view of the cluster's many facets." COSMOS-AzTEC3, located in the constellation Sextans, is named after the region where it was found, called COSMOS after the Cosmic Evolution Survey. AzTEC is the name of the camera used on the James Clerk Maxwell Telescope; this camera is now on its way to the Large Millimeter Telescope located in Mexico's Puebla state. NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at Caltech in Pasadena. Caltech manages JPL for NASA. More information about Spitzer is at: http://spitzer.caltech.edu/ and http://www.nasa.gov/spitzer .
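The quoted "12.6 billion light-years" is a light-travel distance, which depends on the cosmological parameters assumed. As a rough cross-check against the redshift reported in the Nature paper (z ≈ 5.3), the lookback time can be computed in a standard flat Lambda-CDM model; the sketch below uses illustrative parameter values, not the ones adopted by the authors.

# Rough cross-check of the quoted distance: lookback time at z = 5.3 in a
# standard flat Lambda-CDM cosmology. H0 and Om0 are illustrative values.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # Hubble constant (km/s/Mpc), matter density
z = 5.3                                 # redshift of COSMOS-AzTEC3 (Capak et al. 2011)
print(cosmo.lookback_time(z))           # roughly 12.4 Gyr with these parameters
print(cosmo.age(0))                     # roughly 13.5 Gyr, this model's age of the universe

With these numbers the light now reaching us left the proto-cluster when the universe was only about a billion years old, which is the sense in which it is both very distant and very young.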

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.

Journal Reference:

1. Peter L. Capak, Dominik Riechers, Nick Z. Scoville, Chris Carilli, Pierre Cox, Roberto Neri, Brant Robertson, Mara Salvato, Eva Schinnerer, Lin Yan, Grant W. Wilson, Min Yun, Francesca Civano, Martin Elvis, Alexander Karim, Bahram Mobasher, Johannes G. Staguhn. A massive protocluster of galaxies at a redshift of z ≈ 5.3. Nature, 2011; DOI: 10.1038/nature09681


Water on Moon Originated from Comets

Color image of the Moon taken by the Galileo spacecraft. (Credit: NASA) ScienceDaily (Jan. 12, 2011) — Researchers at the University of Tennessee, Knoxville, continue to chip away at the mysterious existence of water on the moon -- this time by discovering the origin of lunar water. Larry Taylor, a distinguished professor in the Department of Earth and Planetary Sciences, was the one last year to discover trace amounts of water on the moon. This discovery debunked beliefs held since the return of the first Apollo rocks that the moon was bone-dry. Then, he discovered water was actually pretty abundant and ubiquitous -- enough that a human settlement on the moon is not out of the question. Now, Taylor and a team of researchers have determined the lunar water may have originated from comets smashing into the moon soon after it formed. His findings will be posted online, in the article "Extraterrestrial Hydrogen Isotope Composition of Water in Lunar Rocks" on the website of the scientific journal, Nature Geoscience. Taylor and his fellow researchers conducted their study by analyzing rocks brought back from the Apollo mission. Using secondary ion mass spectrometry, they measured the samples' "water signatures," which tell the possible origin of the water -- and made the surprising discovery that the water on the Earth and the water on the moon are different. "This discovery forces us to go back to square one on the whole formation of the Earth and moon," said Taylor. "Before our research, we thought the Earth and moon had the same volatiles after the Giant Impact, just at greatly different quantities. Our work brings to light another component in the formation that we had not anticipated -- comets." Scientists believe the moon formed by a giant impact of the nascent Earth with a Mars-sized object called Theia, which caused a great explosion throwing materials outward to aggregate and create the moon. Taylor's article theorizes that at this time, there was a great flux of comets, or "dirty icebergs," hitting both the Earth and moon systems. The Earth, already having lots of water and other volatiles, did not change much. However, the moon, being bone-dry, acquired much of its water supply from these comets. Taylor's research shows that water has been present throughout all of the moon's history -- some of it supplied externally by the solar wind and post-formation comets, and the rest internally during the moon's original formation. "The water we are looking at is internal," said Taylor. "It was put into the moon during its initial formation, where it existed like a melting pot in space, where cometary materials were added in at small yet significant amounts."
To be precise, the lunar water he has found does not consist of "water" -- the molecule H2O -- as we know it on Earth. Rather, it contains the ingredients for water -- hydrogen and oxygen -- which, when the rocks are heated, will be liberated to create water. The existence of hydrogen and oxygen -- water -- on the moon can literally serve as a launch pad for further space exploration. "This water could allow the moon to be a gas station in the sky," said Taylor. "Spaceships use up to 85 percent of their fuel getting away from Earth's gravity. This means the moon can act as a stepping stone to other planets. Missions can fuel up at the moon, with liquid hydrogen and liquid oxygen from the water, as they head into deeper space, to other places such as Mars." Taylor collaborated with James P. Greenwood at Wesleyan University in Middletown, Conn.; Shoichi Itoh, Naoya Sakamoto and Hisayoshi Yurimoto at Hokkaido University in Japan; and Paul Warren at the University of California, Los Angeles.
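The "water signatures" mentioned above are hydrogen isotope ratios (deuterium to hydrogen, D/H), conventionally reported as a deviation from Earth's ocean-water standard. The snippet below shows that convention only; the example inputs are illustrative, not measurements from the Greenwood et al. paper.

# Illustrative delta-D calculation: expressing a measured D/H ratio relative to
# Earth's ocean-water standard (VSMOW). The sample values are hypothetical.
VSMOW_D_H = 1.5576e-4            # D/H of Vienna Standard Mean Ocean Water

def delta_D(sample_d_h):
    """Deviation from VSMOW in parts per thousand (per mil)."""
    return (sample_d_h / VSMOW_D_H - 1.0) * 1000.0

print(delta_D(1.56e-4))   # about +2 per mil: essentially indistinguishable from Earth water
print(delta_D(3.1e-4))    # about +990 per mil: strongly deuterium-enriched, comet-like

A lunar sample whose delta-D sits well away from terrestrial values is the kind of evidence the team reads as pointing to an outside source such as comets.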

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Tennessee at Knoxville, via EurekAlert!, a service of AAAS.

Journal Reference:

1. James P. Greenwood, Shoichi Itoh, Naoya Sakamoto, Paul Warren, Lawrence Taylor, Hisayoshi Yurimoto. Hydrogen isotope ratios in lunar rocks indicate delivery of cometary water to the Moon. Nature Geoscience, 2011; DOI: 10.1038/ngeo1050

http://www.sciencedaily.com/releases/2011/01/110111133019.htm


Chemical Analysis Confirms Discovery of Oldest Wine-Making Equipment Ever Found

Archaeologists contemplate 6,100-year-old wine-making equipment. Visiting the excavations of the Areni-1 cave complex in Armenia, archaeologist Levon Petrosyan (left) contemplates the 6,100-year-old wine-making equipment discovered by an international project co-directed by Boris Gasparyan, Gregory Areshian and Ron Pinhasi. Petrosyan's left foot points toward the wine press, which is designed to drain into a vat that at the time of this photo was not yet excavated. While crumbled today, the edge of the wine press would have kept grape juice from spilling over the edges of the wine press, archaeologists believe. (Credit: Hans Barnard) ScienceDaily (Jan. 12, 2011) — Analysis by a UCLA-led team of scientists has confirmed the discovery of the oldest complete wine production facility ever found, including grape seeds, withered grape vines, remains of pressed grapes, a rudimentary wine press, a clay vat apparently used for fermentation, wine-soaked potsherds, and even a cup and drinking bowl. The facility, which dates back to roughly 4100 B.C. -- 1,000 years before the earliest comparable find -- was unearthed by a team of archaeologists from Armenia, the United States and Ireland in the same mysterious Armenian cave complex where an ancient leather shoe was found, a discovery that was announced last summer. "For the first time, we have a complete archaeological picture of wine production dating back 6,100 years," said Gregory Areshian, co-director of the excavation and assistant director of UCLA's Cotsen Institute of Archaeology. An analysis of the discovery, which received support from the National Geographic Society, is presented in an article published online Jan. 11 in the peer-reviewed Journal of Archaeological Science. "This is, so far, the oldest relatively complete wine production facility, with its press, fermentation vats and storage jars in situ," said Hans Barnard, the article's lead author and a UCLA Cotsen Institute archaeologist.

Cave outside Armenian village

The discovery in 2007 of what appeared to be ancient grape seeds inspired the team to begin excavating Areni-1, a cave complex located in a canyon where the Little Caucasus mountains approach the northern end of the Zagros mountain range, near Armenia's southern border with Iran. The cave is outside a tiny Armenian village still known for its wine-making activities. Under Areshian and Boris Gasparyan, co-director of the project, the dig continued through September, when the vat was excavated. Radiocarbon analysis by researchers at UC Irvine and Oxford University has dated the installation and associated artifacts to between 4100 B.C. and 4000 B.C., or the Late Chalcolithic Period, also known as the Copper Age in recognition of the technological advances that paved the way for metal to replace stone tools. Archaeologists found one shallow basin made of pressed clay measuring about 3 feet by 3-and-a-half feet. Surrounded by a thick rim that would have contained juices, and positioned so as to drain into the deep vat, the basin appears to have served as a wine press. Similarly structured wine-pressing devices were in use as recently as the 19th century throughout the Mediterranean and the Caucasus, Areshian said. No evidence was found of an apparatus to smash the grapes against the wine press, but the absence does not trouble the archaeologists. "People obviously were stomping the grapes with their feet, just the way it was done all over the Mediterranean and the way it was originally done in California," Areshian said. All around and on top of the wine press archaeologists found handfuls of grape seeds, remains of pressed grapes and grape must, and dozens of desiccated vines. After examining the seeds, paleobotanists from three separate institutions determined the species to be Vitis vinifera vinifera, the domesticated variety of grape still used to make wine.

Telltale evidence of grapes

The vat, at just over 2 feet in height, would have held between 14 and 15 gallons of liquid, Areshian estimates. A dark gray layer clung to three potsherds -- two of which rested on the press, while the third was still attached to the vat. Analysis of the residue by chemists at UCLA's Pasarow Mass Spectrometry Laboratory confirmed the presence of the plant pigment malvidin, which is known to appear in only one other fruit native to the area: pomegranates. "Because no remnants of pomegranates were found in the excavated area, we're confident that the vessels held something made with grape juice," Areshian said. The size of the vessel during an era that predated mechanical refrigeration by many millennia points to the likelihood that the liquid was wine, the researchers stress. "At that time, there was no way to preserve juice without fermenting it," Areshian said. "At this volume, any unfermented juice would sour immediately, so the contents almost certainly had to be wine." The team also unearthed one cylindrical cup made of some kind of animal horn and one complete drinking bowl of clay, as well as many bowl fragments. The closest comparable collection of remains was found in the late 1980s by German archaeologists in the tomb of the ancient Egyptian king Scorpion I, the researchers said. Dating to around 3150 B.C., that find consisted of grape seeds, grape skins, dried pulp and imported ceramic jars covered inside with a yellow residue chemically consistent with wine. After the Areni-1 discovery, the next earliest example of an actual wine press is two and a half millennia younger: Two plaster basins that appear to have been used to press grapes between 1650 B.C. and 1550 B.C. were excavated in what is now Israel's West Bank in 1963. Over the years, archaeologists have claimed to find evidence of wine dating as far back as 6000 B.C.-5500 B.C. And references to the art and craft of wringing an inebriant from grapes appear in all kinds of ancient settings. After Noah's Ark landed on Mount Ararat, for instance, the Bible says he planted a vineyard, harvested grapes, produced wine and got drunk. Ancient Egyptian murals depict details of wine-making. Whatever form it takes, early evidence of wine production provides a window into a key transition in human development, scientists say. "Deliberate fermentation of carbohydrates into alcohol has been suggested as a possible factor that prompted the domestication of wild plants and the development of ceramic technology," said Barnard, who teaches in the UCLA Department of Near Eastern Languages and Cultures.

Three lines of inquiry point to wine-making

In addition to its age and wealth of wine-making elements, the Areni-1 find is notable for its numerous levels of confirmation.
In a field where claims often rest on one or two sets of corroborating evidence, this find is supported by radiocarbon dating, paleobotanical analysis and a new approach to analyzing wine residue based on the presence of malvidin. Most prior claims of ancient wine have rested on the presence of tartaric acid -- which is present in grapes but also, at least at some level, in many other fruits and vegetables -- or on the presence of tree resins that were added to preserve the wine and improve its taste, as is done today with retsina, a wine flavored with pine resin. "Tartaric acid alone can't act as a reliable indicator for wine," Areshian said. "It is present in too many other fruits and vegetables, including hawthorn, which still is a popular fruit in the area, but also in a range of other fruits, including tamarind, star fruit and yellow plum." "Resins could indicate wine, but because they were used for a large number of other purposes, ranging from incense to glue, they also are unreliable indicators for wine," Barnard said. "Moreover, we have no idea how wide the preference for retsina-like wine spread."


The beauty of malvidin, the UCLA team emphasizes, is the limited number of options for its source. The deeply red molecule gives grapes and wine their red color and makes their stains so difficult to remove. "In a context that includes elements used for wine production, malvidin is highly reliable evidence of wine," Areshian said. Areshian and Ron Pinhasi, an archaeologist at Ireland's University College Cork and a co-director of the excavation project, captured the world's imagination in June, when they announced the discovery of a single 5,500-year-old leather moccasin at the Areni-1 site. It is believed to be the oldest leather shoe ever found. The precise identity of the wine-swilling shoe-wearers remains a mystery, although they are believed to be the predecessors of the Kura-Araxes people, an early Transcaucasian group. Nevertheless, archaeologists who have been excavating the 7,500-square-foot-plus site since 2007 think they have an idea of how the wine was used. Because the press and jugs were discovered among dozens of grave sites, the archaeologists believe the wine may have played a ceremonial role. "This wine wasn't used to unwind at the end of the day," Areshian said. The archaeologists believe wine-making for day-to-day consumption would have occurred outside the cave, although they have yet to find evidence for these activities. Still, they believe it is only a matter of time before someone does. "The fact that a fully developed wine production facility seems to have been preserved at this site strongly suggests that there are older, less well-developed instances of this technology, although these have so far not been found," Barnard said. Other foundations that contributed support to the excavation include the Steinmetz Foundation, the Boochever Family Trust, the Gfoeller Foundation Inc. and the Chitjian Family Foundation. The scientific-analytical part of the project also received support from the National Center for Research Resources, the National Science Foundation and the National Institutes of Health.
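The roughly 6,100-year age quoted above comes from radiocarbon dating, whose underlying arithmetic is simple exponential decay. The sketch below shows only that arithmetic; real radiocarbon dates also require calibration against tree-ring records, which is ignored here.

# Back-of-the-envelope radiocarbon arithmetic for a sample about 6,100 years old.
# Calibration against tree-ring curves, essential in practice, is omitted.
import math

HALF_LIFE = 5730.0                       # carbon-14 half-life in years
decay_const = math.log(2) / HALF_LIFE

age = 6100.0
remaining = math.exp(-decay_const * age)
print(f"Fraction of original C-14 left after {age:.0f} yr: {remaining:.2f}")   # about 0.48

measured_fraction = 0.48                 # hypothetical measurement
print(f"Implied age: {-math.log(measured_fraction) / decay_const:.0f} yr")     # about 6,070 yr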

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Los Angeles.

Journal Reference:

1. Hans Barnard, Alek N. Dooley, Gregory Areshian, Boris Gasparyan, Kym F. Faull. Chemical evidence for wine production around 4000 BCE in the Late Chalcolithic Near Eastern highlands. Journal of Archaeological Science, 2010; DOI: 10.1016/j.jas.2010.11.012

http://www.sciencedaily.com/releases/2011/01/110111133236.htm


How Human Vision Perceives Rapid Changes: Brain Predicts Consequences of Eye Movements Based on What We See Next

Our eyes jump rapidly about three times each second to capture new visual information, and with each jump a new view of the world falls onto the retina -- a layer of visual receptors on the back of the eye. However, we do not experience this jerky sequence of images; rather, we see a stable world. (Credit: iStockphoto/Erik Reis) ScienceDaily (Jan. 10, 2011) — A team of researchers has demonstrated that the brain predicts consequences of our eye movements based on what we see next. The findings, which appear in the journal Nature Neuroscience, have implications for understanding human attention and applications to robotics. The study was conducted by researchers at University Paris Descartes, New York University's Department of Psychology, and Ludwig-Maximilian University in Munich. Our eyes jump rapidly about three times each second to capture new visual information, and with each jump a new view of the world falls onto the retina -- a layer of visual receptors on the back of the eye. However, we do not experience this jerky sequence of images; rather, we see a stable world. In the Nature Neuroscience study, the researchers examined how visual attention is redeployed just before the eye movement in order to keep track of targets and prepare for actions towards these targets' locations following the eye movement. In their experiments, the researchers asked human subjects to visually track a series of objects -- six grey squares located in different areas of the subjects' field of vision -- while they were making a sequence of rapid eye movements. To monitor the deployment of visual attention, the researchers had the subjects detect a tilted slash among vertical slashes presented at only one of those six locations. The researchers gauged the subjects' ability to detect the orientation of the tilted slash as a way of monitoring which locations received more attention just before the eye movement. Their results showed that just before the eyes move to a new location, attention is drawn to the targets of interest and also shifted to the locations that the targets will have once the eyes have moved. This process speeds up subsequent eye movements to those targets.


"Our results show that shifts of visual attention precede rapid eye movements, improving accuracy in identifying objects in the visual field and speeding our future actions to those objects," explained Martin Rolfs, one of the study's co-authors and a post-doctoral fellow in NYU's Department of Psychology.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by New York University.

Journal Reference:

1. Martin Rolfs, Donatas Jonikaitis, Heiner Deubel, Patrick Cavanagh. Predictive remapping of attention across eye movements. Nature Neuroscience, 2010; DOI: 10.1038/nn.2711

http://www.sciencedaily.com/releases/2011/01/110110103737.htm


Species Loss Tied to Ecosystem Collapse and Recovery

Ammonoid markers. A 50-million-year fossil record of ammonoids includes two kinds of the nautilus-like creatures, swimmers and floaters. At two points of mass extinction, the swimming ammonoids disappear completely from the fossil record. (Credit: Image courtesy of Brown University) ScienceDaily (Jan. 12, 2011) — Geologists at Brown University and the University of Washington have a cautionary tale: Lose enough species in the oceans, and the entire ecosystem could collapse. Looking at two of the greatest mass extinctions in Earth's history, the scientists attribute the ecosystems' collapse to a loss in the variety of species sharing the same space. It took up to 10 million years after the mass extinctions for the ecosystem to stabilize. The findings appear in Geology. The world's oceans are under siege. Conservation biologists regularly note the precipitous decline of key species, such as cod, bluefin tuna, swordfish and sharks. Lose enough of these top-line predators (among other species), and the fear is that the oceanic web of life may collapse. In a new paper in Geology, researchers at Brown University and the University of Washington used a group of marine creatures similar to today's nautilus to examine the collapse of marine ecosystems that coincided with two of the greatest mass extinctions in the Earth's history. They attribute the ecosystems' collapse to a loss of enough species occupying the same space in the oceans, called "ecological redundancy." While the term is not new, the paper marks the first time that a loss of ecological redundancy is directly blamed for a marine ecosystem's collapse in the fossil record. Just as ominously, the authors write that it took up to 10 million years after the mass extinctions for enough variety of species to repopulate the ocean -- restoring ecological redundancy -- for the ecosystem to stabilize. "It's definitely a cautionary tale because we know it's happened at least twice before," said Jessica Whiteside, assistant professor of geological sciences at Brown and the paper's lead author. "And you have long periods of time before you have reestablishment of ecological redundancy." If the theory is true, the implications could not be clearer today. According to the United Nations-sponsored report Global Biodiversity Outlook 2, the population of nearly one-third of marine species that were tracked had declined over the three decades that ended in 2000. The numbers were the same for land-based species. "In effect, we are currently responsible for the sixth major extinction event in the history of the Earth, and the greatest since the dinosaurs disappeared, 65 million years ago," the 2006 report states. Whiteside and co-author Peter Ward studied mass extinctions that ended the Permian period 250 million years ago and another that brought the Triassic to a close roughly 200 million years ago. Both periods are generally believed to have ended with global spasms of volcanic activity. The abrupt change in climate stemming from the volcanism, notably a spike in greenhouse gases in the atmosphere, decimated species on land and in the oceans, wiping out approximately 90 percent of existing marine species in the Permian-Triassic event and 72 percent in the Triassic-Jurassic. The widespread loss of marine life and the abrupt change in global climate caused the carbon cycle, a broad indicator of life and death and outside influences in the oceans, to fluctuate wildly. The authors noted these "chaotic carbon episodes" and their effects on biodiversity by studying carbon isotopes spanning these periods. The researchers further documented species collapse in the oceans by compiling a 50-million-year fossil record of ammonoids, predatory squidlike creatures that lived inside coiled shells, found embedded in rocks throughout western Canada. The pair found that two general types of ammonoids, those that could swim around and pursue prey and those that simply floated throughout the ocean, suffered major losses. The fossil record after the end-Permian and end-Triassic mass extinctions shows a glaring absence of swimming ammonoids, an absence that is interpreted as a loss of ecological redundancy because swimming ammonoids compete with other active predators, including fish. "It means that during these low-diversity times, there are only one or two (ammonoids) taxa that are performing. It's a much more simplified food chain," Whiteside noted. Only when the swimming ammonoids reappear alongside their floating brethren does the carbon isotope record stabilize and the ocean ecosystem fully recover, the authors report. "That's when we say ecological redundancy is reestablished," Whiteside said. "The swimming ammonoids have fulfilled that trophic role." The U.S. National Science Foundation and the NASA Astrobiology Institute funded the research.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Brown University.

Journal Reference:

1. J. H. Whiteside, P. D. Ward. Ammonoid diversity and disparity track episodes of chaotic carbon cycling during the early Mesozoic. Geology, 2011; 39 (2): 99 DOI: 10.1130/G31401.1

http://www.sciencedaily.com/releases/2011/01/110110103834.htm


New Glass Stronger and Tougher Than Steel

Micrograph of deformed notch in palladium-based metallic glass shows extensive plastic shielding of an initially sharp crack. Inset is a magnified view of a shear offset (arrow) developed during plastic sliding before the crack opened. (Credit: Image courtesy of Ritchie and Demetriou) ScienceDaily (Jan. 11, 2011) — Glass stronger and tougher than steel? A new type of damage-tolerant metallic glass, demonstrating a strength and toughness beyond that of any known material, has been developed and tested by a collaboration of researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) and the California Institute of Technology. What's more, even better versions of this new glass may be on the way. "These results mark the first use of a new strategy for metallic glass fabrication and we believe we can use it to make glass that will be even stronger and more tough," says Robert Ritchie, a materials scientist who led the Berkeley contribution to the research. The new metallic glass is a microalloy featuring palladium, a metal with a high "bulk-to-shear" stiffness ratio that counteracts the intrinsic brittleness of glassy materials. "Because of the high bulk-to-shear modulus ratio of palladium-containing material, the energy needed to form shear bands is much lower than the energy required to turn these shear bands into cracks," Ritchie says. "The result is that glass undergoes extensive plasticity in response to stress, allowing it to bend rather than crack." Ritchie, who holds joint appointments with Berkeley Lab's Materials Sciences Division and the University of California (UC) Berkeley's Materials Science and Engineering Department, is one of the co-authors of a paper describing this research published in the journal Nature Materials. Co-authoring the Nature Materials paper were Marios Demetriou (who actually made the new glass), Maximilien Launey, Glenn Garrett, Joseph Schramm, Douglas Hofmann and William Johnson of Cal Tech, one of the pioneers in the field of metallic glass fabrication. Glassy materials have a non-crystalline, amorphous structure that makes them inherently strong but invariably brittle. Whereas the crystalline structure of metals can provide microstructural obstacles (inclusions, grain boundaries, etc.) that inhibit cracks from propagating, there's nothing in the amorphous structure of a glass to stop crack propagation. The problem is especially acute in metallic glasses, where single shear bands can form and extend throughout the material leading to catastrophic failures at vanishingly small strains. In earlier work, the Berkeley-Cal Tech collaboration fabricated a metallic glass, dubbed "DH3," in which the propagation of cracks was blocked by the introduction of a second, crystalline phase of the metal. This
crystalline phase, which took the form of dendritic patterns permeating the amorphous structure of the glass, erected microstructural barriers to prevent an opened crack from spreading. In this new work, the collaboration has produced a pure glass material whose unique chemical composition acts to promote extensive plasticity through the formation of multiple shear bands before the bands turn into cracks. "Our game now is to try and extend this approach of inducing extensive plasticity prior to fracture to other metallic glasses through changes in composition," Ritchie says. "The addition of the palladium provides our amorphous material with an unusual capacity for extensive plastic shielding ahead of an opening crack. This promotes a fracture toughness comparable to those of the toughest materials known. The rare combination of toughness and strength, or damage tolerance, extends beyond the benchmark ranges established by the toughest and strongest materials known." The initial samples of the new metallic glass were microalloys of palladium with phosphorous, silicon and germanium that yielded glass rods approximately one millimeter in diameter. Adding silver to the mix enabled the Cal Tech researchers to expand the thickness of the glass rods to six millimeters. The size of the metallic glass is limited by the need to rapidly cool or "quench" the liquid metals for the final amorphous structure. "The rule of thumb is that to make a metallic glass we need to have at least five elements so that when we quench the material, it doesn't know what crystal structure to form and defaults to amorphous," Ritchie says. The new metallic glass was fabricated by co-author Demetriou at Cal Tech in the laboratory of co-author Johnson. Characterization and testing was done at Berkeley Lab by Ritchie's group. "Traditionally strength and toughness have been mutually exclusive properties in materials, which makes these new metallic glasses so intellectually exciting," Ritchie says. "We're bucking the trend here and pushing the envelope of the damage tolerance that's accessible to a structural metal." The characterization and testing research at Berkeley Lab was funded by DOE's Office of Science. The fabrication work at Cal Tech was funded by the National Science Foundation.
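Ritchie's point about the "bulk-to-shear" stiffness ratio is commonly expressed through the Pugh ratio B/G: materials with a high ratio tend to deform by shear rather than crack. The comparison below only illustrates that rule of thumb; the moduli are rough handbook values for pure palladium and for fused silica (an archetypal brittle glass), not measurements of the new alloy, and 1.75 is the conventional Pugh threshold.

# Illustrative Pugh-ratio comparison: a high bulk-to-shear modulus ratio (B/G)
# favours shear-band plasticity over cracking. Rough handbook values, in GPa.
materials = {
    "palladium":    (180.0, 44.0),   # (bulk modulus B, shear modulus G)
    "fused silica": (37.0, 31.0),    # brittle oxide glass, for contrast
}

PUGH_THRESHOLD = 1.75                # B/G above ~1.75 is commonly read as ductile

for name, (B, G) in materials.items():
    ratio = B / G
    verdict = "ductile-leaning" if ratio > PUGH_THRESHOLD else "brittle-leaning"
    print(f"{name}: B/G = {ratio:.2f} ({verdict})")
# palladium: B/G = 4.09 (ductile-leaning); fused silica: B/G = 1.19 (brittle-leaning)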

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

1. Marios D. Demetriou, Maximilien E. Launey, Glenn Garrett, Joseph P. Schramm, Douglas C. Hofmann, William L. Johnson, Robert O. Ritchie. A damage-tolerant glass. Nature Materials, 2011; DOI: 10.1038/nmat2930

http://www.sciencedaily.com/releases/2011/01/110110121709.htm


Hubble Zooms in on a Space Oddity

In this image by NASA's Hubble Space Telescope, an unusual, ghostly green blob of gas appears to float near a normal-looking spiral galaxy. The bizarre object, dubbed Hanny's Voorwerp (Hanny's Object in Dutch), is the only visible part of a 300,000-light-year-long streamer of gas stretching around the galaxy, called IC 2497. The greenish Voorwerp is visible because a searchlight beam of light from the galaxy's core illuminated it. This beam came from a quasar, a bright, energetic object that is powered by a black hole. The quasar may have turned off about 200,000 years ago. (Credit: NASA, ESA, W. Keel (University of Alabama), and the Galaxy Zoo Team) ScienceDaily (Jan. 11, 2011) — One of the strangest space objects ever seen is being scrutinized by the penetrating vision of NASA's Hubble Space Telescope. A mysterious, glowing green blob of gas is floating in space near a spiral galaxy. Hubble uncovered delicate filaments of gas and a pocket of young star clusters in the giant object, which is the size of our Milky Way galaxy. The Hubble revelations are the latest finds in an ongoing probe of Hanny's Voorwerp (Hanny's Object in Dutch), named for Hanny van Arkel, the Dutch teacher who discovered the ghostly structure in 2007 while participating in the online Galaxy Zoo project. Galaxy Zoo enlists the public to help classify more than a million galaxies catalogued in the Sloan Digital Sky Survey. The project has expanded to include the Hubble Zoo, in which the public is asked to assess tens of thousands of galaxies in deep imagery from the Hubble Space Telescope. In the sharpest view yet of Hanny's Voorwerp, Hubble's Wide Field Camera 3 and Advanced Camera for Surveys have uncovered star birth in a region of the green object that faces the spiral galaxy IC 2497, located about 650 million light-years from Earth. Radio observations have shown an outflow of gas arising from the galaxy's core. The new Hubble images reveal that the galaxy's gas is interacting with a small region of Hanny's Voorwerp, which is collapsing and forming stars. The youngest stars are a couple of million years old.


"The star clusters are localized, confined to an area that is over a few thousand light-years wide," explains astronomer William Keel of the University of Alabama in Tuscaloosa, leader of the Hubble study. "The region may have been churning out stars for several million years. They are so dim that they have previously been lost in the brilliant light of the surrounding gas." Recent X-ray observations have revealed why Hanny's Voorwerp caught the eye of astronomers. The galaxy's rambunctious core produced a quasar, a powerful light beacon powered by a black hole. The quasar shot a broad beam of light in Hanny's Voorwerp's direction, illuminating the gas cloud and making it a space oddity. Its bright green color is from glowing oxygen. "We just missed catching the quasar, because it turned off no more than 200,000 years ago, so what we're seeing is the afterglow from the quasar," Keel says. "This implies that it might flicker on and off, which is typical of quasars, but we've never seen such a dramatic change happen so rapidly." The quasar's outburst also may have cast a shadow on the blob. This feature gives the illusion of a gaping hole about 20,000 light-years wide in Hanny's Voorwerp. Hubble reveals sharp edges around the apparent opening, suggesting that an object close to the quasar may have blocked some of the light and projected a shadow on Hanny's Voorwerp. This phenomenon is similar to a fly on a movie projector lens casting a shadow on a movie screen. Radio studies have revealed that Hanny's Voorwerp is not just an island gas cloud floating in space. The glowing blob is part of a long, twisting rope of gas, or tidal tail, about 300,000 light-years long that wraps around the galaxy. The only optically visible part of the rope is Hanny's Voorwerp. The illuminated object is so huge that it stretches from 44,000 light-years to 136,000 light-years from the galaxy's core. The quasar, the outflow of gas that instigated the star birth, and the long, gaseous tidal tail point to a rough life for IC 2497. "The evidence suggests that IC 2497 may have merged with another galaxy about a billion years ago," Keel explains. "The Hubble images show in exquisite detail that the spiral arms are twisted, so the galaxy hasn't completely settled down." In Keel's scenario, the merger expelled the long streamer of gas from the galaxy and funneled gas and stars into the center, which fed the black hole. The engorged black hole then powered the quasar, which launched two cones of light. One light beam illuminated part of the tidal tail, now called Hanny's Voorwerp. About a million years ago, shock waves produced glowing gas near the galaxy's core and blasted it outward. The glowing gas is seen only in Hubble images and spectra, Keel says. The outburst may have triggered star formation in Hanny's Voorwerp. Less than 200,000 years ago, the quasar dropped in brightness by 100 times or more, leaving an ordinary-looking core. New images of the galaxy's dusty core from Hubble's Space Telescope Imaging Spectrograph show an expanding bubble of gas blown out of one side of the core, perhaps evidence of the sputtering quasar's final gasps. The expanding ring of gas is still too small for ground-based telescopes to detect. "This quasar may have been active for a few million years, which perhaps indicates that quasars blink on and off on timescales of millions of years, not the 100 million years that theory had suggested," Keel says. 
He added that the quasar could light up again if more material is dumped around the black hole. Keel is presenting his results on Jan. 10, 2011, at the American Astronomical Society meeting in Seattle, Wash.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Space Telescope Science Institute.

http://www.sciencedaily.com/releases/2011/01/110110090425.htm


NASA Radar Reveals Features on Asteroid

A radar image of asteroid 2010 JL33, generated from data taken by NASA's Goldstone Solar System Radar on Dec. 11 and 12, 2010. (Credit: NASA/JPL-Caltech) ScienceDaily (Jan. 12, 2011) — Radar imaging at NASA's Goldstone Solar System Radar in the California desert on Dec. 11 and 12, 2010, revealed defining characteristics of recently discovered asteroid 2010 JL33. The images have been made into a short movie that shows the celestial object's rotation and shape. A team led by Marina Brozovic, a scientist at NASA's Jet Propulsion Laboratory in Pasadena, Calif., made the radar observations. "Asteroid 2010 JL33 was discovered on May 6 by the Mount Lemmon Survey in Arizona, but prior to the radar observations, little was known about it," said Lance Benner, a scientist at JPL. "By using the Goldstone Solar System Radar, we can obtain detailed images that reveal the asteroid's size, shape and rotational rate, improve its orbit, and even make out specific surface features." Data from the radar reveal 2010 JL33 to be an irregular, elongated object roughly 1.8 kilometers (1.1 miles) wide that rotates once every nine hours. The asteroid's most conspicuous feature is a large concavity that may be an impact crater. The images in the movie span about 90 percent of one rotation. At the time it was imaged, the asteroid was about 22 times the distance between Earth and the moon (8.5 million kilometers, or 5.3 million miles). At that distance, the radio signals from the Goldstone radar dish used to make the images took 56 seconds to make the roundtrip from Earth to the asteroid and back to Earth again. The 70-meter (230-foot) Goldstone antenna in California's Mojave Desert, part of NASA's Deep Space Network, is one of only two facilities capable of imaging asteroids with radar. The other is the National Science Foundation's 1,000-foot-diameter (305 meters) Arecibo Observatory in Puerto Rico. The capabilities of the two instruments are complementary. The Arecibo radar is about 20 times more sensitive, can see about one-third of the sky, and can detect asteroids about twice as far away. Goldstone is fully steerable, can see about 80 percent of the sky, can track objects several times longer per day, and can image asteroids at finer spatial resolution. To date, Goldstone and Arecibo have observed 272 near-Earth asteroids and 14 comets with radar. JPL manages the Goldstone Solar System Radar and the Deep Space Network for NASA. More information about asteroid radar research is at: http://echo.jpl.nasa.gov/ . More information about the Deep Space Network is at: http://deepspace.jpl.nasa.gov/dsn .
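The 56-second figure is straightforward light-travel arithmetic, using the Earth-asteroid distance given in the article and the speed of light:

# Round-trip travel time for the radar signal: out to the asteroid and back at c.
distance_km = 8.5e6               # Earth-asteroid distance at the time of imaging
c_km_s = 299_792.458              # speed of light in km/s
print(2 * distance_km / c_km_s)   # about 56.7 seconds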

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.

http://www.sciencedaily.com/releases/2011/01/110112143327.htm


Virus Killer Gets Supercharged: Discovery Greatly Improves Common Disinfectant

From left, Professor Qilin Li, graduate student Michael Liga, alumna Huma Jafry and Professor Andrew Barron have published a paper outlining their method to dramatically improve the effectiveness of a common disinfectant. (Credit: Jeff Fitlow/Rice University) ScienceDaily (Jan. 12, 2011) — A simple technique to make a common virus-killing material significantly more effective is a breakthrough from the Rice University labs of Andrew Barron and Qilin Li. Rather than trying to turn the process into profit, the researchers have put it into the public domain. They hope wide adoption will save time, money and perhaps even lives. The Rice professors and their team reported in Environmental Science and Technology, an American Chemical Society journal, that adding silicone to titanium dioxide, a common disinfectant, dramatically increases its ability to degrade aerosol- and water-borne viruses. "We're taking a nanoparticle that everyone's been using for years and, with a very simple treatment, we've improved its performance by more than three times without any real cost," said Barron, Rice's Charles W. Duncan Jr.-Welch Professor of Chemistry and a professor of materials science. Barron described himself as a "serial entrepreneur," but saw the discovery's potential benefits to society as being far more important than any thoughts of commercialization. Barron said titanium dioxide is used to kill viruses and bacteria and to decompose organics via photocatalysis (exposure to light, usually ultraviolet). The naturally occurring material is also used as a pigment in paints, in sunscreen and even as food coloring. "If you're using titanium dioxide, just take it, treat it for a few minutes with silicone grease or silica or silicic acid, and you will increase its efficiency as a catalyst," he said. Barron's lab uses "a pinch" of silicon dioxide to treat a commercial form of titanium dioxide called P25. "Basically, we're taking white paint pigment and functionalizing it with sand," he said. Disinfecting a volume of water that once took an hour would now take minutes because of the material's enhanced catalytic punch, Barron said. "We chose the Yangtze River as our baseline for testing, because it's considered the most polluted river in the world, with the highest viral content," he said. "Even at that level of viral contamination, we're getting complete destruction of the viruses in water that matches the level of pollution in the Yangtze." Using a smaller amount of treated P25 takes longer but works just as well, he said. "Either way, it's green and it's cheap."


The team started modifying titanium dioxide two years ago. Li, an assistant professor in civil and environmental engineering whose specialties include water and wastewater treatment, approached Barron to help search for new photocatalytic nanomaterials to disinfect drinking water. The revelation came when students in Barron's lab heated titanium dioxide, but it wasn't quite the classic "aha!" moment. Graduate student and co-author Michael Liga saw the data showing greatly enhanced performance and asked fellow graduate student Huma Jafry what she had done. Jafry, the paper's first author, said, "I didn't do anything," Barron recalled. When Barron questioned Jafry, who has since earned her doctorate, he discovered she used silicone grease to seal the vessel of P25 before heating it. Subsequent testing with nonsilicone grease revealed no change in P25's properties, whether the sample was heated or not. Remarkably, Barron said, further work with varying combinations of titanium dioxide and silicon dioxide found the balance between the two at the time of the discovery was nearly spot-on for maximum impact. Barron said binding just the right amount of silica to P25 creates an effect at the molecular level called band bending. "Because the silicon-oxygen bond is very strong, you can think of it as a dielectric," he said. "If you put a dielectric next to a semiconductor, you bend the conduction and valence bands. And therefore, you shift the absorption of the ultraviolet (used to activate the catalyst)." Bending the bands creates a path for electrons freed by the UV to go forth and react with water to create hydroxyl radicals, an oxidant responsible for contaminant degradation and the most significant reactive agent created by titanium dioxide. "If your conduction band bends to the degree that electrons find it easier to pop out and do something else, your process becomes more efficient," Barron said. Li saw great potential for enhanced P25. In developed countries, photo reactors designed to take advantage of the new material in centralized treatment plants could more efficiently kill bacteria and inactivate viruses in water supplies while minimizing the formation of harmful disinfection byproducts, she said. But the greatest impact may be in developing nations where water is typically disinfected through the SODIS method, in which water is exposed to sunlight for its heat and ultraviolet radiation. "In places where they don't have treatment plants or even electricity, the SODIS method is great, but it takes a very long time to make water safe to drink," Li said. "Our goal is to incorporate this photocatalyst so that instead of taking six hours, it only takes 15 minutes." Barron wants to spread the good news. "Here's a way of taking what is already a very good environmental catalyst and making it better," he said. "It works consistently, and we've done batch after batch after batch of it now. The methodology in the paper is the one we routinely use. As soon as we buy P25, we treat it." The Robert A. Welch Foundation, the U.S. Navy and the National Science Foundation Center for Biological and Environmental Nanotechnology supported the research.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Rice University, via EurekAlert!, a service of AAAS.

Journal Reference:

1. Huma R. Jafry, Michael V. Liga, Qilin Li, Andrew R. Barron. Simple Route to Enhanced Photocatalytic Activity of P25 Titanium Dioxide Nanoparticles by Silica Addition. Environmental Science & Technology, 2010; 101231104927031 DOI: 10.1021/es102749e

http://www.sciencedaily.com/releases/2011/01/110112080912.htm


Earth Is Twice as Dusty as in 19th Century, Research Shows

Desert dust and climate influence each other directly and indirectly through a host of intertwined systems. (Credit: Image courtesy of Cornell University)

ScienceDaily (Jan. 10, 2011) — If the house seems dustier than it used to be, it may not be a reflection on your housekeeping skills. The amount of dust in the Earth's atmosphere has doubled over the last century, according to a new study, and the dramatic increase is influencing climate and ecology around the world. The study, led by Natalie Mahowald, associate professor of earth and atmospheric sciences, used available data and computer modeling to estimate the amount of desert dust, or soil particles in the atmosphere, throughout the 20th century. It's the first study to trace the fluctuation of a natural (not human-caused) aerosol around the globe over the course of a century. Mahowald presented the research at the fall meeting of the American Geophysical Union in San Francisco Dec. 13. Desert dust and climate influence each other directly and indirectly through a host of intertwined systems. Dust limits the amount of solar radiation that reaches the Earth, for example, a factor that could mask the warming effects of increasing atmospheric carbon dioxide. It also can influence clouds and precipitation, leading to droughts, which, in turn, lead to desertification and more dust. Ocean chemistry is also intricately involved. Dust is a major source of iron, which is vital for plankton and other organisms that draw carbon out of the atmosphere. To measure fluctuations in desert dust over the century, the researchers gathered existing data from ice cores, lake sediment and coral, each of which contains information about past concentrations of desert dust in the region. They then linked each sample with its likely source region and calculated the rate of dust deposition over time. Applying components of a computer modeling system known as the Community Climate System Model, the researchers reconstructed the influence of desert dust on temperature, precipitation, ocean iron deposition and terrestrial carbon uptake over time. Among their results, the researchers found that regional changes in temperature and precipitation caused a global reduction in terrestrial carbon uptake of 6 parts per million (ppm) over the 20th century. The model also showed that dust deposited in oceans increased carbon uptake from the atmosphere by 6 percent, or 4 ppm, over the same time period.
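For scale, the ppm figures quoted above can be translated into approximate masses of carbon using the widely used rule of thumb that 1 ppm of atmospheric CO2 corresponds to roughly 2.13 gigatonnes of carbon; the conversion factor is a standard approximation, not a number from the study.

```python
# Convert the study's ppm figures to approximate masses of carbon.
GT_C_PER_PPM = 2.13   # ~2.13 GtC of carbon per 1 ppm of atmospheric CO2 (standard approximation)

land_uptake_reduction_ppm = 6   # reduced terrestrial carbon uptake over the 20th century
ocean_uptake_increase_ppm = 4   # extra ocean uptake from iron carried in deposited dust

print(f"land uptake reduction : ~{land_uptake_reduction_ppm * GT_C_PER_PPM:.0f} GtC")
print(f"ocean uptake increase : ~{ocean_uptake_increase_ppm * GT_C_PER_PPM:.0f} GtC")
```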


While the majority of research related to aerosol impacts on climate is focused on anthropogenic aerosols (those directly emitted by humans through combustion), Mahowald said, the study highlights the important role of natural aerosols as well. "Now we finally have some information on how the desert dust is fluctuating. This has a really big impact for the understanding of climate sensitivity," she said. It also underscores the importance of gathering more data and refining the estimates. "Some of what we're doing with this study is highlighting the best available data. We really need to look at this more carefully. And we really need more paleodata records," she said. Meanwhile, the study is also notable for the variety of fields represented by its contributors, she said, which ranged from marine geochemistry to computational modeling. "It was a fun study to do because it was so interdisciplinary. We're pushing people to look at climate impacts in a more integrative fashion."

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Cornell University. The original article was written by Lauren Gold.

http://www.sciencedaily.com/releases/2011/01/110110055748.htm


'Thirdhand Smoke' May Be Bigger Health Hazard Than Previously Believed

Scientists are reporting that so-called "thirdhand smoke" -- the invisible remains of cigarette smoke that deposits on carpeting, clothing, furniture and other surfaces -- may be even more of a health hazard than previously believed. (Credit: iStockphoto/Rosemarie Gearhart)

ScienceDaily (Jan. 12, 2011) — Scientists are reporting that so-called "thirdhand smoke" -- the invisible remains of cigarette smoke that deposits on carpeting, clothing, furniture and other surfaces -- may be even more of a health hazard than previously believed. The study, published in the ACS journal Environmental Science & Technology, extends the known health risks of tobacco among people who do not smoke but encounter the smoke exhaled by smokers or released by smoldering cigarette butts. Yael Dubowski and colleagues note that thirdhand smoke is a newly recognized contributor to the health risks of tobacco and indoor air pollution. Studies show that nicotine in thirdhand smoke can react with ozone in indoor air, on surfaces like clothing and furniture, to form other pollutants. Exposure to them can occur to babies crawling on the carpet, people napping on the sofa, or people eating food tainted by thirdhand smoke. In an effort to learn more about thirdhand smoke, the scientists studied interactions between nicotine and indoor air on a variety of different materials, including cellulose (a component of wood furniture), cotton, and paper to simulate typical indoor surfaces. They found that nicotine interacts with ozone in indoor air to form potentially toxic pollutants on these surfaces. "Given the toxicity of some of the identified products and that small particles may contribute to adverse health effects, the present study indicates that exposure to [thirdhand smoke] may pose additional health risks," the article notes. The authors acknowledge funding from the United States-Israel Binational Science Foundation (BSF) and the German-Israeli Foundation for Scientific Research and Development (GIF).

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by American Chemical Society, via EurekAlert!, a service of AAAS.

Journal Reference:

1. Lauren M. Petrick, Alona Svidovsky, Yael Dubowski. Thirdhand Smoke: Heterogeneous Oxidation of Nicotine and Secondary Aerosol Formation in the Indoor Environment. Environmental Science & Technology, 2011; 45 (1): 328 DOI: 10.1021/es102060v

http://www.sciencedaily.com/releases/2011/01/110112132138.htm


Polymer Membranes With Molecular-Sized Channels That Assemble Themselves

Image (a) is an AFM image of a polymer membrane whose dark spots correspond to organic nanotubes. (b) is a TEM image showing a sub-channeled membrane with the organic nanotubes circled in red. Inset shows a zoomed-in image of a single nanotube. (Credit: Image from Ting Xu)

ScienceDaily (Jan. 11, 2011) — Many futurists envision a world in which polymer membranes with molecular-sized channels are used to capture carbon, produce solar-based fuels, or desalinate sea water, among many other functions. This will require methods by which such membranes can be readily fabricated in bulk quantities. A technique representing a significant first step down that road has now been successfully demonstrated. Researchers with the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley have developed a solution-based method for inducing the self-assembly of flexible polymer membranes with highly aligned subnanometer channels. Fully compatible with commercial membrane-fabrication processes, this new technique is believed to be the first example of organic nanotubes fabricated into a functional membrane over macroscopic distances. "We've used nanotube-forming cyclic peptides and block co-polymers to demonstrate a directed co-assembly technique for fabricating subnanometer porous membranes over macroscopic distances," says Ting Xu, a polymer scientist who led this project. "This technique should enable us to generate porous thin films in the future where the size and shape of the channels can be tailored by the molecular structure of the organic nanotubes." Xu, who holds joint appointments with Berkeley Lab's Materials Sciences Division and the University of California Berkeley's Departments of Materials Sciences and Engineering, and Chemistry, is the lead author of a paper describing this work, which has been published in the journal ACS Nano. Co-authoring the paper with Xu were Nana Zhao, Feng Ren, Rami Hourani, Ming Tsang Lee, Jessica Shu, Samuel Mao, and Brett Helms, who is with the Molecular Foundry, a DOE nanoscience center hosted at Berkeley Lab. Channeled membranes are one of nature's most clever and important inventions. Membranes perforated with subnanometer channels line the exterior and interior of a biological cell, controlling -- by virtue of size -- the transport of essential molecules and ions into, through, and out of the cell. This same approach holds enormous potential for a wide range of human technologies, but the challenge has been finding a cost-effective means of orienting vertically-aligned subnanometer channels over macroscopic distances on flexible substrates. "Obtaining molecular level control over the pore size, shape, and surface chemistry of channels in polymer membranes has been investigated across many disciplines but has remained a critical bottleneck," Xu says.


"Composite films have been fabricated using pre-formed carbon nanotubes and the field is making rapid progess, however, it still presents a challenge to orient pre-formed nanotubes normal to the film surface over macroscopic distances." For their subnanometer channels, Xu and her research group used the organic nanotubes naturally formed by cyclic peptides -- polypeptide protein chains that connect at either end to make a circle. Unlike pre-formed carbon nanotubes, these organic nanotubes are "reversible," which means their size and orientation can be easily modified during the fabrication process. For the membrane, Xu and her collaborators used block copolymers -- long sequences or "blocks" of one type of monomer molecule bound to blocks of another type of monomer molecule. Just as cyclic peptides self-assemble into nanotubes, block copolymers self-assemble into well-defined arrays of nanostructures over macroscopic distances. A polymer covalently linked to the cyclic peptide was used as a "mediator" to bind together these two self-assembling systems "The polymer conjugate is the key," Xu says. "It controls the interface between the cyclic peptides and the block copolymers and synchronizes their self-assembly. The result is that nanotube channels only grow within the framework of the polymer membrane. When you can make everything work together this way, the process really becomes very simple." Xu and her colleagues were able to fabricate subnanometer porous membranes measuring several centimeters across and featuring high-density arrays of channels. The channels were tested via gas transport measurements of carbon dioxide and neopentane. These tests confirmed that permeance was higher for the smaller carbon dioxide molecules than for the larger molecules of neopentane. The next step will be to use this technique to make thicker membranes. "Theoretically, there are no size limitations for our technique so there should be no problem in making membranes over large area," Xu says. "We're excited because we believe this demonstrates the feasibility of synchronizing multiple self-assembly processes by tailoring secondary interactions between individual components. Our work opens a new avenue to achieving hierarchical structures in a multicomponent system simultaneously, which in turn should help overcome the bottleneck to achieving functional materials using a bottom-up approach." This research was supported by DOE's Office of Science and by the U.S. Army Research Office. Measurements were carried out on beamlines at Berkeley Lab's Advanced Light Source and at the Advanced Photon Source of Argonne National Laboratory.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

1. Ting Xu, Nana Zhao, Feng Ren, Rami Hourani, Ming Tsang Lee, Jessica Y. Shu, Samuel Mao, Brett A. Helms. Subnanometer Porous Thin Films by the Co-assembly of Nanotube Subunits and Block Copolymers. ACS Nano, 2011; : 110106141442030 DOI: 10.1021/nn103083t

http://www.sciencedaily.com/releases/2011/01/110111133226.htm


Eating Vegetables Gives Skin a More Healthy Glow Than the Sun, Study Shows

The face in the middle shows the woman's natural colour. The face on the left shows the effect of sun tanning, while the face on the right shows the effect of eating more carotenoids. Participants thought the carotenoid colour looked healthier. (Credit: Image courtesy of University of Nottingham)

ScienceDaily (Jan. 12, 2011) — New research suggests eating vegetables gives you a healthy tan. The study, led by Dr Ian Stephen at The University of Nottingham, showed that eating a healthy diet rich in fruit and vegetables gives you a more healthy golden glow than the sun.

The research, which showed that instead of heading for the sun, the best way to look good is to munch on carrots and tomatoes, has been published in the journal Evolution and Human Behavior.

Dr Ian Stephen, from the School of Psychology, University of Nottingham, Malaysia Campus, led the research as part of his PhD at the University of St Andrews and Bristol University. He said: "Most people think the best way to improve skin colour is to get a suntan, but our research shows that eating lots of fruit and vegetables is actually more effective."

Dr Stephen and his team in the Perception Lab found that people who eat more portions of fruit and vegetables per day have a more golden skin colour, thanks to substances called carotenoids. Carotenoids are antioxidants that help soak up damaging compounds produced by the stresses and strains of everyday living, especially when the body is combating disease. Responsible for the red colouring in fruit and vegetables such as carrots and tomatoes, carotenoids are important for our immune and reproductive systems.

Dr Stephen said: "We found that, given the choice between skin colour caused by suntan and skin colour caused by carotenoids, people preferred the carotenoid skin colour, so if you want a healthier and more attractive skin colour, you are better off eating a healthy diet with plenty of fruit and vegetables than lying in the sun."

Dr Stephen suggests that the study is important because evolution would favour individuals who choose to form alliances or mate with healthier individuals over unhealthy individuals.

Professor David Perrett, who heads the Perception Lab, said: "This is something we share with many other species. For example, the bright yellow beaks and feathers of many birds can be thought of as adverts showing how healthy a male bird is. What's more, females of these species prefer to mate with brighter, more coloured males. But this is the first study in which this has been demonstrated in humans."

While this study describes work in Caucasian faces, the paper also describes a study that suggests the effect may exist cross culturally, since similar preferences for skin yellowness were found in an African population. The work was funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and Unilever Research, and published with support from the Economic and Social Research Council (ESRC) and the British Academy and Wolfson Foundation.


See: http://perception.st-and.ac.uk/ for demos or to participate in face experiments.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Nottingham.

Journal Reference:

1. Ian D. Stephen, Vinet Coetzee, David I. Perrett. Carotenoid and melanin pigment coloration affect perceived human health. Evolution and Human Behavior, 2010; DOI: 10.1016/j.evolhumbehav.2010.09.003

http://www.sciencedaily.com/releases/2011/01/110111133224.htm


Coiled Nanowires May Hold Key to Stretchable Electronics

Zhu's research team has created the first coils of silicon nanowire on a substrate that can be stretched to more than double their original length, moving us closer to developing stretchable electronic devices. (Credit: Image courtesy of North Carolina State University) ScienceDaily (Jan. 12, 2011) — Researchers at North Carolina State University have created the first coils of silicon nanowire on a substrate that can be stretched to more than double their original length, moving us closer to incorporating stretchable electronic devices into clothing, implantable health-monitoring devices, and a host of other applications. "In order to create stretchable electronics, you need to put electronics on a stretchable substrate, but electronic materials themselves tend to be rigid and fragile," says Dr. Yong Zhu, one of the researchers who created the new nanowire coils and an assistant professor of mechanical and aerospace engineering at NC State. "Our idea was to create electronic materials that can be tailored into coils to improve their stretchability without harming the electric functionality of the materials." Other researchers have experimented with "buckling" electronic materials into wavy shapes, which can stretch much like the bellows of an accordion. However, Zhu says, the maximum strains for wavy structures occur at localized positions -- the peaks and valleys -- on the waves. As soon as the failure strain is reached at one of the localized positions, the entire structure fails. "An ideal shape to accommodate large deformation would lead to a uniform strain distribution along the entire length of the structure -- a coil spring is one such ideal shape," Zhu says. "As a result, the wavy materials cannot come close to the coils' degree of stretchability." Zhu notes that the coil shape is energetically favorable only for one-dimensional structures, such as wires. Zhu's team put a rubber substrate under strain and used very specific levels of ultraviolet radiation and ozone to change its mechanical properties, and then placed silicon nanowires on top of the substrate. The nanowires formed coils upon release of the strain. Other researchers have been able to create coils using freestanding nanowires, but have so far been unable to directly integrate those coils on a stretchable substrate. While the new coils' mechanical properties allow them to be stretched an additional 104 percent beyond their original length, their electric performance cannot hold reliably to such a large range, possibly due to factors like contact resistance change or electrode failure, Zhu says. "We are working to improve the reliability of the electrical performance when the coils are stretched to the limit of their mechanical stretchability, which is likely well beyond 100 percent, according to our analysis."


A paper describing the research was published online Dec. 28 by ACS Nano. The paper is co-authored by Zhu, NC State Ph.D. student Feng Xu and Wei Lu, an assistant professor at the University of Michigan. The research was funded by the National Science Foundation. NC State's Department of Mechanical and Aerospace Engineering is part of the university's College of Engineering.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by North Carolina State University.

Journal Reference:

1. Feng Xu, Wei Lu, Yong Zhu. Controlled 3D Buckling of Silicon Nanowires for Stretchable Electronics. ACS Nano, 2010; : 101228134259017 DOI: 10.1021/nn103189z

http://www.sciencedaily.com/releases/2011/01/110111141343.htm


NASA Tests New Propulsion System for Robotic Lander Prototype

The robotic lander prototype's propulsion system, shown during a hot-fire test. (Credit: Dynetics Corp.) ScienceDaily (Jan. 12, 2011) — NASA's Robotic Lunar Lander Development Project at Marshall Space Flight Center in Huntsville, Ala., has completed a series of hot fire tests and taken delivery of a new propulsion system for integration into a more sophisticated free-flying autonomous robotic lander prototype. The project is partnered with the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., to develop a new generation of small, smart, versatile robotic landers to achieve scientific and exploration goals on the surface of the moon and near-Earth asteroids. The new robotic lander prototype will continue to mature the development of a robotic lander capability by bringing online an autonomous flying test lander that will be capable of flying up to sixty seconds, testing the guidance, navigation and control system by demonstrating a controlled landing in a simulated low gravity environment. By the spring of 2011, the new prototype lander will begin flight tests at the U.S. Army's Redstone Arsenal Test Center in Huntsville, Ala. The prototype's new propulsion system consists of 12 small attitude control thrusters, three primary descent thrusters to control the vehicle's altitude, and one large "gravity-canceling" thruster which offsets a portion of the prototype's weight to simulate a lower gravity environment, like that of the moon and asteroids. The prototype uses a green propellant, hydrogen peroxide, in a stronger concentration of a solution commonly used in homes as a disinfectant. The by-products after use are water and oxygen. "The propulsion hardware acceptance test consisted of a series of tests that verified the performance of each thruster in the propulsion system," said Julie Bassler, Robotic Lunar Lander Development Project Manager. "The series culminated in a test that characterized the entire system by running a scripted set of thruster firings based on a flight scenario simulation." The propulsion system is currently at Teledyne Brown's manufacturing facility in Huntsville, Ala., for integration with the structure and avionics to complete the new robotic lander prototype. Dynetics Corp. developed the robotic lander prototype propulsion system under the management of the Von Braun Center for Science and Innovation both located in Huntsville, Ala. "This is the second phase of a robotic lander prototype development program," said Bassler. "Our initial "cold gas" prototype was built, delivered and successfully flight tested at the Marshall Center in a record nine


months, providing a physical and tangible demonstration of capabilities related to the critical terminal descent and landing phases for an airless body mission." The first robotic lander prototype has a record flight time of ten seconds and descended from three meters altitude. This first robotic lander prototype began flight tests in September 2009 and has completed 142 flight tests, providing a platform to develop and test algorithms, sensors, avionics, ground and flight software and ground systems to support autonomous landings on airless bodies, where aero-braking and parachutes are not options. For more photos of the hardware visit: http://www.nasa.gov/roboticlander

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA.

http://www.sciencedaily.com/releases/2011/01/110111081105.htm


'Superstreet' Traffic Design Improves Travel Time, Safety

"Superstreet" traffic designs result in faster travel times and significantly fewer accidents, according to the new study. (Credit: Image courtesy of North Carolina State University) ScienceDaily (Jan. 12, 2011) — The so-called "superstreet" traffic design results in significantly faster travel times, and leads to a drastic reduction in automobile collisions and injuries, according to North Carolina State University researchers who have conducted the largest-ever study of superstreets and their impacts. Superstreets are surface roads, not freeways. It is defined as a thoroughfare where the left-hand turns from side streets are re-routed, as is traffic from side streets that needs to cross the thoroughfare. In both instances, drivers are first required to make a right turn and then make a U-turn around a broad median. While this may seem time-consuming, the study shows that it actually results in a significant time savings since drivers are not stuck waiting to make left-hand turns or for traffic from cross-streets to go across the thoroughfare. "The study shows a 20 percent overall reduction in travel time compared to similar intersections that use conventional traffic designs," says Dr. Joe Hummer, professor of civil, construction and environmental engineering at NC State and one of the researchers who conducted the study. "We also found that superstreet intersections experience an average of 46 percent fewer reported automobile collisions -- and 63 percent fewer collisions that result in personal injury." The researchers assessed travel time at superstreet intersections as the amount of time it takes a vehicle to pass through an intersection from the moment it reaches the intersection -- whether traveling left, right or straight ahead. The travel-time data were collected from three superstreets located in eastern and central North Carolina, all of which have traffic signals. The superstreet collision data were collected from 13 superstreets located across North Carolina, none of which have traffic signals. The superstreet concept has been around for over 20 years, but little research had been done to assess its effectiveness under real-world conditions. The NC State study is the largest analysis ever performed of the impact of superstreets in real traffic conditions. A paper on the travel time research is being presented Jan. 24 at the Transportation Research Board Annual Meeting in Washington, D.C. The paper is co-authored by Hummer, former NC State graduate students Rebecca Haley and Sarah Ott, and three researchers from NC State's Institute for Transportation Research and Education: Robert Foyle, associate director; Christopher Cunningham, senior research associate; and Bastian Schroeder, research associate.


The collision research was part of an overarching report of the study submitted to the North Carolina Department of Transportation (NCDOT) last month, and is the subject of a forthcoming paper. The study was funded by NCDOT. NC State's Department of Civil, Construction and Environmental Engineering is part of the university's College of Engineering.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by North Carolina State University.

http://www.sciencedaily.com/releases/2011/01/110110103741.htm


Planck's New View of the Cosmic Theater

Planck zeros in on anomalous emission in Rho Ophiuchi. The colour composite of the Rho Ophiuchi molecular cloud highlights the correlation between the anomalous microwave emission, most likely due to miniature spinning dust grains observed at 30 GHz (shown here in red), and the thermal dust emission, observed at 857 GHz (shown here in green). The complex structure of knots and filaments, visible in this cloud of gas and dust, represents striking evidence for the ongoing processes of star formation. The composite image (right) is based on three individual maps (left) taken at 0.4 GHz from Haslam et al. (1982) and at 30 GHz and 857 GHz by Planck, respectively. The size of the image is about 5 degrees on a side, which is about 10 times the apparent diameter of the full Moon. (Credit: ESA/Planck Collaboration)

ScienceDaily (Jan. 12, 2011) — The first scientific results from ESA's Planck mission were released on Jan. 11, 2011 in Paris. The findings focus on the coldest objects in the Universe, from within our Galaxy to the distant reaches of space. If William Shakespeare were an astronomer living today, he might write that "All the Universe is a stage, and all the galaxies merely players." Planck is bringing us new views of both the stage and players, revealing the drama of the evolution of our Universe. The basis of many of the new results is the Planck mission's 'Early Release Compact Source Catalogue', the equivalent of a cast list. Drawn from Planck's continuing survey of the entire sky at millimetre and submillimetre wavelengths, the catalogue contains thousands of very cold, individual sources which the scientific community is now free to explore. "This is a great moment for Planck. Until now, everything has been about collecting data and showing off their potential. Now, at last, we can begin the discoveries," says Jan Tauber, ESA Project Scientist for Planck. We can think of the Universe as a stage on which the great cosmic drama plays out over three acts. Visible-light telescopes see little more than the final act: the tapestry of galaxies around us. But by making measurements at wavelengths between the infrared and radio, Planck is able to work back in time and show us the preceding two acts. The results just released contain important new information about the middle act, when the galaxies were being assembled.

Planck shows galaxy formation taking place

Planck has found evidence for an otherwise invisible population of galaxies shrouded in dust billions of years in the past, which formed stars at rates some 10-1000 times higher than we see in our own Galaxy today. Measurements of this population had never been made at these wavelengths before. "This is a first step, we are just learning how to work with these data and extract the most information," says Jean-Loup Puget, CNRS-Université Paris Sud, Orsay, France. Eventually, Planck will show us the best views yet of the Universe's first act: the formation of the first large-scale structures in the Universe, where the galaxies were later born. These structures are traced by the cosmic microwave background radiation, released just 380 000 years after the Big Bang, as the Universe was cooling.


However, in order to see it properly, contaminating emission from a whole host of foreground sources must first be removed. These include the individual objects contained in the Early Release Compact Source Catalogue, as well as various sources of diffuse emission. Now, an important step towards removing this contamination has been announced. The 'anomalous microwave emission' is a diffuse glow most strongly associated with the dense, dusty regions of our Galaxy, but its origin has been a puzzle for decades. However, data collected across Planck's unprecedentedly wide wavelength range confirm the theory that it is coming from dust grains set spinning at several tens of billions of times a second by collisions with either fast-moving atoms or packets of ultraviolet light. This new understanding helps to remove this local microwave 'fog' from the Planck data with greater precision, leaving the cosmic microwave background untouched. "This is a great result made possible by the exceptional quality of the Planck data," says Clive Dickinson, University of Manchester, UK. Among the many other results just presented, Planck has shown new details of yet other actors on the cosmic stage: distant clusters of galaxies. These show up in the Planck data as compact silhouettes against the cosmic microwave background. The Planck Collaboration has identified 189 so far, including 20 previously unknown clusters that are being confirmed by ESA's XMM-Newton X-ray observatory. By surveying the whole sky, Planck stands the best chance of finding the most massive examples of these clusters. They are rare and their number is a sensitive probe of the kind of Universe we live in, how fast it is expanding, and how much matter it contains. "These observations will be used as bricks to build our understanding of the Universe," says Nabila Aghanim, CNRS-Université Paris Sud, Orsay, France. "Today's results are the tip of the scientific iceberg. Planck is exceeding expectations thanks to the dedication of everyone involved in the project," says David Southwood, ESA Director of Science and Robotic Exploration. "However, beyond those announced today, this catalogue contains the raw material for many more discoveries. Even then, we haven't got to the real treasure yet, the cosmic microwave background itself." Planck continues to survey the Universe. Its next data release is scheduled for January 2013 and will reveal the cosmic microwave background in unprecedented detail, the opening act of the cosmic drama, a picture of the beginning of everything.
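A side note on the spinning-dust numbers quoted above: the spin rate and Planck's 30 GHz channel are essentially the same quantity seen two ways, since in the standard spinning-dust picture a grain rotating N times per second radiates electric-dipole emission near N hertz. That link is a textbook feature of such models, not something spelled out in the release.

```python
# Spinning-dust sanity check: emission appears near the rotation frequency.
observing_frequency_hz = 30e9   # Planck's 30 GHz channel, where the anomalous glow is seen
rotations_per_second = observing_frequency_hz   # dipole emission ~ at the spin rate
print(f"~{rotations_per_second / 1e9:.0f} billion rotations per second")   # "several tens of billions"
```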

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by European Space Agency.

http://www.sciencedaily.com/releases/2011/01/110111133017.htm


New Insights Into Sun's Photosphere

The primary mirror of the new solar telescope at Big Bear Solar Observatory offers the highest power resolution ever available from a land-based telescope. To demonstrate this power, solar granulation covering a field of 12,000 by 12,000 miles or 19,000 by 19,000 kilometers is shown. Bright points appear side-by-side in dark lanes between granules. These bright points are believed to be associated with magnetic field concentrations on the Sun and are 50 miles in diameter. If you were to view an equivalent image on earth, you’d need an instrument that would allow you to see a row of dimes from a distance of more than 20 miles. Tick marks are separated by intervals of 620 miles or 1000 kilometers. (Credit: BBSO/NJIT)

ScienceDaily (Jan. 12, 2011) — NJIT Distinguished Professor Philip R. Goode and the research team at Big Bear Solar Observatory (BBSO) have reported new insights into the small-scale dynamics of the Sun's photosphere. The observations were made during a period of historic inactivity on the Sun and reported in The Astrophysical Journal. The high-resolution capabilities of BBSO's new 1.6-meter aperture solar telescope have made such work possible. "The smallest scale photospheric magnetic field seems to come in isolated points in the dark intergranular lanes, rather than the predicted continuous sheets confined to the lanes," said Goode. "The unexpected longevity of the bright points implies a deeper anchoring than predicted." Following classical Kolmogorov turbulence theory, the researchers demonstrated for the first time how photospheric plasma motion and magnetic fields are in equipartition over a wide dynamic range, while unleashing energy in ever-smaller scales. This equipartition is one of the basic plasma properties used in magnetohydrodynamic models. "Our data clearly illustrates that the Sun can generate magnetic fields not only as previously known in the convective zone but also on the near-surface layer. We believe small-scale turbulent flows of less than 500 km to be the catalyst," said NJIT Research Professor Valentyna Abramenko at BBSO. Tiny jet-like features originating in the dark lanes surrounding the ubiquitous granules that characterize the solar surface were also discovered. Such small-scale events hold the key to unlocking the mystery of heating the solar atmosphere, the researchers said. The origins of such events appear to be neither unequivocally tied to strong magnetic field concentrations, nor associated with the vertex formed by converging flows. "The solar chromosphere shows itself ceaselessly changing character with small-scale energetic events occurring constantly on the solar surface," said NJIT Research Professor Vasyl Yurchyshyn, also at BBSO. Such events suggest a similarity of magnetic structures and events from hemispheric to granular scales. The researchers hope to establish how such dynamics can explain the movement underlying convective flows and turbulent magnetic fields.
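The "equipartition" invoked here is the standard magnetohydrodynamic balance between kinetic and magnetic energy densities. In its usual textbook form (a general definition, not an expression quoted from the paper),

\[
\tfrac{1}{2}\,\rho v^{2} \;\approx\; \frac{B^{2}}{2\mu_{0}}
\quad\Longrightarrow\quad
B_{\mathrm{eq}} \approx v\,\sqrt{\mu_{0}\rho},
\]

where \(\rho\) is the plasma density, \(v\) the turbulent velocity, \(B\) the magnetic field strength and \(\mu_{0}\) the permeability of free space (Gaussian-unit treatments replace \(2\mu_{0}\) with \(8\pi\)). Finding comparable kinetic and magnetic energy densities across a wide range of spatial scales is the sense in which the BBSO data show equipartition.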


The telescope is the crown jewel of BBSO, the first facility-class solar observatory built in more than a generation in the U.S. The instrument is undergoing commissioning at BBSO. Since 1997, under Goode's direction, NJIT has owned and operated BBSO, located in a clear mountain lake. The mountain lake is characterized by sustained atmospheric stability, which is essential for BBSO's primary interests of measuring and understanding complex solar phenomena utilizing dedicated telescopes and instruments. The images were taken with the new instrument with atmospheric distortion corrected by its 97 actuator deformable mirror. By the summer of 2011, in collaboration with the National Solar Observatory, BBSO will have upgraded the current adaptive optics system to one utilizing a 349 actuator deformable mirror. The new telescope began operation in the summer of 2009, with support from the National Science Foundation (NSF), Air Force Office of Scientific Research, NASA and NJIT. Additional NSF support was received a few months ago to fund further upgrades to this new optical system. The telescope will be the pathfinder for an even larger ground-based telescope, the Advanced Technology Solar Telescope (ATST), to be built over the next decade. NJIT is an ATST co-principal investigator on this NSF project. Scientists believe that magnetic structures like sunspots hold the key to space weather. Such weather, originating in the Sun, can affect Earth's climate and environment. A bad storm can disrupt power grids and communication, destroy satellites and even expose airline pilots, crew and passengers to radiation. The new telescope now feeds a high-order adaptive optics system, which in turn feeds the next generation of technologies for measuring magnetic fields and dynamic events using visible and infrared light. A parallel computer system for real-time image enhancement highlights it. Goode and his research team, who study solar magnetism, are experts at combining BBSO ground-based data with satellite data to determine dynamic properties of the solar magnetic fields.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by New Jersey Institute of Technology, via EurekAlert!, a service of AAAS.

Journal Reference:

1. Philip R. Goode, Vasyl Yurchyshyn, Wenda Cao, Valentyna Abramenko, Aleksandra Andic, Kwangsu Ahn, Jongchul Chae. Highest Resolution Observations of the Quietest Sun. The Astrophysical Journal, 2010; 714 (1): L31 DOI: 10.1088/2041-8205/714/1/L31

http://www.sciencedaily.com/releases/2011/01/110110130929.htm


Study Estimates Land Available for Biofuel Crops

Civil and environmental engineering professor Ximing Cai, left, and graduate student Xiao Zhang performed a global analysis of marginal land that could produce biofuel crops. (Credit: Photo by L. Brian Stauffer) ScienceDaily (Jan. 12, 2011) — Using detailed land analysis, Illinois researchers have found that biofuel crops cultivated on available land could produce up to half of the world's current fuel consumption -- without affecting food crops or pastureland. Published in the journal Environmental Science and Technology, the study led by civil and environmental engineering professor Ximing Cai identified land around the globe available to produce grass crops for biofuels, with minimal impact on agriculture or the environment. Many studies on biofuel crop viability focus on biomass yield, or how productive a crop can be regionally. There has been relatively little research on land availability, one of the key constraints of biofuel development. Of special concern is whether the world could even produce enough biofuel to meet demand without compromising food production. "The questions we're trying to address are, what kind of land could be used for biofuel crops? If we have land, where is it, and what is the current land cover?" Cai said. Cai's team assessed land availability from a physical perspective -- focusing on soil properties, soil quality, land slope, and regional climate. The researchers collected data on soil, topography, climate and current land use from some of the best data sources available, including remote sensing maps. The critical concept of the Illinois study was that only marginal land would be considered for biofuel crops. Marginal land refers to land with low inherent productivity, that has been abandoned or degraded, or is of low quality for agricultural uses. In focusing on marginal land, the researchers rule out current crop land, pasture land, and forests. They also assume that any biofuel crops would be watered by rainfall and not irrigation, so no water would have to be diverted from agricultural land. Using fuzzy logic modeling, a technique to address uncertainty and ambiguity in analysis, the researchers considered multiple scenarios for land availability. First, they considered only idle land and vegetation land with marginal productivity; for the second scenario, they added degraded or low-quality cropland. For the second scenario, they estimated 702 million hectares of land available for second-generation biofuel crops, such as switchgrass or miscanthus. The researchers then expanded their sights to marginal grassland. A class of biofuel crops called low-impact high-diversity (LIHD) perennial grasses could produce bioenergy while maintaining grassland. While they have a lower ethanol yield than grasses such as miscanthus or switchgrass, LIHD grasses have minimal environmental impact and are similar to grassland's natural land cover. Adding LIHD crops grown on marginal grassland to the marginal cropland estimate from earlier scenarios nearly doubled the estimated land area to 1,107 million hectares globally, even after subtracting possible pasture land -- an area that would produce 26 to 56 percent of the world's current liquid fuel consumption. Next, the team plans to study the possible effect of climate change on land use and availability. 
"Based on the historical data, we now have an estimation for current land use, but climate may change in the near future as a result of the increase in greenhouse gas emissions, which will have effect on the land availability," said


graduate student Xiao Zhang, a co-author of the paper. Former postdoctoral fellow Dingbao Wang, now at the University of Central Florida, also co-wrote the paper. "We hope this will provide a physical basis for future research," Cai said. "For example, agricultural economists could use the dataset to do some research with the impact of institutions, community acceptance and so on, or some impact on the market. We want to provide a start so others can use our research data." The Energy Biosciences Institute at U. of I. and the National Science Foundation supported the study.
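The fuzzy-logic step described above can be pictured as giving each parcel of land a graded "marginality" score instead of a hard yes/no label, then totalling the area that scores above a threshold under each scenario. The sketch below is purely illustrative: the membership functions, cut-offs, factor names and example values are hypothetical stand-ins, not the inputs or rules used by Cai's group.

```python
# Illustrative fuzzy scoring of land "marginality" from two factors.
# All membership functions and thresholds here are invented for illustration;
# the actual model uses detailed soil, topography, climate and land-cover data.

def ramp(x: float, low: float, high: float) -> float:
    """Linear membership rising from 0 at `low` to 1 at `high`."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def marginality(soil_quality_index: float, slope_percent: float) -> float:
    """Degree (0-1) to which a parcel counts as 'marginal' land:
    poor soil AND not-too-steep terrain, combined with a fuzzy AND (min)."""
    poor_soil = 1.0 - ramp(soil_quality_index, 0.2, 0.8)   # low index -> poor soil
    gentle_slope = 1.0 - ramp(slope_percent, 10.0, 30.0)   # steep land drops out
    return min(poor_soil, gentle_slope)

# Example parcels: (soil quality index 0-1, slope in percent)
for soil, slope in [(0.3, 5.0), (0.7, 5.0), (0.3, 25.0)]:
    print(f"soil={soil:.1f} slope={slope:>4.1f}%  ->  marginality {marginality(soil, slope):.2f}")
```

Summing the areas of parcels whose score clears a chosen cut-off, scenario by scenario, is what produces totals of the kind quoted above, such as the 702 or 1,107 million hectares.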

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Illinois at Urbana-Champaign.

Journal Reference:

1. Ximing Cai, Xiao Zhang, Dingbao Wang. Land Availability for Biofuel Production. Environmental Science & Technology, 2011; 45 (1): 334 DOI: 10.1021/es103338e

http://www.sciencedaily.com/releases/2011/01/110110130936.htm


NASA Image Shows La Niña-Caused Woes Down Under

The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA's Terra spacecraft captured this image of extensive flooding in Rockhampton, Queensland, Australia, on Jan. 7, 2011. (Credit: NASA/GSFC/METI/ERSDAC/JAROS, and U.S./Japan ASTER Science Team) ScienceDaily (Jan. 12, 2011) — The current La Niña in the Pacific Ocean, one of the strongest in the past 50 years, continues to exert a powerful influence on weather around the world, affecting rainfall and temperatures in varying ways in different locations. For Australia, La Niña typically means above-average rains, and the current La Niña is no exception. Heavy rains that began in late December led to the continent's worst flooding in nearly a half century, at its peak inundating an area the size of Germany and France combined. The Associated Press reports about 1,200 homes in 40 communities are underwater and about 11,000 others are damaged, resulting in thousands of evacuations and 10 deaths to date. On Jan. 7, 2011, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA's Terra spacecraft captured this image of the inundated city of Rockhampton, Queensland, Australia. With a population of 75,000, Rockhampton is the largest city affected by the current flooding. Torrential rains in northeastern Australia caused the Fitzroy River to overflow its banks and flood much of the city and surrounding agricultural lands. Both the airport and major highways are underwater, isolating the city. In this natural color rendition, muddy water is brown, and shallow, clearer water is gray. Vegetation is depicted in various shades of green, and buildings and streets are white. The image is located at 23.3 degrees south latitude, 150.5 degrees east longitude, and covers an area of 22 by 28.1 kilometers (13.6 by 17.4 miles). For more information, visit: http://photojournal.jpl.nasa.gov/catalog/PIA13775 .

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.

http://www.sciencedaily.com/releases/2011/01/110110152812.htm


International Space Station Begins New Era of Utilization

Crystals of human hematopoietic prostaglandin D synthase (H-PGDS) grown under terrestrial (a) and microgravity (b) conditions. In the microgravity experiment plate-like crystals were grown with good morphology. Scale bar corresponds to 100 μm. (Credit: Dr. Yoshihiro Urade, Osaka Bioscience Institute)

ScienceDaily (Jan. 12, 2011) — A new era of utilization for research and technology begins for the completed International Space Station. The orbiting laboratory shifts focus in 2011 from finalizing construction efforts to full-scale use of the facility for scientific investigation and technological advances. Mark Uhran, assistant associate administrator for the International Space Station at NASA Headquarters, kicked off the year at the 49th AIAA Meeting speaking on the topic, Positioning the International Space Station for the Utilization Era. "Full-scale ISS utilization will re-boot the spacecraft for the purposes for which it was originally designed -- scientific research, applications development, technological demonstration and industrial growth," Uhran says. With benefits from research conducted under microgravity conditions already being realized, the NASA Authorization Act of 2010 extends the life of the space station to 2020. Accomplishments during the second decade of continuous human life, work and research on the station will depend upon the global-market impact of station-based research and development, as well as continued government programmatic support. The past 25 years of microgravity-based research, on earlier missions and during station assembly, can be viewed as a survey phase. Although it can take a long time for the full application of research results in our daily lives, early space research has already yielded important progress and advances for industry and health here on Earth. In his AIAA paper, Uhran reviews five specific examples of notable discoveries and their benefits:

Thermo-physical Properties Measurement -- Using electromagnetic levitators, investigators can position and study samples free of contamination from container walls. They also can transition the samples from solid to liquid phases via energy flux. This capability has enabled the understanding of thermodynamic properties for complex, metallic glass alloys, advancing the capability to produce bulk metallic glasses on the ground. The Liquidmetal® Technologies company has recently granted an exclusive worldwide license to Apple Computer, Inc., for use of their patented Liquidmetal® alloy in consumer electronics.

Cellular Tissue Culturing -- The use of a bioreactor, originally developed for microgravity-based research, enables cells to grow and propagate in a three-dimensional matrix, similar to their natural development inside the human body. Ground-based labs were previously limited to two-dimensional cell growth. Research laboratories around the nation now routinely use bioreactors in studies of tumors and tissue growth, and the space-based research will continue on board the station in the future where conditions have proven to be optimum.


Macromolecular Crystallization -- Microgravity allows for larger and more perfect biological macromolecular crystal growth, due to the lack of sedimentation, buoyancy, thermal convection, etc. The resulting crystal allows a more exact determination of molecular structures, needed for therapeutic drug design. At the International Astronautical Congress in Prague, Czech Republic, in 2010, Dr. Yoshihiro Urade of Japan unveiled recent results from his space station research using crystallization to learn the structure of an enzyme protein. His work has led to a potential new treatment for Duchenne's muscular dystrophy. Differential Gene Expression -- Gene expression radically changes under microgravity conditions because the physical forces experienced at the cellular level are different from those on the ground. Improved understanding of the cues that cause genes to turn "on" and "off" is now enabled through microgravity experimentation. Healthy maintenance of human systems can benefit from a better understanding of gene expression, which regulates physiological performance. Microbial Pathogenicity -- Some bacterial microbes have proven to be more virulent in space than on the ground. Research may lead to new vaccine development here on Earth. Microgravity investigations have shown that signal transduction pathways at the cellular level alter in the absence of gravitational forces. The original government-funded scientific studies, which focused on Salmonella enterica, have since led to privately sponsored research on the development of vaccine candidates for bacterial infections, which could help in the fight against food poisoning. As the world looks to findings from the space station, it is necessary to anticipate a realistic lag from research to results. As Uhran explains, "The notion that a single experimental finding is going to yield a profound discovery that rapidly impacts society in the form of widely available products is well beyond the bounds of history." Even so, the survey period findings point to the promising prospects ahead in this new era of utilization, making the space station an asset of great potential to watch in the coming decade. The complete paper and references can be viewed at: http://www.nasa.gov/pdf/508618main_Uhran_2011_AIAA_Paper.pdf

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA. The original article was written by Jessica Nimon, NASA's Johnson Space Center, International Space Station Program Science Office.

http://www.sciencedaily.com/releases/2011/01/110110164212.htm


Close-Knit Pairs of Supermassive Black Holes Discovered in Merging Galaxies

One of the newly discovered black-hole pairs. On the left is the image from the Sloan Digital Sky Survey, showing just a single object. The image on the right, taken with the Keck telescope and adaptive optics, resolves the two active galactic nuclei, which are powered by massive black holes. (Credit: Image courtesy of California Institute of Technology) ScienceDaily (Jan. 12, 2011) — Astronomers at the California Institute of Technology (Caltech), University of Illinois at Urbana-Champaign (UIUC), and University of Hawaii (UH) have discovered 16 close-knit pairs of supermassive black holes in merging galaxies. The discovery, based on observations done at the W. M. Keck Observatory on Hawaii's Mauna Kea, is being presented in Seattle on January 12 at the meeting of the American Astronomical Society, and has been submitted for publication in the Astrophysical Journal. These black-hole pairs, also called binaries, are about a hundred to a thousand times closer together than most that have been observed before, providing astronomers a glimpse into how these behemoths and their host galaxies merge -- a crucial part of understanding the evolution of the universe. Although few similarly close pairs have been seen previously, this is the largest population of such objects observed as the result of a systematic search. "This is a very nice confirmation of theoretical predictions," says S. George Djorgovski, professor of astronomy, who will present the results at the conference. "These close pairs are a missing link between the wide binary systems seen previously and the merging black-hole pairs at even smaller separations that we believe must be there." As the universe has evolved, galaxies have collided and merged to form larger ones. Nearly every one -- or perhaps all -- of these large galaxies contains a giant black hole at its center, with a mass millions -- or even billions -- of times higher than the sun's. Material such as interstellar gas falls into the black hole, producing enough energy to outshine galaxies composed of a hundred billion stars. The hot gas and black hole form an active galactic nucleus, the brightest and most distant of which are called quasars. The prodigious energy output of active galactic nuclei can affect the evolution of galaxies themselves. While galaxies merge, so should their central black holes, producing an even more massive black hole in the nucleus of the resulting galaxy. Such collisions are expected to generate bursts of gravitational waves, which have yet to be detected. Some merging galaxies should contain pairs of active nuclei, indicating the presence of supermassive black holes on their way to coalescing. Until now, astronomers have generally observed only widely separated pairs -- binary quasars -- which are typically hundreds of thousands of light-years apart.


"If our understanding of structure formation in the universe is correct, closer pairs of active nuclei must exist," adds Adam Myers, a research scientist at UIUC and one of the coauthors. "However, they would be hard to discern in typical images blurred by Earth's atmosphere." The solution was to use Laser Guide Star Adaptive Optics, a technique that enables astronomers to remove the atmospheric blur and capture images as sharp as those taken from space. One such system is deployed on the W. M. Keck Observatory's 10-meter telescopes on Mauna Kea. The astronomers selected their targets using spectra of known galaxies from the Sloan Digital Sky Survey (SDSS). In the SDSS images, the galaxies are unresolved, appearing as single objects instead of binaries. To find potential pairs, the astronomers identified targets with double sets of emission lines -- a key feature that suggests the existence of two active nuclei. By using adaptive optics on Keck, the astronomers were able to resolve close pairs of galactic nuclei, discovering 16 such binaries out of 50 targets. "The pairs we see are separated only by a few thousands of light-years -- and there are probably many more to be found," says Hai Fu, a Caltech postdoctoral scholar and the lead author of the paper. "Our results add to the growing understanding of how galaxies and their central black holes evolve," adds Lin Yan, a staff scientist at Caltech and one of the coauthors of the study. "These results illustrate the discovery power of adaptive optics on large telescopes," Djorgovski says. "With the upcoming Thirty Meter Telescope, we'll be able to push our observational capabilities to see pairs with separations that are three times closer." In addition to Djorgovski, Fu, Myers, and Yan, the team includes Alan Stockton from the University of Hawaii at Manoa. The work done at Caltech was supported by the National Science Foundation and the Ajax Foundation.

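For a feel of the scales involved, the sketch below converts an on-sky angular separation into a projected physical separation at a given redshift, the small-angle bookkeeping behind the "few thousand light-years" figure quoted above. It is illustrative only: the redshift and angular separation are hypothetical placeholders, not values from the Fu et al. paper, and the cosmology is a generic assumed flat Lambda-CDM model.

# Illustrative conversion of angular to projected physical separation.
# The redshift and separation below are assumed example values.
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed flat Lambda-CDM cosmology

z = 0.1                  # hypothetical redshift of a merging system
theta = 1.0 * u.arcsec   # hypothetical pair separation resolved with adaptive optics

kpc_per_arcmin = cosmo.kpc_proper_per_arcmin(z)
separation = (theta.to(u.arcmin) * kpc_per_arcmin).to(u.lyr)
print(f"Projected separation: {separation:.0f}")

At z = 0.1 one arcsecond corresponds to roughly 6,000 light-years, which is why pairs this close are blurred into a single object in seeing-limited survey images and only show up with adaptive optics.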
Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by California Institute of Technology. The original article was written by Marcus Woo.

http://www.sciencedaily.com/releases/2011/01/110112132132.htm


Quantum Quirk Contained

Researchers at the University of Calgary and University of Paderborn designed a quantum memory device using a waveguide in a crystal doped with rare-earth ions. (Credit: Wolfgang Tittel/University of Calgary) ScienceDaily (Jan. 12, 2011) — Researchers at the University of Calgary, in Canada, collaborating with the University of Paderborn, in Germany, are working on a way to make quantum networks a reality and have published their findings in the journal Nature. A similar finding by a group at the University of Geneva, in Switzerland, is reported in the same issue. "We have demonstrated, for the first time, that a crystal can store information encoded into entangled quantum states of photons," says paper co-author Dr. Wolfgang Tittel of the University of Calgary's Institute for Quantum Information Science. "This discovery constitutes an important milestone on the path toward quantum networks, and will hopefully enable building quantum networks in a few years." In current communication networks, information is sent through pulses of light moving through optical fibre. The information can be stored on computer hard disks for future use. Quantum networks operate differently from the networks we use daily. "What we have is similar but it does not use pulses of light," says Tittel, who is a professor in the Department of Physics and Astronomy at the University of Calgary. "In quantum communication, we also have to store and retrieve information. But in our case, the information is encoded into entangled states of photons." In this state, photons are "entangled," and remain so even when they fly apart. In a way, they communicate with each other even when they are very far apart. The difficulty is getting them to stay put without breaking this fragile quantum link. To achieve this task, the researchers used a crystal doped with rare-earth ions and cooled it to -270 degrees Celsius. At these temperatures, material properties change in ways that allowed the researchers to store and retrieve these photons without measurable degradation.


An important feature is that this memory device is built almost entirely with standard fabrication technologies. The resulting robustness, and the possibility of integrating the memory with current technology such as fibre-optic cables, will be important in moving this fundamental research toward applications. Quantum networks would allow information to be sent without fear of eavesdropping. "The results show that entanglement, a quantum physical property that has puzzled philosophers and physicists for almost a hundred years, is not as fragile as is generally believed," says Tittel.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Calgary, via EurekAlert!, a service of AAAS.

Journal Reference:

1. Erhan Saglamyurek, Neil Sinclair, Jeongwan Jin, Joshua A. Slater, Daniel Oblak, Félix Bussières, Mathew George, Raimund Ricken, Wolfgang Sohler, Wolfgang Tittel. Broadband waveguide quantum memory for entangled photons. Nature, 2011; DOI: 10.1038/nature09719


http://www.sciencedaily.com/releases/2011/01/110112132128.htm


Biomedical Breakthrough: Blood Vessels for Lab-Grown Tissues

Still from a time-lapse image showing how two types of cells -- which were tagged with fluorescent dye -- organize themselves into functioning capillary networks within 72 hours (Credit: Image courtesy of Rice University) ScienceDaily (Jan. 12, 2011) — Researchers from Rice University and Baylor College of Medicine (BCM) have broken one of the major roadblocks on the path to growing transplantable tissue in the lab: They've found a way to grow the blood vessels and capillaries needed to keep tissues alive. The new research is available online and due to appear in the January issue of the journal Acta Biomaterialia. "The inability to grow blood-vessel networks -- or vasculature -- in lab-grown tissues is the leading problem in regenerative medicine today," said lead co-author Jennifer West, department chair and the Isabel C. Cameron Professor of Bioengineering at Rice. "If you don't have blood supply, you cannot make a tissue structure that is thicker than a couple hundred microns." As its base material, a team of researchers led by West and BCM molecular physiologist Mary Dickinson chose polyethylene glycol (PEG), a nontoxic plastic that's widely used in medical devices and food. Building on 10 years of research in West's lab, the scientists modified the PEG to mimic the body's extracellular matrix -- the network of proteins and polysaccharides that make up a substantial portion of most tissues. West, Dickinson, Rice graduate student Jennifer Saik, Rice undergraduate Emily Watkins and Rice-BCM graduate student Daniel Gould combined the modified PEG with two kinds of cells -- both of which are needed for blood-vessel formation. Using light that locks the PEG polymer strands into a solid gel, they created soft hydrogels that contained living cells and growth factors. After that, they filmed the hydrogels for 72 hours. By tagging each type of cell with a different colored fluorescent marker, the team was able to watch as the cells gradually formed capillaries throughout the soft, plastic gel. To test these new vascular networks, the team implanted the hydrogels into the corneas of mice, where no natural vasculature exists. After injecting a dye into the mice's bloodstream, the researchers confirmed normal blood flow in the newly grown capillaries. Another key advance, published by West and graduate student Joseph Hoffmann in November, involved the creation of a new technique called "two-photon lithography," an ultrasensitive way of using light to create intricate three-dimensional patterns within the soft PEG hydrogels. West said the patterning technique allows the engineers to exert a fine level of control over where cells move and grow. In follow-up experiments, also in collaboration with the Dickinson lab at BCM, West and her team plan to use the technique to grow blood vessels in predetermined patterns. The research was supported by the National Science Foundation and the National Institutes of Health. West's work was conducted in her lab at Rice's BioScience Research Collaborative (BRC). The BRC is an innovative
space where scientists and educators from Rice University and other Texas Medical Center institutions work together to perform leading research that benefits human medicine and health. A video illustrating the research is available at: http://www.youtube.com/watch?v=JtMifCkTHTo.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Rice University.

Journal Reference:

1. Jennifer E. Saik, Daniel J. Gould, Emily M. Watkins, Mary E. Dickinson, Jennifer L. West. Covalently immobilized platelet-derived growth factor-BB promotes angiogenesis in biomimetic poly(ethylene glycol) hydrogels. Acta Biomaterialia, 2011; 7 (1): 133 DOI: 10.1016/j.actbio.2010.08.018

http://www.sciencedaily.com/releases/2011/01/110112080910.htm


New Measure Trumps High-Density Lipoprotein (HDL) Levels in Protecting Against Heart Disease

Daniel J. Rader, M.D. (Credit: Image courtesy of University of Pennsylvania School of Medicine) ScienceDaily (Jan. 12, 2011) — The discovery that high levels of high-density lipoprotein (HDL) cholesterol (the "good cholesterol") are associated with reduced risk of cardiovascular disease has fostered intensive research to modify HDL levels for therapeutic gain. However, recent findings have called into question the notion that pharmacologic increases in HDL cholesterol levels are necessarily beneficial to patients. Now, a new study from researchers at the University of Pennsylvania School of Medicine shows that a different metric, a measure of HDL function called cholesterol efflux capacity, is more closely associated with protection against heart disease than HDL cholesterol levels themselves. Findings from the study could lead to new therapeutic interventions in the fight against heart disease. The new research will be published in the January 13 issue of the New England Journal of Medicine. Atherosclerosis, a component of heart disease, occurs with a build-up along the artery wall of fatty materials such as cholesterol. Cholesterol efflux capacity, an integrated measure of HDL function, is a direct measure of the efficiency by which a person's HDL removes cholesterol from cholesterol-loaded macrophages (a type of white blood cell), the sort that accumulate in arterial plaque. "Recent scientific findings have directed increasing interest toward the concept that measures of the function of HDL, rather than simply its level in the blood, might be more important to assessing cardiovascular risk and evaluating new HDL-targeted therapies," said Daniel J. Rader, MD, director, Preventive Cardiology at Penn. "Our study is the first to relate a measure of HDL function--its ability to remove cholesterol from macrophages--to measures of cardiovascular disease in a large number of people." In the present study, Rader and colleagues at Penn measured cholesterol efflux capacity in 203 healthy volunteers who underwent assessment of carotid artery intima-media thickness, a measure of atherosclerosis, 442 patients with confirmed coronary artery disease, and 351 patients without such confirmed disease. An inverse relationship was noted between cholesterol efflux capacity and carotid intima-media thickness both before and after adjustment for the HDL cholesterol level. In an age- and gender-adjusted analysis, increasing efflux capacity conferred decreased likelihood of having coronary artery disease. This relationship remained robust after the addition of traditional cardiovascular risk factors, including HDL cholesterol levels, as covariates. Additionally, men and current smokers had decreased efflux capacity. The researchers noted that although cholesterol efflux from macrophages represents only a small fraction of overall flow through the cholesterol pathway, it is probably the component that is most relevant to protection against heart disease. Rader said, "The findings from this study support the concept that measurement of HDL function provides information beyond that of HDL level, and suggests the potential for wider use of this measure of HDL function in the assessment of new HDL therapies. Future studies may prove fruitful in elucidating additional HDL components that determine cholesterol efflux capacity."
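To make the idea of "adding covariates" concrete, here is a minimal sketch of a covariate-adjusted logistic regression of disease status on efflux capacity plus traditional risk factors. It is not the authors' analysis code; the data are synthetic and the variable names are hypothetical placeholders chosen only to mirror the adjustment described above.

# Minimal sketch of a covariate-adjusted logistic regression (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "efflux_capacity": rng.normal(1.0, 0.3, n),   # hypothetical HDL efflux measure
    "age": rng.normal(60, 10, n),
    "male": rng.integers(0, 2, n),
    "hdl_c": rng.normal(50, 12, n),               # HDL cholesterol level, mg/dL
    "current_smoker": rng.integers(0, 2, n),
})
# Simulate coronary artery disease status with higher efflux being protective.
logit_p = -1.0 - 2.0 * (df["efflux_capacity"] - 1.0) + 0.02 * (df["age"] - 60)
df["cad"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# CAD status regressed on efflux capacity, adjusted for traditional risk factors.
model = smf.logit("cad ~ efflux_capacity + age + male + hdl_c + current_smoker",
                  data=df).fit(disp=False)
print(np.exp(model.params))   # adjusted odds ratios per unit change in each predictor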


This work was funded in part by grants from the National Heart, Lung, and Blood Institute, the National Center for Research Resources, and a Distinguished Clinical Scientist Award from the Doris Duke Charitable Foundation.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Pennsylvania School of Medicine.

Journal Reference:

1. Amit V. Khera, Marina Cuchel, Margarita de la Llera-Moya, Amrith Rodrigues, Megan F. Burke, Kashif Jafri, Benjamin C. French, Julie A. Phillips, Megan L. Mucksavage, Robert L. Wilensky, Emile R. Mohler, George H. Rothblat, Daniel J. Rader. Cholesterol Efflux Capacity, High-Density Lipoprotein Function, and Atherosclerosis. New England Journal of Medicine, 2011; 364 (2): 127 DOI: 10.1056/NEJMoa1001689

http://www.sciencedaily.com/releases/2011/01/110112175516.htm


Cosmology Standard Candle Not So Standard After All

This image layout illustrates how NASA's Spitzer Space Telescope was able to show that a "standard candle" used to measure cosmological distances is shrinking -- a finding that affects precise measurements of the age, size and expansion rate of our universe. (Credit: NASA/JPL-Caltech/Iowa State) ScienceDaily (Jan. 12, 2011) — Astronomers have turned up the first direct proof that "standard candles" used to illuminate the size of the universe, termed Cepheids, shrink in mass, making them not quite as standard as once thought. The findings, made with NASA's Spitzer Space Telescope, will help astronomers make even more precise measurements of the size, age and expansion rate of our universe. Standard candles are astronomical objects that make up the rungs of the so-called cosmic distance ladder, a tool for measuring the distances to farther and farther galaxies. The ladder's first rung consists of pulsating stars called Cepheid variables, or Cepheids for short. Measurements of the distances to these stars from Earth are critical in making precise measurements of even more distant objects. Each rung on the ladder depends on the previous one, so without accurate Cepheid measurements, the whole cosmic distance ladder would come unhinged. Now, new observations from Spitzer show that keeping this ladder secure requires even more careful attention to Cepheids. The telescope's infrared observations of one particular Cepheid provide the first direct evidence that these stars can lose mass -- or essentially shrink. This could affect measurements of their distances. "We have shown that these particular standard candles are slowly consumed by their wind," said Massimo Marengo of Iowa State University, Ames, Iowa, lead author of a recent study on the discovery appearing in the Astrophysical Journal. "When using Cepheids as standard candles, we must be extra careful because, much like actual candles, they are consumed as they burn." The star in the study is Delta Cephei, which is the namesake for the entire class of Cepheids. It was discovered in 1784 in the constellation Cepheus, or the King. Intermediate-mass stars can become Cepheids when they are middle-aged, pulsing with a regular beat that is related to how bright they are. This unique trait allows astronomers to take the pulse of a Cepheid and figure out how bright it is intrinsically -- or how bright it would be if you were right next to it. By measuring how bright the star appears in the sky, and comparing this to its intrinsic brightness, it can then be determined how far away it must be. This calculation was famously performed by astronomer Edwin Hubble in 1924, leading to the revelation that our galaxy is just one of many in a vast cosmic sea. Cepheids also helped in the discovery that our universe is expanding and galaxies are drifting apart. Cepheids have since become reliable rungs on the cosmic distance ladder, but mysteries about these standard candles remain. One question has been whether or not they lose mass. Winds from a Cepheid star could blow off significant amounts of gas and dust, forming a dusty cocoon around the star that would affect how bright it appears. This, in turn, would affect calculations of its distance. Previous research had hinted at such mass loss, but more direct evidence was needed. Marengo and his colleague used Spitzer's infrared vision to study the dust around Delta Cephei. This particular star is racing along through space at high speeds, pushing interstellar gas and dust into a bow shock up ahead.
Luckily for the scientists, a nearby companion star happens to be lighting the area, making the bow shock easier to see. By studying the size and structure of the shock, the team was able to show that a strong, massive wind from the star is pushing against the interstellar gas and dust. In addition, the team calculated that this wind is up to one million times stronger than the wind blown by our sun. This proves that Delta Cephei is shrinking slightly.
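For reference, the standard-candle logic described above reduces to the textbook distance-modulus relation (not a formula quoted in the article):

\[ m - M = 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) \qquad\Longrightarrow\qquad d = 10^{(m - M + 5)/5}\ \mathrm{pc} \]

where m is the apparent magnitude measured from Earth, M is the intrinsic (absolute) magnitude inferred from the Cepheid's pulsation period, and d is the distance. A dusty cocoon produced by mass loss changes the apparent magnitude m, so the inferred d is biased unless the circumstellar material is accounted for, which is why the mass loss reported here matters for the distance ladder.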


Follow-up observations conducted by the same team using Spitzer have shown that other Cepheids, up to 25 percent of those observed, are also losing mass. "Everything crumbles in cosmology studies if you don't start up with the most precise measurements of Cepheids possible," said Pauline Barmby of the University of Western Ontario, Canada, lead author of the follow-up Cepheid study published online Jan. 6 in the Astronomical Journal. "This discovery will allow us to better understand these stars, and use them as ever more precise distance indicators." Other authors of this study include N. R. Evans and G.G. Fazio of the Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass.; L.D. Matthews of Harvard-Smithsonian and the Massachusetts Institute of Technology Haystack Observatory, Westford; G. Bono of the Università di Roma Tor Vergata and the INAF-Osservatorio Astronomico di Roma in Rome, Italy; D.L. Welch of McMaster University, Ontario, Canada; M. Romaniello of the European Southern Observatory, Garching, Germany; D. Huelsman of Harvard-Smithsonian and the University of Cincinnati, Ohio; and K. Y. L. Su of the University of Arizona, Tucson. The Spitzer observations were made before the telescope ran out of its liquid coolant in May 2009 and began its warm mission. NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology, also in Pasadena. Caltech manages JPL for NASA. For more information about Spitzer, visit http://spitzer.caltech.edu/ and http://www.nasa.gov/spitzer .

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.

Journal References:

1. P. Barmby, M. Marengo, N. R. Evans, G. Bono, D. Huelsman, K. Y. L. Su, D. L. Welch, G. G. Fazio. Galactic Cepheids with Spitzer. II. Search for Extended Infrared Emission. The Astronomical Journal, 2011; 141 (2): 42 DOI: 10.1088/0004-6256/141/2/42

2. M. Marengo, N. R. Evans, P. Barmby, L. D. Matthews, G. Bono, D. L. Welch, M. Romaniello, D. Huelsman, K. Y. L. Su, G. G. Fazio. An Infrared Nebula Associated with δ Cephei: Evidence of Mass Loss? The Astrophysical Journal, 2010; 725 (2): 2392 DOI: 10.1088/0004-637X/725/2/2392

http://www.sciencedaily.com/releases/2011/01/110112143218.htm


Scientific Evidence Supports Effectiveness of Chinese Drug for Cataracts

Eye with cataract. Scientists are reporting a scientific basis for the long-standing belief that a widely used non-prescription drug in China and certain other countries can prevent and treat cataracts, a clouding of the lens of the eye that is a leading cause of vision loss worldwide. (Credit: iStockphoto/Micheline Dub) ScienceDaily (Jan. 12, 2011) — Scientists are reporting a scientific basis for the long-standing belief that a widely used non-prescription drug in China and certain other countries can prevent and treat cataracts, a clouding of the lens of the eye that is a leading cause of vision loss worldwide. Their study appears in Inorganic Chemistry. In the study, Tzu-Hua Wu, Fu-Yung Huang, Shih-Hsiung Wu and colleagues note that eye drops containing pirenoxine, or PRX, have been reputed as a cataract remedy for almost 60 years. Currently, the only treatment for cataracts in Western medicine is surgical replacement of the lens, the clear disc-like structure inside the eye that focuses light onto the nerve tissue in the back of the eye. Despite the wide use of pirenoxine, there have been few scientific studies on its actual effects, the scientists note. To fill that gap, the scientists tested pirenoxine on cloudy solutions that mimic the chemical composition of the eye lens of cataract patients. The solutions contained crystallin -- a common lens protein -- combined with either calcium or selenite, two minerals whose increased levels appear to play key roles in the development of cataracts. Presence of PRX reduced the cloudiness of the lens solution containing calcium by 38 percent and reduced the cloudiness of the selenite solution by 11 percent. "These results may provide a rationale for using PRX as an anti-cataract agent and warrant further biological studies," the article notes.

Story Source: The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by American Chemical Society.

Journal Reference:

1. Jiahn-Haur Liao, Chien-Sheng Chen, Chao-Chien Hu, Wei-Ting Chen, Shao-Pin Wang, I-Lin Lin, Yi-Han Huang, Ming-Hsuan Tsai, Tzu-Hua Wu, Fu-Yung Huang, Shih-Hsiung Wu. Ditopic Complexation of Selenite Anions or Calcium Cations by Pirenoxine: An Implication for Anti-Cataractogenesis. Inorganic Chemistry, 2011; 50 (1): 365 DOI: 10.1021/ic102151p

http://www.sciencedaily.com/releases/2011/01/110112132140.htm


No More Mrs. Nice Mom

By JUDITH WARNER

Julie Blackmon

There was bound to be some push back. All the years of nurturance overload simply got to be too much. The breast-feeding through toddlerhood, nonstop baby wearing, co-sleeping, “Baby Mozart” co-watching; the peer pressure for never-ending singsong-voiced Mommy niceness, the ever-maddening chant of “good job!”; compulsory school “involvement” (that is, teacher-delegated busywork packaged as a way to Show Your Child You Care), the rapt attendance at each and every school performance, presentation, sporting event — the whole mishmash of modern, attuned, connected, concerned, self-esteem-building parenting.

The reaction came in waves. There were expert warnings, with moralists claiming that all this loosey-goosey lovey-dovey-ness was destroying the hierarchical fiber of the American family, and psychologists writing that all that self-esteem building was leading to epidemic levels of pathological ninnyishness in kids. Then there was a sort of quasi-hedonist revolt, cries of rebellion like Christie Mellor’s “Three Martini Playdate,” mother-toddler happy hours (postpregnancy liberation from “What to Expect” sanctimony!) and take-the-kid-out-all-night hipster parenting. Then came “free range” parenting, an appellation with the added advantage of sounding both fresh and fancy, like a Whole Foods chicken; “simplicity parenting” (recession-era lack of cash dressed up as principled rejection of expensive lessons); and, eventually, a kind of edgy irritation with it all: a new stance of get-tough no-nonsense, frequently called — with no small amount of pride — being a “bad” mother.

“Bad,” of course, is relative. “I’m such a bad mother” these days tends to be a boast, as in, “Can you believe that I just said, ‘Not now,’ to my 4-year-old?” The un-self-questioning 1960s-era mother — she of the cream-of-mushroom soup in a can — evokes wistful memories. “Surrendering to motherhood” is over; as a cautionary tale, this spring HBO is running a new miniseries of “Mildred Pierce,” a Todd Haynes remake of the 1945 Joan Crawford film, in which Kate Winslet will play a mother whose life is devoured by her
attempts to meet the demands of her grasping, never-satisfied daughter. (“She gave her daughter everything,” the tagline for the trailer reads. “But everything was not enough.”) The new toughness is only partly about saving Mom’s sanity. A bigger goal is producing tougher, more resilient and (of course) higher-performing kids. “You have to be hated sometimes by someone you love and who hopefully loves you,” writes Amy Chua in “Battle Hymn of the Tiger Mother,” a just-published parenting memoir that clearly aspires to become a battle plan for a new age of re-empowered, captain-of-the-ship motherhood.

“Tiger Mother” is the story of a woman who runs her daughters’ lives with an iron hand, breaking every rule of today’s right-thinking parenting (praise your kids, never compare them to one another, don’t threaten to burn all their stuffed animals if their piano practice is imperfect, don’t tell them they’re “garbage”), all in the guise of practicing Chinese parenting, which, in contrast to the flaccid, touchy-feely Western variety, stresses respect, self-discipline and, above all, results. Chua, who is Chinese-American and a Yale law professor, pushes her children to get straight A’s, forces them to spend hours each day practicing piano and violin; they are not allowed to pursue loser activities like playing the drums, “which leads to drugs,” she says, in a typical turn of phrase that may or may not be facetious. She refuses them playdates and sleepovers and TV and video games, and she demands unstinting obedience and devotion to family, all of which leads, unsurprisingly, to no small amount of crying, screaming and general tension. (“I don’t want this,” Chua says, in one particularly memorable moment, when her 4-year-old daughter, Lulu, gives her a birthday card that, the mother judges, couldn’t have taken “more than 20 seconds” to make. “I want a better one — one that you’ve put some thought and effort into. . . . I deserve better than this. So I reject this.”)

Despite the obvious limits of Chua’s appeal, her publisher is clearly banking on her message finding wide resonance among American moms worn out from trying to do everything right for kids who mimic Disney Channel-style disrespect for parents, spend hours a day on Facebook, pick at their lovingly prepared food and generally won’t get with the program. The gimmick of selling a program of Chinese parenting is a great one for a time when all the talk is of Chinese ascendancy and American decline. (If you can’t beat ’em, join ’em, kids; not for nothing does Chua make sure that her own children take the time to become fluent in Mandarin.) And there is true universality behind the message she’s honest enough to own: that she is terrified of “family decline,” that she fears that raising a “soft, entitled child” will let “my family fail.” Her deepest hope is that by insisting upon perfection from her children in all things, like violin playing, she will be able to achieve, in her words, control: “Over generational decline. Over birth order. Over one’s destiny.
Over one’s children.” The terror of losing ground is the ultimate driving force in the middle- and upper-middle-class American family today, and however unique Chua’s elaboration of it (simply by marrying a Jew, and not a Chinese man, she worries that she is “letting down 4,000 years of civilization”), however obnoxious and over the top her attempts to cope, she is hardly alone in believing that, in her carefully considered ministrations, she will find the perfect alchemy that will allow her to inoculate her kids against personal and professional misfortune.

Through all the iterations of Mommy madness, “good” and “bad,” this article of faith always remains intact: that parents can have control. Developmental neuroscientists may talk of genes and as-yet-undiscovered-and-hence-uncontrollable environmental factors that affect the developing fetus, social scientists may talk of socioeconomic background and the predictive power of parents’ level of education — the rest of us keep hope alive that parental actions, each and every moment of each and every better-lived day, have the ultimate ability to shape a child’s life outcome. The notion that parental choices — for early-onset Suzuki or otherwise — have this uniquely determinative effect is, in light of current research, almost adorably quaint, akin to beliefs that cats must be kept out of a baby’s bedroom at night lest they climb into the crib and suck away the child’s breath. But it remains part and parcel of modern mother love.

Judith Warner is the author, most recently, of “We’ve Got Issues: Children and Parents in the Age of Medication.”

http://www.nytimes.com/2011/01/16/magazine/16fob-wwln-t.html?_r=1&ref=magazine


The War on Logic

By PAUL KRUGMAN

My wife and I were thinking of going out for an inexpensive dinner tonight. But John Boehner, the speaker of the House, says that no matter how cheap the meal may seem, it will cost thousands of dollars once you take our monthly mortgage payments into account.

Wait a minute, you may say. How can our mortgage payments be a cost of going out to eat, when we’ll have to make the same payments even if we stay home? But Mr. Boehner is adamant: our mortgage is part of the cost of our meal, and to say otherwise is just a budget gimmick.

O.K., the speaker hasn’t actually weighed in on our plans for the evening. But he and his G.O.P. colleagues have lately been making exactly the nonsensical argument I’ve just described — not about tonight’s dinner, but about health care reform. And the nonsense wasn’t a slip of the tongue; it’s the official party position, laid out in charts and figures.

We are, I believe, witnessing something new in American politics. Last year, looking at claims that we can cut taxes, avoid cuts to any popular program and still balance the budget, I observed that Republicans seemed to have lost interest in the war on terror and shifted focus to the war on arithmetic. But now the G.O.P. has moved on to an even bigger project: the war on logic.

So, about that nonsense: this week the House is expected to pass H.R. 2, the Repealing the Job-Killing Health Care Law Act — its actual name. But Republicans have a small problem: they claim to care about budget deficits, yet the Congressional Budget Office says that repealing last year’s health reform would increase the deficit. So what, other than dismissing the nonpartisan budget office’s verdict as “their opinion” — as Mr. Boehner has — can the G.O.P. do?

The answer is contained in an analysis — or maybe that should be “analysis” — released by the speaker’s office, which purports to show that health care reform actually increases the deficit. Why? That’s where the war on logic comes in.

First of all, says the analysis, the true cost of reform includes the cost of the “doc fix.” What’s that? Well, in 1997 Congress enacted a formula to determine Medicare payments to physicians. The formula was, however, flawed; it would lead to payments so low that doctors would stop accepting Medicare patients. Instead of changing the formula, however, Congress has consistently enacted one-year fixes. And Republicans claim that the estimated cost of future fixes, $208 billion over the next 10 years, should be considered a cost of health care reform.

But the same spending would still be necessary if we were to undo reform. So the G.O.P. argument here is exactly like claiming that my mortgage payments, which I’ll have to make no matter what we do tonight, are a cost of going out for dinner.

There’s more like that: the G.O.P. also claims that $115 billion of other health care spending should be charged to health reform, even though the budget office has tried to explain that most of this spending would have taken place even without reform.

To be sure, the Republican analysis doesn’t rely entirely on spurious attributions of cost — it also relies on using three-card monte tricks to make money disappear. Health reform, says the budget office, will increase Social Security revenues and reduce Medicare costs. But the G.O.P. analysis says that these sums don’t count, because some people have said that these savings would also extend the life of these programs’ trust funds, so counting these savings as deficit reduction would be “double-counting,” because — well, actually it doesn’t make any sense, but it sounds impressive.


So, is the Republican leadership unable to see through childish logical fallacies? No.

The key to understanding the G.O.P. analysis of health reform is that the party’s leaders are not, in fact, opposed to reform because they believe it will increase the deficit. Nor are they opposed because they seriously believe that it will be “job-killing” (which it won’t be). They’re against reform because it would cover the uninsured — and that’s something they just don’t want to do.

And it’s not about the money. As I tried to explain in my last column, the modern G.O.P. has been taken over by an ideology in which the suffering of the unfortunate isn’t a proper concern of government, and alleviating that suffering at taxpayer expense is immoral, never mind how little it costs.

Given that their minds were made up from the beginning, top Republicans weren’t interested in and didn’t need any real policy analysis — in fact, they’re basically contemptuous of such analysis, something that shines through in their health care report. All they ever needed or wanted were some numbers and charts to wave at the press, fooling some people into believing that we’re having some kind of rational discussion. We aren’t.

http://www.nytimes.com/2011/01/17/opinion/17krugman.html?src=me&ref=general


How Did Economists Get It So Wrong?

By PAUL KRUGMAN

Jason Lutes

I. MISTAKING BEAUTY FOR TRUTH

It’s hard to believe now, but not long ago economists were congratulating themselves over the success of their field. Those successes — or so they believed — were both theoretical and practical, leading to a golden era for the profession. On the theoretical side, they thought that they had resolved their internal disputes. Thus, in a 2008 paper titled “The State of Macro” (that is, macroeconomics, the study of big-picture issues like recessions), Olivier Blanchard of M.I.T., now the chief economist at the International Monetary Fund, declared that “the state of macro is good.” The battles of yesteryear, he said, were over, and there had been a “broad convergence of vision.” And in the real world, economists believed they had things under control: the “central problem of depression-prevention has been solved,” declared Robert Lucas of the University of Chicago in his 2003 presidential address to the American Economic Association. In 2004, Ben Bernanke, a former Princeton professor who is now the chairman of the Federal Reserve Board, celebrated the Great Moderation in economic performance over the previous two decades, which he attributed in part to improved economic policy making. Last year, everything came apart.


Few economists saw our current crisis coming, but this predictive failure was the least of the field’s problems. More important was the profession’s blindness to the very possibility of catastrophic failures in a market economy. During the golden years, financial economists came to believe that markets were inherently stable — indeed, that stocks and other assets were always priced just right. There was nothing in the prevailing models suggesting the possibility of the kind of collapse that happened last year. Meanwhile, macroeconomists were divided in their views. But the main division was between those who insisted that free-market economies never go astray and those who believed that economies may stray now and then but that any major deviations from the path of prosperity could and would be corrected by the all-powerful Fed. Neither side was prepared to cope with an economy that went off the rails despite the Fed’s best efforts.

And in the wake of the crisis, the fault lines in the economics profession have yawned wider than ever. Lucas says the Obama administration’s stimulus plans are “schlock economics,” and his Chicago colleague John Cochrane says they’re based on discredited “fairy tales.” In response, Brad DeLong of the University of California, Berkeley, writes of the “intellectual collapse” of the Chicago School, and I myself have written that comments from Chicago economists are the product of a Dark Age of macroeconomics in which hard-won knowledge has been forgotten.

What happened to the economics profession? And where does it go from here? As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth. Until the Great Depression, most economists clung to a vision of capitalism as a perfect or nearly perfect system. That vision wasn’t sustainable in the face of mass unemployment, but as memories of the Depression faded, economists fell back in love with the old, idealized vision of an economy in which rational individuals interact in perfect markets, this time gussied up with fancy equations. The renewed romance with the idealized market was, to be sure, partly a response to shifting political winds, partly a response to financial incentives. But while sabbaticals at the Hoover Institution and job opportunities on Wall Street are nothing to sneeze at, the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess. Unfortunately, this romanticized and sanitized vision of the economy led most economists to ignore all the things that can go wrong. They turned a blind eye to the limitations of human rationality that often lead to bubbles and busts; to the problems of institutions that run amok; to the imperfections of markets — especially financial markets — that can cause the economy’s operating system to undergo sudden, unpredictable crashes; and to the dangers created when regulators don’t believe in regulation.

It’s much harder to say where the economics profession goes from here. But what’s almost certain is that economists will have to learn to live with messiness. That is, they will have to acknowledge the importance of irrational and often unpredictable behavior, face up to the often idiosyncratic imperfections of markets and accept that an elegant economic “theory of everything” is a long way off.
In practical terms, this will translate into more cautious policy advice — and a reduced willingness to dismantle economic safeguards in the faith that markets will solve all problems.

II. FROM SMITH TO KEYNES AND BACK

The birth of economics as a discipline is usually credited to Adam Smith, who published “The Wealth of Nations” in 1776. Over the next 160 years an extensive body of economic theory was developed, whose central message was: Trust the market. Yes, economists admitted that there were cases in which markets might fail, of which the most important was the case of “externalities” — costs that people impose on others without paying the price, like traffic congestion or pollution. But the basic presumption of “neoclassical” economics (named after the late-19th-century theorists who elaborated on the concepts of their “classical” predecessors) was that we should have faith in the market system.

This faith was, however, shattered by the Great Depression. Actually, even in the face of total collapse some economists insisted that whatever happens in a market economy must be right: “Depressions are not simply evils,” declared Joseph Schumpeter in 1934 — 1934! They are, he added, “forms of something which has to be done.” But many, and eventually most, economists turned to the insights of John Maynard Keynes for both an explanation of what had happened and a solution to future depressions. Keynes did not, despite what you may have heard, want the government to run the economy. He described his analysis in his 1936 masterwork, “The General Theory of Employment, Interest and Money,” as “moderately conservative in its implications.” He wanted to fix capitalism, not replace it. But he did challenge the notion
that free-market economies can function without a minder, expressing particular contempt for financial markets, which he viewed as being dominated by short-term speculation with little regard for fundamentals. And he called for active government intervention — printing more money and, if necessary, spending heavily on public works — to fight unemployment during slumps. It’s important to understand that Keynes did much more than make bold assertions. “The General Theory” is a work of profound, deep analysis — analysis that persuaded the best young economists of the day. Yet the story of economics over the past half century is, to a large degree, the story of a retreat from Keynesianism and a return to neoclassicism. The neoclassical revival was initially led by Milton Friedman of the University of Chicago, who asserted as early as 1953 that neoclassical economics works well enough as a description of the way the economy actually functions to be “both extremely fruitful and deserving of much confidence.” But what about depressions? Friedman’s counterattack against Keynes began with the doctrine known as monetarism. Monetarists didn’t disagree in principle with the idea that a market economy needs deliberate stabilization. “We are all Keynesians now,” Friedman once said, although he later claimed he was quoted out of context. Monetarists asserted, however, that a very limited, circumscribed form of government intervention — namely, instructing central banks to keep the nation’s money supply, the sum of cash in circulation and bank deposits, growing on a steady path — is all that’s required to prevent depressions. Famously, Friedman and his collaborator, Anna Schwartz, argued that if the Federal Reserve had done its job properly, the Great Depression would not have happened. Later, Friedman made a compelling case against any deliberate effort by government to push unemployment below its “natural” level (currently thought to be about 4.8 percent in the United States): excessively expansionary policies, he predicted, would lead to a combination of inflation and high unemployment — a prediction that was borne out by the stagflation of the 1970s, which greatly advanced the credibility of the anti-Keynesian movement. Eventually, however, the anti-Keynesian counterrevolution went far beyond Friedman’s position, which came to seem relatively moderate compared with what his successors were saying. Among financial economists, Keynes’s disparaging vision of financial markets as a “casino” was replaced by “efficient market” theory, which asserted that financial markets always get asset prices right given the available information. Meanwhile, many macroeconomists completely rejected Keynes’s framework for understanding economic slumps. Some returned to the view of Schumpeter and other apologists for the Great Depression, viewing recessions as a good thing, part of the economy’s adjustment to change. And even those not willing to go that far argued that any attempt to fight an economic slump would do more harm than good. Not all macroeconomists were willing to go down this road: many became self-described New Keynesians, who continued to believe in an active role for the government. Yet even they mostly accepted the notion that investors and consumers are rational and that markets generally get it right. 
Of course, there were exceptions to these trends: a few economists challenged the assumption of rational behavior, questioned the belief that financial markets can be trusted and pointed to the long history of financial crises that had devastating economic consequences. But they were swimming against the tide, unable to make much headway against a pervasive and, in retrospect, foolish complacency.

III. PANGLOSSIAN FINANCE

In the 1930s, financial markets, for obvious reasons, didn’t get much respect. Keynes compared them to “those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those that he thinks likeliest to catch the fancy of the other competitors.” And Keynes considered it a very bad idea to let such markets, in which speculators spent their time chasing one another’s tails, dictate important business decisions: “When the capital development of a country becomes a by-product of the activities of a casino, the job is likely to be ill-done.”

By 1970 or so, however, the study of financial markets seemed to have been taken over by Voltaire’s Dr. Pangloss, who insisted that we live in the best of all possible worlds. Discussion of investor irrationality, of bubbles, of destructive speculation had virtually disappeared from academic discourse. The field was dominated by the “efficient-market hypothesis,” promulgated by Eugene Fama of the University of Chicago, which claims that financial markets price assets precisely at their intrinsic worth given all publicly available information. (The price of a company’s stock, for example, always accurately reflects the company’s value
given the information available on the company’s earnings, its business prospects and so on.) And by the 1980s, finance economists, notably Michael Jensen of the Harvard Business School, were arguing that because financial markets always get prices right, the best thing corporate chieftains can do, not just for themselves but for the sake of the economy, is to maximize their stock prices. In other words, finance economists believed that we should put the capital development of the nation in the hands of what Keynes had called a “casino.”

It’s hard to argue that this transformation in the profession was driven by events. True, the memory of 1929 was gradually receding, but there continued to be bull markets, with widespread tales of speculative excess, followed by bear markets. In 1973-4, for example, stocks lost 48 percent of their value. And the 1987 stock crash, in which the Dow plunged nearly 23 percent in a day for no clear reason, should have raised at least a few doubts about market rationality. These events, however, which Keynes would have considered evidence of the unreliability of markets, did little to blunt the force of a beautiful idea. The theoretical model that finance economists developed by assuming that every investor rationally balances risk against reward — the so-called Capital Asset Pricing Model, or CAPM (pronounced cap-em) — is wonderfully elegant. And if you accept its premises it’s also extremely useful. CAPM not only tells you how to choose your portfolio — even more important from the financial industry’s point of view, it tells you how to put a price on financial derivatives, claims on claims. The elegance and apparent usefulness of the new theory led to a string of Nobel prizes for its creators, and many of the theory’s adepts also received more mundane rewards: Armed with their new models and formidable math skills — the more arcane uses of CAPM require physicist-level computations — mild-mannered business-school professors could and did become Wall Street rocket scientists, earning Wall Street paychecks.

To be fair, finance theorists didn’t accept the efficient-market hypothesis merely because it was elegant, convenient and lucrative. They also produced a great deal of statistical evidence, which at first seemed strongly supportive. But this evidence was of an oddly limited form. Finance economists rarely asked the seemingly obvious (though not easily answered) question of whether asset prices made sense given real-world fundamentals like earnings. Instead, they asked only whether asset prices made sense given other asset prices. Larry Summers, now the top economic adviser in the Obama administration, once mocked finance professors with a parable about “ketchup economists” who “have shown that two-quart bottles of ketchup invariably sell for exactly twice as much as one-quart bottles of ketchup,” and conclude from this that the ketchup market is perfectly efficient.

But neither this mockery nor more polite critiques from economists like Robert Shiller of Yale had much effect. Finance theorists continued to believe that their models were essentially right, and so did many people making real-world decisions. Not least among these was Alan Greenspan, who was then the Fed chairman and a long-time supporter of financial deregulation whose rejection of calls to rein in subprime lending or address the ever-inflating housing bubble rested in large part on the belief that modern financial economics had everything under control.
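For readers who want the formula behind the acronym, the CAPM mentioned above boils down to a single pricing relation (standard textbook notation, not drawn from the essay itself):

\[ E[R_i] = R_f + \beta_i\left(E[R_m] - R_f\right), \qquad \beta_i = \frac{\mathrm{Cov}(R_i, R_m)}{\mathrm{Var}(R_m)} \]

where R_i is the return on asset i, R_m the return on the market portfolio and R_f the risk-free rate. A single number, beta, is taken to summarize all of an asset's priced risk, which is precisely the kind of elegance the essay argues the profession mistook for truth.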
There was a telling moment in 2005, at a conference held to honor Greenspan’s tenure at the Fed. One brave attendee, Raghuram Rajan (of the University of Chicago, surprisingly), presented a paper warning that the financial system was taking on potentially dangerous levels of risk. He was mocked by almost all present — including, by the way, Larry Summers, who dismissed his warnings as “misguided.”

By October 2008, however, Greenspan was admitting that he was in a state of “shocked disbelief,” because “the whole intellectual edifice” had “collapsed.” Since this collapse of the intellectual edifice was also a collapse of real-world markets, the result was a severe recession — the worst, by many measures, since the Great Depression. What should policy makers do? Unfortunately, macroeconomics, which should have been providing clear guidance about how to address the slumping economy, was in its own state of disarray.

IV. THE TROUBLE WITH MACRO

“We have involved ourselves in a colossal muddle, having blundered in the control of a delicate machine, the working of which we do not understand. The result is that our possibilities of wealth may run to waste for a time — perhaps for a long time.” So wrote John Maynard Keynes in an essay titled “The Great Slump of 1930,” in which he tried to explain the catastrophe then overtaking the world. And the world’s possibilities of wealth did indeed run to waste for a long time; it took World War II to bring the Great Depression to a definitive end.

Why was Keynes’s diagnosis of the Great Depression as a “colossal muddle” so compelling at first? And why did economics, circa 1975, divide into opposing camps over the value of Keynes’s views?

I like to explain the essence of Keynesian economics with a true story that also serves as a parable, a small-scale version of the messes that can afflict entire economies. Consider the travails of the Capitol Hill Baby-Sitting Co-op.

This co-op, whose problems were recounted in a 1977 article in The Journal of Money, Credit and Banking, was an association of about 150 young couples who agreed to help one another by baby-sitting for one another’s children when parents wanted a night out. To ensure that every couple did its fair share of baby-sitting, the co-op introduced a form of scrip: coupons made out of heavy pieces of paper, each entitling the bearer to one half-hour of sitting time. Initially, members received 20 coupons on joining and were required to return the same amount on departing the group.

Unfortunately, it turned out that the co-op’s members, on average, wanted to hold a reserve of more than 20 coupons, perhaps, in case they should want to go out several times in a row. As a result, relatively few people wanted to spend their scrip and go out, while many wanted to baby-sit so they could add to their hoard. But since baby-sitting opportunities arise only when someone goes out for the night, this meant that baby-sitting jobs were hard to find, which made members of the co-op even more reluctant to go out, making baby-sitting jobs even scarcer. . . . In short, the co-op fell into a recession.

O.K., what do you think of this story? Don’t dismiss it as silly and trivial: economists have used small-scale examples to shed light on big questions ever since Adam Smith saw the roots of economic progress in a pin factory, and they’re right to do so. The question is whether this particular example, in which a recession is a problem of inadequate demand — there isn’t enough demand for baby-sitting to provide jobs for everyone who wants one — gets at the essence of what happens in a recession.

Forty years ago most economists would have agreed with this interpretation. But since then macroeconomics has divided into two great factions: “saltwater” economists (mainly in coastal U.S. universities), who have a more or less Keynesian vision of what recessions are all about; and “freshwater” economists (mainly at inland schools), who consider that vision nonsense.

Freshwater economists are, essentially, neoclassical purists. They believe that all worthwhile economic analysis starts from the premise that people are rational and markets work, a premise violated by the story of the baby-sitting co-op. As they see it, a general lack of sufficient demand isn’t possible, because prices always move to match supply with demand. If people want more baby-sitting coupons, the value of those coupons will rise, so that they’re worth, say, 40 minutes of baby-sitting rather than half an hour — or, equivalently, the cost of an hour’s baby-sitting would fall from 2 coupons to 1.5. And that would solve the problem: the purchasing power of the coupons in circulation would have risen, so that people would feel no need to hoard more, and there would be no recession.

But don’t recessions look like periods in which there just isn’t enough demand to employ everyone willing to work? Appearances can be deceiving, say the freshwater theorists.
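
The arithmetic of the co-op’s slump is simple enough to check in a few lines of code. The sketch below is not from the essay: the 150 couples and the 20-coupon initial allotment come from the story above, but the matching rule, the monthly probability of wanting a night out and the desired-reserve figures are invented purely for illustration.

```python
import random

random.seed(1)

N_COUPLES = 150        # size of the co-op, from the article
INITIAL_COUPONS = 20   # scrip issued to each couple on joining, from the article


def average_nights_out(desired_reserve, months=24, p_wants_out=0.5):
    """Toy model: each month every couple would like a night out with probability
    p_wants_out, but spends a coupon only if its holdings exceed desired_reserve.
    Every other couple is available to baby-sit, and each night out pairs one
    spender with one sitter. Returns the average number of nights out per month."""
    coupons = [INITIAL_COUPONS] * N_COUPLES
    total = 0
    for _ in range(months):
        wants_out = [i for i in range(N_COUPLES) if random.random() < p_wants_out]
        spenders = [i for i in wants_out if coupons[i] > desired_reserve]
        sitters = [i for i in range(N_COUPLES) if i not in spenders]
        random.shuffle(sitters)
        for spender, sitter in zip(spenders, sitters):  # one coupon changes hands per match
            coupons[spender] -= 1
            coupons[sitter] += 1
            total += 1
    return total / months


print(average_nights_out(desired_reserve=10))  # modest reserves: a busy co-op
print(average_nights_out(desired_reserve=25))  # everyone hoards: activity collapses to zero
```

When the reserve every couple wants to hold exceeds the scrip actually in circulation, nights out fall to zero even though everyone would be happy to baby-sit, which is exactly the demand-shortfall recession the parable describes. The freshwater rebuttal, taken up next, is that the price of a coupon should simply adjust.
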
Sound economics, in their view, says that overall failures of demand can’t happen — and that means that they don’t. Keynesian economics has been “proved false,” Cochrane, of the University of Chicago, says. Yet recessions do happen. Why? In the 1970s the leading freshwater macroeconomist, the Nobel laureate Robert Lucas, argued that recessions were caused by temporary confusion: workers and companies had trouble distinguishing overall changes in the level of prices because of inflation or deflation from changes in their own particular business situation. And Lucas warned that any attempt to fight the business cycle would be counterproductive: activist policies, he argued, would just add to the confusion. By the 1980s, however, even this severely limited acceptance of the idea that recessions are bad things had been rejected by many freshwater economists. Instead, the new leaders of the movement, especially Edward Prescott, who was then at the University of Minnesota (you can see where the freshwater moniker comes from), argued that price fluctuations and changes in demand actually had nothing to do with the business cycle. Rather, the business cycle reflects fluctuations in the rate of technological progress, which are amplified by the rational response of workers, who voluntarily work more when the environment is favorable and less when it’s unfavorable. Unemployment is a deliberate decision by workers to take time off. Put baldly like that, this theory sounds foolish — was the Great Depression really the Great Vacation? And to be honest, I think it really is silly. But the basic premise of Prescott’s “real business cycle” theory was
embedded in ingeniously constructed mathematical models, which were mapped onto real data using sophisticated statistical techniques, and the theory came to dominate the teaching of macroeconomics in many university departments. In 2004, reflecting the theory’s influence, Prescott shared a Nobel with Finn Kydland of Carnegie Mellon University.

Meanwhile, saltwater economists balked. Where the freshwater economists were purists, saltwater economists were pragmatists. While economists like N. Gregory Mankiw at Harvard, Olivier Blanchard at M.I.T. and David Romer at the University of California, Berkeley, acknowledged that it was hard to reconcile a Keynesian demand-side view of recessions with neoclassical theory, they found the evidence that recessions are, in fact, demand-driven too compelling to reject. So they were willing to deviate from the assumption of perfect markets or perfect rationality, or both, adding enough imperfections to accommodate a more or less Keynesian view of recessions. And in the saltwater view, active policy to fight recessions remained desirable.

But the self-described New Keynesian economists weren’t immune to the charms of rational individuals and perfect markets. They tried to keep their deviations from neoclassical orthodoxy as limited as possible. This meant that there was no room in the prevailing models for such things as bubbles and banking-system collapse. The fact that such things continued to happen in the real world — there was a terrible financial and macroeconomic crisis in much of Asia in 1997-8 and a depression-level slump in Argentina in 2002 — wasn’t reflected in the mainstream of New Keynesian thinking.

Even so, you might have thought that the differing worldviews of freshwater and saltwater economists would have put them constantly at loggerheads over economic policy. Somewhat surprisingly, however, between around 1985 and 2007 the disputes between freshwater and saltwater economists were mainly about theory, not action. The reason, I believe, is that New Keynesians, unlike the original Keynesians, didn’t think fiscal policy — changes in government spending or taxes — was needed to fight recessions. They believed that monetary policy, administered by the technocrats at the Fed, could provide whatever remedies the economy needed.

At a 90th birthday celebration for Milton Friedman, Ben Bernanke, formerly a more or less New Keynesian professor at Princeton, and by then a member of the Fed’s governing board, declared of the Great Depression: “You’re right. We did it. We’re very sorry. But thanks to you, it won’t happen again.” The clear message was that all you need to avoid depressions is a smarter Fed. And as long as macroeconomic policy was left in the hands of the maestro Greenspan, without Keynesian-type stimulus programs, freshwater economists found little to complain about. (They didn’t believe that monetary policy did any good, but they didn’t believe it did any harm, either.)

It would take a crisis to reveal both how little common ground there was and how Panglossian even New Keynesian economics had become.

V. NOBODY COULD HAVE PREDICTED . . .

In recent, rueful economics discussions, an all-purpose punch line has become “nobody could have predicted. . . .” It’s what you say with regard to disasters that could have been predicted, should have been predicted and actually were predicted by a few economists who were scoffed at for their pains. Take, for example, the precipitous rise and fall of housing prices.
Some economists, notably Robert Shiller, did identify the bubble and warn of painful consequences if it were to burst. Yet key policy makers failed to see the obvious. In 2004, Alan Greenspan dismissed talk of a housing bubble: “a national severe price distortion,” he declared, was “most unlikely.” Home-price increases, Ben Bernanke said in 2005, “largely reflect strong economic fundamentals.” How did they miss the bubble? To be fair, interest rates were unusually low, possibly explaining part of the price rise. It may be that Greenspan and Bernanke also wanted to celebrate the Fed’s success in pulling the economy out of the 2001 recession; conceding that much of that success rested on the creation of a monstrous bubble would have placed a damper on the festivities. But there was something else going on: a general belief that bubbles just don’t happen. What’s striking, when you reread Greenspan’s assurances, is that they weren’t based on evidence — they were based on the a priori assertion that there simply can’t be a bubble in housing. And the finance theorists were even more adamant on this point. In a 2007 interview, Eugene Fama, the father of the efficient-market hypothesis, declared that “the word ‘bubble’ drives me nuts,” and went on to explain why we can trust the housing market: “Housing markets are less liquid, but people are very careful when they buy houses. It’s typically the biggest investment they’re going to make, so they look around very carefully and they compare prices. The bidding process is very detailed.”

Indeed, home buyers generally do carefully compare prices — that is, they compare the price of their potential purchase with the prices of other houses. But this says nothing about whether the overall price of houses is justified. It’s ketchup economics, again: because a two-quart bottle of ketchup costs twice as much as a one-quart bottle, finance theorists declare that the price of ketchup must be right.

In short, the belief in efficient financial markets blinded many if not most economists to the emergence of the biggest financial bubble in history. And efficient-market theory also played a significant role in inflating that bubble in the first place. Now that the undiagnosed bubble has burst, the true riskiness of supposedly safe assets has been revealed and the financial system has demonstrated its fragility. U.S. households have seen $13 trillion in wealth evaporate. More than six million jobs have been lost, and the unemployment rate appears headed for its highest level since 1940. So what guidance does modern economics have to offer in our current predicament? And should we trust it?

VI. THE STIMULUS SQUABBLE

Between 1985 and 2007 a false peace settled over the field of macroeconomics. There hadn’t been any real convergence of views between the saltwater and freshwater factions. But these were the years of the Great Moderation — an extended period during which inflation was subdued and recessions were relatively mild. Saltwater economists believed that the Federal Reserve had everything under control. Freshwater economists didn’t think the Fed’s actions were actually beneficial, but they were willing to let matters lie.

But the crisis ended the phony peace. Suddenly the narrow, technocratic policies both sides were willing to accept were no longer sufficient — and the need for a broader policy response brought the old conflicts out into the open, fiercer than ever. Why weren’t those narrow, technocratic policies sufficient? The answer, in a word, is zero.

During a normal recession, the Fed responds by buying Treasury bills — short-term government debt — from banks. This drives interest rates on government debt down; investors seeking a higher rate of return move into other assets, driving other interest rates down as well; and normally these lower interest rates eventually lead to an economic bounceback. The Fed dealt with the recession that began in 1990 by driving short-term interest rates from 9 percent down to 3 percent. It dealt with the recession that began in 2001 by driving rates from 6.5 percent to 1 percent. And it tried to deal with the current recession by driving rates down from 5.25 percent to zero.

But zero, it turned out, isn’t low enough to end this recession. And the Fed can’t push rates below zero, since at near-zero rates investors simply hoard cash rather than lending it out. So by late 2008, with interest rates basically at what macroeconomists call the “zero lower bound” even as the recession continued to deepen, conventional monetary policy had lost all traction. Now what?

This is the second time America has been up against the zero lower bound, the previous occasion being the Great Depression. And it was precisely the observation that there’s a lower bound to interest rates that led Keynes to advocate higher government spending: when monetary policy is ineffective and the private sector can’t be persuaded to spend more, the public sector must take its place in supporting the economy.
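
A quick tally of the rate cuts quoted above makes the point concrete (these are the essay’s own figures, simply lined up):

\[
9\% - 3\% = 6\ \text{points}, \qquad
6.5\% - 1\% = 5.5\ \text{points}, \qquad
5.25\% - 0\% = 5.25\ \text{points},
\]

for the recessions that began in 1990 and 2001 and for the current one, respectively. In the first two episodes the cutting could have gone further had it been needed; in the third, the Fed ran out of room before the recession ended.
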
Fiscal stimulus is the Keynesian answer to the kind of depression-type economic situation we’re currently in. Such Keynesian thinking underlies the Obama administration’s economic policies — and the freshwater economists are furious. For 25 or so years they tolerated the Fed’s efforts to manage the economy, but a full-blown Keynesian resurgence was something entirely different. Back in 1980, Lucas, of the University of Chicago, wrote that Keynesian economics was so ludicrous that “at research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another.” Admitting that Keynes was largely right, after all, would be too humiliating a comedown.

And so Chicago’s Cochrane, outraged at the idea that government spending could mitigate the latest recession, declared: “It’s not part of what anybody has taught graduate students since the 1960s. They [Keynesian ideas] are fairy tales that have been proved false. It is very comforting in times of stress to go back to the fairy tales we heard as children, but it doesn’t make them less false.” (It’s a mark of how deep the division between saltwater and freshwater runs that Cochrane doesn’t believe that “anybody” teaches ideas that are, in fact, taught in places like Princeton, M.I.T. and Harvard.)

Meanwhile, saltwater economists, who had comforted themselves with the belief that the great divide in macroeconomics was narrowing, were shocked to realize that freshwater economists hadn’t been listening at all. Freshwater economists who inveighed against the stimulus didn’t sound like scholars who had weighed
Keynesian arguments and found them wanting. Rather, they sounded like people who had no idea what Keynesian economics was about, who were resurrecting pre-1930 fallacies in the belief that they were saying something new and profound.

And it wasn’t just Keynes whose ideas seemed to have been forgotten. As Brad DeLong of the University of California, Berkeley, has pointed out in his laments about the Chicago school’s “intellectual collapse,” the school’s current stance amounts to a wholesale rejection of Milton Friedman’s ideas, as well. Friedman believed that Fed policy rather than changes in government spending should be used to stabilize the economy, but he never asserted that an increase in government spending cannot, under any circumstances, increase employment. In fact, rereading Friedman’s 1970 summary of his ideas, “A Theoretical Framework for Monetary Analysis,” what’s striking is how Keynesian it seems.

And Friedman certainly never bought into the idea that mass unemployment represents a voluntary reduction in work effort or the idea that recessions are actually good for the economy. Yet the current generation of freshwater economists has been making both arguments. Thus Chicago’s Casey Mulligan suggests that unemployment is so high because many workers are choosing not to take jobs: “Employees face financial incentives that encourage them not to work . . . decreased employment is explained more by reductions in the supply of labor (the willingness of people to work) and less by the demand for labor (the number of workers that employers need to hire).” Mulligan has suggested, in particular, that workers are choosing to remain unemployed because that improves their odds of receiving mortgage relief. And Cochrane declares that high unemployment is actually good: “We should have a recession. People who spend their lives pounding nails in Nevada need something else to do.”

Personally, I think this is crazy. Why should it take mass unemployment across the whole nation to get carpenters to move out of Nevada? Can anyone seriously claim that we’ve lost 6.7 million jobs because fewer Americans want to work? But it was inevitable that freshwater economists would find themselves trapped in this cul-de-sac: if you start from the assumption that people are perfectly rational and markets are perfectly efficient, you have to conclude that unemployment is voluntary and recessions are desirable.

Yet if the crisis has pushed freshwater economists into absurdity, it has also created a lot of soul-searching among saltwater economists. Their framework, unlike that of the Chicago School, both allows for the possibility of involuntary unemployment and considers it a bad thing. But the New Keynesian models that have come to dominate teaching and research assume that people are perfectly rational and financial markets are perfectly efficient. To get anything like the current slump into their models, New Keynesians are forced to introduce some kind of fudge factor that for reasons unspecified temporarily depresses private spending. (I’ve done exactly that in some of my own work.) And if the analysis of where we are now rests on this fudge factor, how much confidence can we have in the models’ predictions about where we are going? The state of macro, in short, is not good. So where does the profession go from here?

VII. FLAWS AND FRICTIONS

Economics, as a field, got in trouble because economists were seduced by the vision of a perfect, frictionless market system.
If the profession is to redeem itself, it will have to reconcile itself to a less alluring vision — that of a market economy that has many virtues but that is also shot through with flaws and frictions. The good news is that we don’t have to start from scratch. Even during the heyday of perfect-market economics, there was a lot of work done on the ways in which the real economy deviated from the theoretical ideal. What’s probably going to happen now — in fact, it’s already happening — is that flaws-and-frictions economics will move from the periphery of economic analysis to its center.

There’s already a fairly well developed example of the kind of economics I have in mind: the school of thought known as behavioral finance. Practitioners of this approach emphasize two things. First, many real-world investors bear little resemblance to the cool calculators of efficient-market theory: they’re all too subject to herd behavior, to bouts of irrational exuberance and unwarranted panic. Second, even those who try to base their decisions on cool calculation often find that they can’t, that problems of trust, credibility and limited collateral force them to run with the herd.

On the first point: even during the heyday of the efficient-market hypothesis, it seemed obvious that many real-world investors aren’t as rational as the prevailing models assumed. Larry Summers once began a paper on finance by declaring: “THERE ARE IDIOTS. Look around.” But what kind of idiots (the preferred term in the academic literature, actually, is “noise traders”) are we talking about? Behavioral finance, drawing on the broader movement known as behavioral economics, tries to answer that question by relating the apparent
irrationality of investors to known biases in human cognition, like the tendency to care more about small losses than small gains or the tendency to extrapolate too readily from small samples (e.g., assuming that because home prices rose in the past few years, they’ll keep on rising).

Until the crisis, efficient-market advocates like Eugene Fama dismissed the evidence produced on behalf of behavioral finance as a collection of “curiosity items” of no real importance. That’s a much harder position to maintain now that the collapse of a vast bubble — a bubble correctly diagnosed by behavioral economists like Robert Shiller of Yale, who related it to past episodes of “irrational exuberance” — has brought the world economy to its knees.

On the second point: suppose that there are, indeed, idiots. How much do they matter? Not much, argued Milton Friedman in an influential 1953 paper: smart investors will make money by buying when the idiots sell and selling when they buy and will stabilize markets in the process. But the second strand of behavioral finance says that Friedman was wrong, that financial markets are sometimes highly unstable, and right now that view seems hard to reject.

Probably the most influential paper in this vein was a 1997 publication by Andrei Shleifer of Harvard and Robert Vishny of Chicago, which amounted to a formalization of the old line that “the market can stay irrational longer than you can stay solvent.” As they pointed out, arbitrageurs — the people who are supposed to buy low and sell high — need capital to do their jobs. And a severe plunge in asset prices, even if it makes no sense in terms of fundamentals, tends to deplete that capital. As a result, the smart money is forced out of the market, and prices may go into a downward spiral.

The spread of the current financial crisis seemed almost like an object lesson in the perils of financial instability. And the general ideas underlying models of financial instability have proved highly relevant to economic policy: a focus on the depleted capital of financial institutions helped guide policy actions taken after the fall of Lehman, and it looks (cross your fingers) as if these actions successfully headed off an even bigger financial collapse.

Meanwhile, what about macroeconomics? Recent events have pretty decisively refuted the idea that recessions are an optimal response to fluctuations in the rate of technological progress; a more or less Keynesian view is the only plausible game in town. Yet standard New Keynesian models left no room for a crisis like the one we’re having, because those models generally accepted the efficient-market view of the financial sector.

There were some exceptions. One line of work, pioneered by none other than Ben Bernanke working with Mark Gertler of New York University, emphasized the way the lack of sufficient collateral can hinder the ability of businesses to raise funds and pursue investment opportunities. A related line of work, largely established by my Princeton colleague Nobuhiro Kiyotaki and John Moore of the London School of Economics, argued that prices of assets such as real estate can suffer self-reinforcing plunges that in turn depress the economy as a whole. But until now the impact of dysfunctional finance hasn’t been at the core even of Keynesian economics. Clearly, that has to change.

VIII. RE-EMBRACING KEYNES

So here’s what I think economists have to do. First, they have to face up to the inconvenient reality that financial markets fall far short of perfection, that they are subject to extraordinary delusions and the madness of crowds. Second, they have to admit — and this will be very hard for the people who giggled and whispered over Keynes — that Keynesian economics remains the best framework we have for making sense of recessions and depressions. Third, they’ll have to do their best to incorporate the realities of finance into macroeconomics.

Many economists will find these changes deeply disturbing. It will be a long time, if ever, before the new, more realistic approaches to finance and macroeconomics offer the same kind of clarity, completeness and
sheer beauty that characterizes the full neoclassical approach. To some economists that will be a reason to cling to neoclassicism, despite its utter failure to make sense of the greatest economic crisis in three generations. This seems, however, like a good time to recall the words of H. L. Mencken: “There is always an easy solution to every human problem — neat, plausible and wrong.”

When it comes to the all-too-human problem of recessions and depressions, economists need to abandon the neat but wrong solution of assuming that everyone is rational and markets work perfectly. The vision that emerges as the profession rethinks its foundations may not be all that clear; it certainly won’t be neat; but we can hope that it will have the virtue of being at least partly right.

Paul Krugman is a Times Op-Ed columnist and winner of the 2008 Nobel Memorial Prize in Economic Science. His latest book is “The Return of Depression Economics and the Crisis of 2008.”

http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html?ref=magazine&pagewanted=print

Pushing Petals Up and Down Park Ave.

By DOROTHY SPEARS

Left, Will Ryman with one of his flowers, made from fiberglass and stainless steel (Chester Higgins Jr./The New York Times). Right, a digital rendering of one of the works from Will Ryman’s “Roses,” which will sit at 59th Street and Park Avenue (WR Studio Inc.). The project will be unveiled on Jan. 25.

THE first sign of spring this year in New York may be the work of Will Ryman, whose site-specific art installation, “The Roses,” will be unveiled on Jan. 25. It will cover 10 blocks of Park Avenue with an unseasonable crop of giant pink and red rose blossoms.

Those with apartments overlooking the Park Avenue Mall between 57th and 67th Streets may feel as if they’re hallucinating when they wake up to Mr. Ryman’s clusters of nearly 40 buds ranging from 5 to 10 feet in diameter, with the longest stems among them sprouting 25 feet above the street. “I love that somebody looking out their window could be experiencing an object one way, while someone standing on the sidewalk could be looking at the same object and having a totally different experience,” Mr. Ryman, 41, said during a recent visit to his loft. “If you look at the stems, they’re sort of dancing.”

A fanciful riff on a Park Avenue tradition of displaying seasonal flowers and ornamental trees, “The Roses,” which will remain on display through May 31 and take a weekend to install, appears undeniably whimsical. It even includes 20 oversize petals to be scattered along the ground, 6 of which have been comfort tested to double as lawn chairs.

Still, anyone tempted to dismiss “The Roses” as fluff may need to look closer. Between the 1-to-2-foot beetles, bees, ladybugs and aphids peeking out of these particular flowers, and the thorns the size of dinosaurs’ teeth protruding from the blossoms’ curving stems, Mr. Ryman aims to stir a sense of foreboding that will contrast with the project’s more obvious feel-good symbolism.

His works have already generated a double-edged impression. In a review in The New York Times in 2004 Ken Johnson wrote that Mr. Ryman’s work “has a genuinely cathartic feeling about it, as though he has indeed finally allowed long-neglected feelings to come out from the shadier corners of his psyche.”

The opening sequence of the David Lynch film “Blue Velvet” is an inspiration for Mr. Ryman’s new installation. “At first, there’s this house with a white picket fence, this perfect world,” Mr. Ryman said of the film. “But then the camera pans from a cheerful bed of roses to a churning, bug-filled underworld that is primal, menacing and, I think, ultimately the truth.”

In their exaggerated scale, “The Roses,” which are fiberglass and stainless steel, evoke the Pop sculptures of Claes Oldenburg. Their surfaces, which are individually painted, tend to be bumpy and irregular and underscore Mr. Ryman’s reaction against the slicker works of artists like Jeff Koons and Takashi Murakami. (“For me, unless the hand is present, humanity is absent from the piece,” he said.) But in their cartoonish display of human expectation and failure, they also owe a powerful debt to Mr. Ryman’s lingering fascination with absurdist theater.

A Manhattan native, Mr. Ryman was born into a family of artists. (His father is the Minimalist painter Robert Ryman, and his mother, Merrill Wagner, paints abstractions on three-dimensional objects.) In a youthful attempt to buck what he jokingly referred to as “a business or a curse, depending on how you see it,” after graduating from high school he immersed himself in plays by Beckett, Sartre and Ionesco and began jotting down bits of dialogue, in the hope of establishing himself as a playwright.

While he worked a series of odd jobs, as a script reader, a prep chef and later a line cook, he “really wanted to make some noise as a playwright,” he said. “I thought that was my destiny.”

After struggling “very profoundly for years,” however, he said that by the age of 32 he had to accept that “my plays weren’t really commercial, as much as I wanted them to be.”

His realization led to a shift in materials. Instead of scribbling scenes in his journals — towering stacks of which cluttered his desk at his loft, bearing witness to the magnitude of his efforts — he resolved to make sculptures of his characters, the better to envision them more fully.

In a cramped studio apartment, measuring roughly 600 square feet, he said, “I basically took apart my bookshelves and my coat tree and got some papier-mâché and built a hundred or so figures about four feet tall.”

“I wanted to invent a new kind of theater,” he added, “in which actors were removed, and props somehow told the story.” In 2002 he moved into his current loft with the goal of turning it into a theater. After constructing nine scenarios with what he called his “crazy, disturbed figures,” he invited people he knew from the theater and film world to see his bizarre, nonnarrative production. As luck would have it, one of Mr. Ryman’s friends happened to bring the art dealer Tanja Grunert, who surprised him by proposing that he feature his dejected-looking characters in a solo show at her gallery in Chelsea.

Like a wry comment on his father’s pared-down paintings, which gravitate heavily toward white, Mr. Ryman’s “Pit,” exhibited at Klemens Gasser & Tanja Grunert in the summer of 2004, presented a white space the size of a room with an exterior staircase leading up to a platform. From there viewers were invited to peer into a gloomy hollow, painted entirely black, from which a hundred or so childlike figures in tennis shoes gazed up plaintively.

“The Pit” was included in “Greater New York,” a prominent roundup of emerging artists hosted by P.S. 1 Contemporary Art Center in 2005. Klaus Biesenbach, the center’s director and one of the show’s curators,
described Mr. Ryman’s piece as “very dramatic,” adding that it attracted intense interest among curators and critics.

In “Wall Street,” a subsequent installation from 2008 at 7 World Trade Center, Mr. Ryman presented some 15 characters ranging from three-inch-tall businessmen to figures like a homeless man sifting through trash and a hot dog vendor that rose to nearly 15 feet in height.

Mr. Ryman’s dramatic scale shifts persist. And while roses present a departure from his previous works in their replacement of the human figure, Mr. Ryman said he’ll rely upon Park Avenue residents and passers-by “to complete my piece.”

After all, as Adrian Benepe, New York’s parks and recreation commissioner, who organized “The Roses” with the sculpture committee of the Fund for Park Avenue, said, “A giant rose gets you thinking about public spaces in the city and your relationship to them.”

“Plus,” he added, “it’s a touch of color in the winter.”

http://www.nytimes.com/2011/01/16/arts/design/16ryman.html?ref=design

The Grass Is Fake, but the Splendor Real

By ARIEL KAMINER

Emily Berl for The New York Times. The grass and trees in Park Here are fake, but some visitors stay all day. Margot Spiker and Patrick Kelly read by artificial light amid plastic foliage stapled to wooden trunks.

Last Wednesday, the day of the snowpocalypse — or was it the snowmageddon, or the snow-my-God or whatever else the weather-fatigued headline writers had resorted to — Central Park was awash in white. Prospect Park in Brooklyn was a freezing no man’s land. And Van Cortlandt Park in the Bronx was almost invisible under all that the wind had delivered. But at the park on Mulberry Street, between Kenmare and Spring, summer was just unfolding. Warm, happy people were peeling down to their T-shirts and soaking in the sunshine, while others were spreading picnic blankets and gazing up through the lush canopy of foliage. Some flopped down on the hearty carpet of green, curled up against one another and, lulled by the gentle chirping of birds, settled in for a nap.

It’s not truly a park, at least not in any sense that the parks department might recognize; it is the simulacrum of a park, an indoor copy that in weather like this becomes more real than the city’s broad but dormant expanses. The pseudopark, which occupies the Openhouse Gallery through the end of the month and which is open to the public every day from 11 a.m. to 6 p.m., beckons visitors with a vibrant gardenlike environment and a warm, sunny glow (along with, at certain hours, food vendors like Luke’s Lobster and Mexicue).

It’s no great feat of agricultural engineering. The floor covering is artificial turf, not sod, despite the example that “The New York Earth Room,” just a few blocks away, has set for decades. The trees are plastic foliage stapled to wooden trunks. The sunlight emanates from light boxes designed to treat seasonal affective disorder. The birds chirp through a sound system. And beyond that? Park Here, as it’s called, is just the same white walls you see in any gallery, with exposed electric fixtures, hanging theatrical lights and a big industrial heating unit.

It shouldn’t fool anyone. And yet it does: office workers looking for a break, couples looking for each other’s arms, the daily yoga class in the corner and, of course, the inevitable stroller brigade, all just relaxing and playing and letting down their guard in a way they would never do if the fake foliage was not there. Some arrive when the park opens and stay all day.

“This used to be an art gallery,” one of the picnickers told his barefoot friend.

“This is an art gallery,” she said authoritatively. “This is someone’s installation.”

Not quite. Openhouse calls itself a gallery but functions mostly as an events space, a rentable temporary home for pop-up shops, parties, publicity events and the like. The so-called installation is the work not of a solitary artist exploring the tension between nature and artifice but of a series of corporate partnerships set in motion by the people who run the place. For Openhouse, then, Park Here is nothing revolutionary, just a clever way to keep the place’s name in circulation during a slow season.

Fine. But for the people who amble in, flop down, spread out, lie on top of each other, flirt, relax, catch up or check out, it’s something more: an experiment in urban sociology.

Strolling around the place and watching the strangers at play, Dalton Conley, a New York University sociologist who has written about growing up in the city, observed that it was a quintessential New York phenomenon. “One of the factors which, despite perceptions, makes it easy to parent here is that there are no backyards, so you’re not atomized,” Professor Conley said. “You just go to a park,” he said, and automatically find a bunch of other kids to play with.

Parks have the same effect on adults, throwing them into close and easy proximity, and promoting unexpected social encounters. Similar results have been achieved in other unnatural settings, most recently when Pipilotti Rist took over MoMA’s second-floor atrium with an oversize video installation and an enormous round couch on which viewers could just lie back and take it — and each other — all in. But that was under the protective cover of high art. It was critically sanctioned. It was safe. Park Here, in contrast, is just some random storefront, and the people flopped about it don’t necessarily have anything more in common than a preference for being inside to being outside. (Or is it the other way around?)

“As a permanent thing, people probably would say, ‘We need real grass,’ ” Professor Conley said. “But as a temporary thing, they accept the lack of verisimilitude. In fact, I bet some of it is ironic.” Maybe it’s more fun, that is, because it’s more fake.

Or maybe, in a city so starved for nature that a stroll through a greenmarket can feel like a restorative encounter with the great outdoors, New Yorkers are simply willing to cling to whatever crude substitutes we can find. Maybe we are like those monkey babies in the psychology experiments who pathetically tried to hug a terrycloth doll. In the depths of a January snowstorm, even a terrycloth doll is better than nothing.

And the results of this experiment suggest that the ability to respond to even such unconvincing stimuli is an adaptive trait for urban survival. New Yorkers like to think no one can put anything over on them. But the field data from Park Here shows we enjoy putting one over on ourselves.

At some point, during a long and lazy afternoon — sometime after lunch but before my nap — I sneezed.

“Gesundheit,” said one of the people lounging by the seesaw.

“Allergies,” said her friend.

http://www.nytimes.com/2011/01/16/nyregion/16critic.html?ref=design

An Artful Environmental Impact Statement

By ROBERTA SMITH

J. Henry Fair/Gerald Peters Gallery. “Abstraction of Destruction”: J. Henry Fair’s aerial photographs in this show at Gerald Peters Gallery include “Lightning Rods” (2009), depicting a holding tank at an oil sands facility in Alberta, Canada.

The vivid color photographs of J. Henry Fair lead an uneasy double life as potent records of environmental pollution and as ersatz evocations of abstract painting. This makes “Abstraction of Destruction,” his exhibition at the Gerald Peters Gallery, a strange battle between medium and message, between harsh truths and trite, generic beauty. In one way these views of unnatural disasters belong to the great tradition of photojournalistic muckraking; the word could not be more appropriate in this context. Mr. Fair takes his photographs from airplanes, and occasionally helicopters, often capturing sights deliberately hidden from public view. His subjects include environmental degradation perpetuated on a regular, usually daily basis by paper mills, fertilizer factories, power plants, coal mining operations and oil companies. They include the Deepwater Horizon oil spill. Not only does the airplane provide access to restricted areas, it also makes possible panoramic vistas that convey the frightening scale of the destruction, creating the feeling that humankind has wrought its own form of irreversible natural catastrophe. In these vistas you can almost literally watch the poison spread across vast areas of land and sea, creating stains and patterns in a startling palette of deathly grays, lurid rusts and chemical greens and blues. But there is also a reductive side to this process: the expanses of color and texture in Mr. Fair’s pictures often bring to mind slick, printed versions of Abstract Expressionist painting. These images constitute a kind of toxic sublime. They are most shocking in “The Day After Tomorrow: Images of Our Earth in Crisis,” a book of Mr. Fair’s photographs scheduled for release next month by powerHouse Books. In the book the sights of things like the white, flowerlike aerating ponds of a paper mill — which affect the water table for miles around — provide visceral testimony to the monumental despoiling of the planet. Thanks partly to straightforward, detailed captions, the images’ communicative power and visual force are held in balance. That they look oddly ravishing does not obscure the ravaging they depict.

But it is a different story at the gallery. The main problem — as the show’s corny title indicates — is that the gallery images seem to have been selected mainly for their abstract impact. While there is considerable overlap between the book and the show, the images in the gallery depict far less often — almost not at all — the factories, smokestacks and earthmovers that do the damage or even the dikes that often contain the sludgy refuse. (Perhaps they will reappear in the images chosen for an exhibition of Mr. Fair’s work opening at Cooper Union on Thursday.) Instead we see mostly effluvia of gorgeous color that bring to mind the painterly strategies of Abstract Expressionists like Barnett Newman and Clyfford Still and countless followers.

This focus reworks ground — albeit with bright colors and large prints — already broken by photographers like Harry Callahan and Aaron Siskind. They proved over a half-century ago that they could find motifs in the real world similar to those created by their Abstract Expressionist contemporaries, as is currently demonstrated by the juxtaposition of their work in “Abstract Expressionist New York” at the Museum of Modern Art.

In certain images the sheer unnaturalness of the color sets off alarms, as with the lime-green water in a waste pond for a herbicide manufacturer in Luling, La. Here and in many other cases the colors seem too intense and artificial — or arty — to be good for the earth. Elsewhere a certain fuzziness eliminates detail and makes the scale of the images — and thus the extent of the damage — ambiguous. Sometimes we might almost be looking at luscious expanses of paint poured over a rough surface, as in the field of royal blue bisected by a single vein of red in a photograph of the Deepwater Horizon spill. In “Coal Slurry,” taken above Kayford Mountain, W.Va., in 2005, it is hard to tell if the river of white and gray waste running down a crenellated slope is several hundred yards long or several miles. There is a clearer sense of scale to a different image, taken above an aluminum plant in Darrow, La., where white foam flows through dry waste into an area submerged in water; both are tinted shades of dark red by bauxite, the ore that yields aluminum.

Mr. Fair’s presentation at Gerald Peters is shot through with ambivalence about the relative value of art and documentary. The desire that the images be seen as art seems implicit in the trivializing titles (nowhere evident in the book) that Mr. Fair has added to the images on view, even though the labels go on to provide more factual details. For example, the sinister bauxite landscape described above, one of the most powerful images in the show, is unfortunately titled “Expectoration.” “Lightning Rods” pictures a holding tank’s coagulated orange liquid and the metal walkway jutting over it at an oil-sands upgrader plant in northern Canada. The fact that one of these plants caught fire this month doesn’t improve the word choice. Too often — minus the telltale details that provide a sense of scale and also implicate human actions — the images read first and foremost as slick jokes about painting. They evoke the work that usually falls on what might be generously called art’s lightweight side, from Bouguereau’s academic nudes to Dale Chihuly glass sculptures. Mr. Fair’s most artistically powerful images are the most concrete, conveying as clearly as possible what is going on. In these, destruction is not at all abstract; information and form work together, to devastating effect.

“J. Henry Fair: Abstraction of Destruction” runs through Feb. 11 at the Gerald Peters Gallery, 24 East 78th Street, Manhattan; (212) 628-9760, gpgallery.com.

http://www.nytimes.com/2011/01/14/arts/design/14earth.html?ref=design

A Pink-Ribbon Race, Years Long

By RONI CARYN RABIN

Christopher Capozziello for The New York Times

PATIENT Suzanne Hebert at home in Connecticut with her children, Dominic and Grace.

By the time Suzanne Hebert realized that her doctor was wrong and that the hard lump in her breast wasn’t a normal part of breast-feeding, the tumor was the size of a stopwatch and the cancer had spread to her spine.

Still, Dr. Hebert, an optometrist in South Windsor, Conn., went to her first support group meeting thinking that as bad as things were, at least breast cancer was not an obscure disease; she would not be alone.

But the room was filled with women who had early localized cancers. Some had completed chemotherapy years ago; they were “survivors.” When one newcomer asked Dr. Hebert for her story, she couldn’t bring herself to tell the truth.

Although great strides have been made in the treatment of breast cancer, recent events, including Elizabeth Edwards’s death last month and the government’s decision to withdraw approval of the drug Avastin as a treatment for metastatic breast cancer, have drawn attention to the limits of medical progress — and to the nearly 40,000 patients who die of the disease each year.

Of women who are given a diagnosis of breast cancer, only 4 percent to 6 percent are at Stage 4 at the time of diagnosis, meaning the cancer has already metastasized, or spread, to distant sites in the body. But about 25 percent of those with early-stage disease develop metastatic forms, with an estimated 49,000 new diagnoses each year, according to the American Cancer Society.

At least 150,000 Americans are estimated to be living with metastatic breast cancer — including Dr. Hebert, now 45, who got her diagnosis six years ago and now works with the nonprofit Metastatic Breast Cancer Network.

Stage 4 breast cancer can be treated, but it is considered incurable. Depending on the type of tumor, patients may live for many years — working, raising children, starting nonprofit foundations, doing yoga and even running half-marathons.

But theirs are not pink-ribbon lives: They live from scan to scan, in three-month gulps, grappling with pain, fatigue, depression, crippling medical costs and debilitating side effects of treatment, hoping the current therapy will keep the disease at bay until the next breakthrough drug comes along, or at least until the family trip to Disney World.

“This woman had just been diagnosed,” Dr. Hebert said of her support-group encounter, “and I couldn’t bring myself to tell her: ‘I have it in my bones. I have it in several parts of my body. My treatment is never going to end.’

“It was a horrible moment,” she went on. “I had nothing in common with them. I was what scared them.”

While perceptions of the disease may have changed in recent years, the number of deaths it causes has remained fairly static, said Dr. Eric P. Winer, director of the breast oncology center at the Dana-Farber Cancer Institute in Boston.

“All too often, when people think about breast cancer, they think about it as a problem, it’s solved, and you lead a long and normal life; it’s a blip on the curve,” he said. “While that’s true for many people, each year approximately 40,000 people die of breast cancer — and they all die of metastatic disease. You can see why patients with metastatic disease may feel invisible within the advocacy community.”

Many patients keep the spread of their disease private, and Mrs. Edwards’s 2007 announcement that her cancer had become “incurable” was an inspiration to many — it was also why her death was such a blow.

“She put a face on the disease,” Dr. Hebert said. “I could explain my situation to people.

“The day she stopped treatment was very emotional,” she added, “because I’ve been telling people, ‘I’m like Elizabeth Edwards.’ ”

Mrs. Edwards’s husband, the former senator and presidential candidate John Edwards, referred to her illness as a “chronic” disease, implying that it was manageable. In fact, however, the median life expectancy for patients with metastatic breast cancer is just 26 months, and fewer than 1 in 4 survive for more than five years.

But because breast cancer is a complex illness that encompasses many subtypes, generalizations are tricky. New drug treatments are keeping some patients alive for a decade or more, even after the disease has spread.

And they can enjoy a higher quality of life than patients did in the past, because treatments are better focused and have fewer side effects. The prognosis has especially improved for patients with certain aggressive cancers, like HER2-positive, that were considered extremely difficult to treat until recently.

“Over the past 20 years, we’ve had probably 15 new drugs approved by the F.D.A., and each of them adds an incremental amount to the length of life,” said Dr. Gabriel N. Hortobagyi, director of the breast cancer research program at M. D. Anderson Cancer Center in Houston.

The average patient may receive eight or 10 different treatment regimens in sequence, he said.

“I would never tell a patient with a newly diagnosed metastasis that there is nothing I can do,” he said, “because there are actually dozens of things I can do — whether it is hormone therapy, whether it is Herceptin, whether it is irradiation therapy or single-agent chemotherapy — and there are many things we can do to control symptoms and prevent complications.”

But treatment at these advanced stages is an art as well as a science, involving “a certain amount of trial and error,” said Dr. James L. Speyer, director of the Cancer Center at N.Y.U. Langone Medical Center. “You try a treatment, based on your best knowledge about the patient and the features of the cancer, and if it’s working, great — you continue it unless the side effects are a problem,” he said. “And if it’s not working, you stop and try again.”

Patricia McWaters, who lives in Missouri City, Tex., a suburb of Houston, had frequent mammograms but did not learn she had breast cancer until it appeared in her liver and spine in 2003. Now 71, she has had nine treatments at M. D. Anderson, including combination chemotherapy, drugs that block estrogen production, more chemotherapy followed by a chemo drug that comes in pills, and now a new, more aggressive drug. “Whenever anything quits working and a spread is starting, then we change,” she said.

In some cases, metastatic breast cancer appears to go into long-term remission, but experts say that in most cases it will recur, eventually becoming resistant to all treatment.

Since it is metastasis that ultimately kills, some advocates want more resources devoted to its study and treatment. Even though many cancer drugs are initially tested on patients with advanced disease, Danny Welch, an expert on metastasis, says only a few hundred scientists in the world are trying to understand the process.

“It’s responsible for 90 percent of the morbidity and mortality, but gets less than 5 percent of the budget,” said Dr. Welch, a senior scientist at the Comprehensive Cancer Center at the University of Alabama at Birmingham, who studies genes that suppress metastasis. (Those genes are turned off when cancer is advanced.) “Funding agencies as a rule want to say their research portfolio is successful — they want a return on their investment very quickly.”

Patients with metastatic disease are frustrated because they are often barred from clinical trials if a certain number of chemotherapy regimens have failed to work for them.

When they find a drug their tumor responds to, they can achieve a remarkable degree of stability. Pat Strassner, 61, of Severna Park, Md., had breast cancer that spread to her lung and hip in 2007, but she has had success with a chemo pill called Xeloda for the past three years. The drug has side effects, including drying out the skin on her hands and feet so much that they crack and bleed, but Ms. Strassner is still able to enjoy running, and last month she completed a half-marathon with her husband in Charlotte, N.C.

Other drugs are proving more problematic. Last month the Food and Drug Administration announced that it was withdrawing approval for Avastin in metastatic breast cancer after four studies found that it did not prolong survival and led to life-threatening risks, including heart attack and heart failure.

Christi Turnage, 48, of Madison, Miss., who has been using Avastin for two and a half years, said she was terrified about losing access to it. “This drug is literally keeping me alive,” she said.

This kind of uncertainty keeps many patients from throwing themselves wholeheartedly into the ethos of hope and empowerment that helps sustain many women with less aggressive forms of the disease.

Dr. Hebert says that while the pink-ribbon campaign has raised awareness about breast cancer, it masks a relentless killer.

“People like the pretty story with the happy ending,” she said. “We don’t have the happy ending.

“You always hear stories about women who ‘battled it’ and ‘how courageous’ they were. Cancer doesn’t care if you’re courageous. It’s an injustice to all of us who have this. There are women who are no less strong and no less determined to be here, and they’ll be dead in two years.” http://www.nytimes.com/2011/01/18/health/18cancer.html?_r=1&nl=health&emc=healthupdateema2&pagewanted=print

Tests Show Promise for Early Diagnosis of Alzheimer’s

By GINA KOLATA

Researchers are reporting major advances toward resolving two underlying problems involving Alzheimer’s disease: How do you know if someone who is demented has it? And how can you screen the general population to see who is at risk?

One study, reported in The New York Times in June, evaluated a new type of brain scan that can detect plaques that are uniquely characteristic of Alzheimer’s disease.

On Thursday, an advisory committee to the Food and Drug Administration, which requested the study, will review it and make a recommendation on whether to approve the test for marketing.

The second study asked whether a blood test could detect beta amyloid, the protein fragment that makes up Alzheimer’s plaque, and whether blood levels of beta amyloid were associated with a risk of memory problems. The answer was yes, but the test is not ready to be used for screening.

Both studies are to be published in The Journal of the American Medical Association on Wednesday.

“These are two very important papers, and I don’t always say that,” said Neil S. Buckholtz, chief of the Dementias of Aging Branch of the National Institute on Aging.

The new brain scan involved a dye developed by Avid Radiopharmaceuticals, now owned by Eli Lilly. The dye attaches to plaque in patients’ brains, making it visible on PET scans.

The study by Avid involved 152 people nearing the end of life who agreed to have a brain scan and a brain autopsy after they died. The investigators wanted to know whether the scans would show the same plaques as the autopsies.

Twenty-nine of the patients in the study died and had brain autopsies. In 28 of them, the scan matched the autopsy results. Alzheimer’s had been diagnosed in half of the 29 patients; the others had received other diagnoses.

One subject who was thought to have had Alzheimer’s did not have plaques on the scans or on the autopsy — the diagnosis was incorrect. Two other patients with dementia turned out to have had Alzheimer’s although they had received diagnoses of other diseases.

The study also included 74 younger and healthier people who underwent the scans. They were not expected to have plaques, and in fact they did not.

If the F.D.A. approves the scan, medical experts said they would use it to help determine whether a patient with dementia had Alzheimer’s. If no plaques were found, they would have to consider other diagnoses.

The Avid scan will also be used — and is being used — by companies that are testing drugs to remove amyloid from the brain. The scans can show if the drugs are working. And a large study sponsored in part by the National Institute on Aging is scanning healthy people and following them to see if the scans predict the risk of developing Alzheimer’s disease.

The other study, on a blood test for Alzheimer’s, indicates that such a test may work. But researchers agree that it is not ready for clinical use.

The study, by Dr. Kristine Yaffe of the University of California, San Francisco, and the San Francisco Veterans Affairs Medical Center, included 997 subjects whose average age was 74 when the study began. They were followed for nine years and given memory tests and a blood test looking for beta amyloid.

Beta amyloid is in the brain and flows into the spinal fluid. From there, it can enter the bloodstream. When amyloid accumulates in plaque, its levels in spinal fluid go down. That indicates risk for Alzheimer’s disease.

Dr. Yaffe and her colleagues asked whether they could show similar Alzheimer’s risk by measuring beta amyloid levels in blood. It is difficult; amyloid levels in blood are much lower than in spinal fluid. And there appear to be other sources of amyloid in blood, confounding the test results.

“I was interested in the blood test because I think it’s been given a bit of a write-off,” Dr. Yaffe said. Some studies concluded that it worked, but just as many said that it did not. She wanted to try again with a large study following people for a long period and using a sensitive test for amyloid.

She divided the subjects into groups and found that those with the most amyloid had the lowest risk of a decline in their mental abilities, and those with the least had the greatest risk. But other factors also played a role. Low levels of the protein were not as useful in predicting mental decline in people who had more education and were more literate. People carrying the APOE e4 gene variant, which is associated with an increased risk of Alzheimer’s, seemed to be at greater risk of mental decline even if their blood levels of amyloid were high.

That does not necessarily mean that the more people use their minds the more they will be protected from Alzheimer’s disease, researchers note. But, Dr. Yaffe said, that idea needs more study.

The test’s precision, said Dr. Clifford Jack of the Mayo Clinic, was “not crisp enough” to accurately predict whether an individual was likely to show an intellectual decline over the decade after the test was given.

Still, said Dr. Ronald C. Petersen, chairman of the medical and scientific advisory council of the Alzheimer’s Association, there is an increasing need for such a test. If treatments are developed to slow or stop the disease, it will be important to start them before irreversible damage is done.

Current tests of Alzheimer’s risk — spinal taps and MRI and PET scans — are not suitable to screen large numbers of people. “They are either expensive or invasive, or both,” Dr. Petersen said. “We need a cheap and safe population screening tool, like cholesterol for cardiology.”

A blood test could be ideal, and this study is an encouraging step forward, he said. The idea might be to screen with such a test and then follow up with those who test positive, giving them a PET scan, for example.

But, Dr. Petersen said, “this study is not sufficiently convincing that this is the answer.” http://www.nytimes.com/2011/01/19/health/research/19alzheimers.html?ref=health

Close Look at Orthotics Raises a Welter of Doubts

By GINA KOLATA

Jordan Siemens/Getty Images

Benno M. Nigg has become a leading researcher on orthotics — those shoe inserts that many athletes use to try to prevent injuries. And what he has found is not very reassuring.

For more than 30 years Dr. Nigg, a professor of biomechanics and co-director of the Human Performance Lab at the University of Calgary in Alberta, has asked how orthotics affect motion, stress on joints and muscle activity.

Do they help or harm athletes who use them? And is the huge orthotics industry — from customized shoe inserts costing hundreds of dollars to over-the-counter ones sold at every drugstore — based on science or on wishful thinking?

His overall conclusion: Shoe inserts or orthotics may be helpful as a short-term solution, preventing injuries in some athletes. But it is not clear how to make inserts that work. The idea that they are supposed to correct mechanical-alignment problems does not hold up.

Joseph Hamill, who studies lower-limb biomechanics at the University of Massachusetts in Amherst, agrees.

“We have found many of the same results,” said Dr. Hamill, professor of kinesiology and the director of the university’s biomechanics laboratory. “I guess the main thing to note is that, as biomechanists, we really do not know how orthotics work.”

Orthotists say Dr. Nigg’s sweeping statement does not take into account the benefits their patients perceive.

The key measure of success, said Jeffrey P. Wensman, director of clinical and technical services at the Orthotics and Prosthetics Center at the University of Michigan, is that patients feel better.

“The vast majority of our patients are happier having them than not,” he said about orthotics that are inserted in shoes.

Seamus Kennedy, president and co-owner of Hersco Ortho Labs in New York, said there was an abundance of evidence — hundreds of published papers — that orthotics can treat and prevent “mechanically induced foot problems,” leading to common injuries like knee pain, shinsplints and pain along the bottom of the foot.

“Orthotics do work,” Mr. Kennedy said. “But choosing the right one requires a great deal of care.”

Yet Scott D. Cummings, president of the American Academy of Orthotists and Prosthetists, says the trade is only now moving toward becoming a science. So far, most of the focus in that direction has been on rigorously assessing orthotics and prosthetics for other conditions, like scoliosis, with less work on shoe orthotics for otherwise healthy athletes.

“Anecdotally, we know what designs work and what designs don’t work” for foot orthotics, said Mr. Cummings, who is an orthotist and prosthetist at Next Step in Manchester, N.H. But when it comes to science and rigorous studies, he added, “comparatively, there isn’t a whole lot of evidence out there.”

Dr. Nigg would agree.

In his studies, he found there was no way to predict the effect of a given orthotic. Consider, for example, an insert that pushes the foot away from a pronated position, in which it rolls excessively inward. You might think it would have the same effect on everyone who pronates, but it does not.

One person might respond by increasing the stress on the outside of the foot, another on the inside. Another might not respond at all, unconsciously correcting the orthotic’s correction.

“That’s the first problem we have,” Dr. Nigg said. “If you do something to a shoe, different people will react differently.”

The next problem is that there may be little agreement among orthotics makers about what sort of insert to prescribe.

In one study discussed in his new book, “Biomechanics of Sport Shoes,” Dr. Nigg sent a talented distance runner to five certified orthotics makers. Each made a different type of insert to “correct” his pronation.

The athlete wore each set of orthotics for three days and then ran 10 kilometers, about 6 miles. He liked two of the orthotics and ran faster with them than with the other three. But the construction of the two he liked was completely different.

Then what, Dr. Nigg asked in a series of studies, do orthotics actually do?

They turn out to have little effect on kinematics — the actual movement of the skeleton during a run. But they can have large effects on muscles and joints, often making muscles work as much as 50 percent harder for the same movement and increasing stress on joints by a similar amount.

As for “corrective” orthotics, he says, they do not correct so much as lead to a reduction in muscle strength.

In one recent review of published papers, Dr. Nigg and his colleagues analyzed studies on orthotics and injury prevention. Nearly all published studies, they report, lacked scientific rigor. For example, they did not include groups that, for comparison, did not receive orthotics. Or they discounted people who dropped out of the study, even though dropouts are often those who are not benefiting from a treatment.

Being generous about studies with design flaws that could overstate effects, Dr. Nigg and his colleagues concluded that custom-made orthotics could help prevent and treat plantar fasciitis, a common injury to the band of tissue along the bottom of the foot, and stress fractures of the tibia, along the shin. They added, though, that the research was inadequate for them to have confidence in those conclusions.

Dr. Nigg also did his own study with 240 Canadian soldiers. Half of them got inserts and the others, for comparison, did not.

Those who got inserts had a choice of six different types that did different things to foot positioning. Each man chose the insert he found most comfortable and wore it for four months. The men selected five of the six inserts with equal frequency.

The findings were somewhat puzzling: While the group that used inserts had about half as many injuries — defined as pain that kept them from exercising for at least half a day — there was no obvious relation between the insert a soldier chose and his biomechanics without it. That’s why Dr. Nigg says for now it is difficult to figure out which orthotic will help an individual. The only indication seems to be that a comfortable orthotic might be better than none at all, at least for the activities of people in the military.

So where does this leave people like Jason Stallman, my friend and colleague at The New York Times? Jason has perfectly flat feet — no arch. He got his first pair of orthotics at 12 or 13 and has worn them ever since, for both walking and running. About a year ago he decided to try going without them in his everyday life; he still wears them when he runs. Every medical specialist Jason has seen has tried to correct his flat feet, but with little agreement on how to do it.

Every new podiatrist or orthopedist, he told me, would invariably look at his orthotics and say: “Oh, these aren’t any good. The lab I use makes much better ones. Your injury is probably linked to these poor-fitting orthotics.” So he tried different orthotic styles, different materials, different orthotics labs with every new doctor.

That is a typical story, Dr. Nigg says. In fact, he adds, there is no need to “correct” a flat foot. All Jason needs to do is strengthen his foot and ankle muscles and then try running without orthotics. Dr. Nigg says he always wondered what was wrong with having flat feet. Arches, he explains, are an evolutionary remnant, needed by primates that gripped trees with their feet.

“Since we don’t do that anymore, we don’t really need an arch,” he wrote in an e-mail. “Why would we? For landing — no need. For the stance phase — no need. For the takeoff phase — no need. Thus a flat foot is not something that is bad per se.”

So why shouldn’t Jason — or anyone, for that matter — just go to a store and buy whatever shoe feels good, without worrying about “correcting” a perceived biomechanical defect?

“That is exactly what you should do,” Dr. Nigg replied. http://www.nytimes.com/2011/01/18/health/nutrition/18best.html?ref=health

When Self-Knowledge Is Only the Beginning

By RICHARD A. FRIEDMAN, M.D.

It is practically an article of faith among many therapists that self-understanding is a prerequisite for a happy life. Insight, the thinking goes, will free you from your psychological hang-ups and promote well-being.

Perhaps, but recent experience makes me wonder whether insight is all it’s cracked up to be.

Not long ago, I saw a young man in his early 30s who was sad and anxious after being dumped by his girlfriend for the second time in three years. It was clear that his symptoms were a reaction to the loss of a relationship and that he was not clinically depressed.

“I’ve been over this many times in therapy,” he said. He had trouble tolerating any separation from his girlfriends. Whether they were gone for a weekend or he was traveling for work, the result was always the same: a painful state of dysphoria and anxiety.

He could even trace this feeling back to a separation from his mother, who had been hospitalized for several months for cancer treatment when he was 4. In short, he had gained plenty of insight in therapy into the nature and origin of his anxiety, but he felt no better.

What therapy had given this young man was a coherent narrative of his life; it had demystified his feelings, but had done little to change them.

Was this because his self-knowledge was flawed or incomplete? Or is insight itself, no matter how deep, of limited value?

Psychoanalysts and other therapists have argued for years about this question, which gets to the heart of how therapy works (when it does) to relieve psychological distress.

Theoretical debates have not settled the question, but one interesting clue about the possible relevance of insight comes from comparative studies of different types of psychotherapy — only some of which emphasize insight.

In fact, when two different types of psychotherapies have been directly compared — and there are more than 100 such studies — it has often been hard to find any differences between them.

Researchers aptly call this phenomenon the Dodo effect, referring to the Dodo bird in Lewis Carroll’s “Alice in Wonderland” who, having presided over a most whimsical race, pronounces everyone a winner.

The meaning for patients is clear. If you’re depressed, for example, you are likely to feel better whether your therapist uses a cognitive-behavioral approach, which aims to correct distorted thoughts and feelings, or an insight-oriented psychodynamic therapy.

Since the common ingredient in all therapies is not insight, but a nonspecific human bond with your therapist, it seems fair to say that insight is neither necessary nor sufficient to feeling better.

Not just that, but sometimes it seems that insight even adds to a person’s misery.

I recall one patient who was chronically depressed and dissatisfied. “Life is just a drag,” he told me and then went on to catalog a list of very real social and economic ills.

Of course, he was dead-on about the parlous state of the economy, even though he was affluent and not directly threatened by it. He was a very successful financial analyst, but was bored with his work, which he viewed as mechanical and personally unfulfilling.

He had been in therapy for years before I saw him and had come to the realization that he had chosen his profession to please his critical and demanding father rather than follow his passion for art. Although he was insightful about much of his behavior, he was clearly no happier for it.

When he became depressed, though, this insight added to his pain as he berated himself for failing to stand up to his father and follow his own path.

Researchers have known for years that depressed people have a selective recall bias for unhappy events in their lives; it is not that they are fabricating negative stories so much as forgetting the good ones. In that sense, their negative views and perceptions can be depressingly accurate, albeit slanted and incomplete. A lot of good their insight does them!

It even makes you wonder whether a little self-delusion is necessary for happiness.

None of this is to say that insight is without value. Far from it. If you don’t want to be a captive of your psychological conflicts, insight can be a powerful tool to loosen their grip. You’ll probably feel less emotional pain, but that’s different from happiness.

Speaking of which, my chronically depressed patient came to see me recently looking exceedingly happy. He had quit his job and taken a far less lucrative one in the art world. We got to talking about why he was feeling so good. “Simple,” he said, “I’m doing what I like.”

I realized then that I am pretty good at treating clinical misery with drugs and therapy, but that bringing about happiness is a stretch. Perhaps happiness is a bit like self-esteem: You have to work for both. So far as I know, you can’t get an infusion of either one from a therapist.

Dr. Richard A. Friedman is a professor of psychiatry at Weill Cornell Medical College in Manhattan. http://www.nytimes.com/2011/01/18/health/views/18mind.html?ref=health

Heavy Doses of DNA Data, With Few Side Effects

By JOHN TIERNEY

When companies tried selling consumers the results of personal DNA tests, worried doctors and assorted health experts rushed to the public’s rescue. What if the risk assessments were inaccurate or inconsistent? What if people misinterpreted the results and did something foolish? What if they were traumatized by learning they were at high risk for Alzheimer’s or breast cancer or another disease?

The what-ifs prompted New York State to ban the direct sale of the tests to consumers. Members of Congress denounced the tests as “snake oil,” and the Food and Drug Administration has recently threatened the companies with federal oversight. Members of a national advisory commission concluded that personal DNA testing needed to be carefully supervised by experts like themselves.

But now, thanks to new research, there’s a less hypothetical question to consider: What if the would-be guardians of the public overestimated the demand for their supervisory services?

In two separate studies of genetic tests, researchers have found that people are not exactly desperate to be protected from information about their own bodies. Most people say they’ll pay for genetic tests even if the predictions are sometimes wrong, and most people don’t seem to be traumatized even when they receive bad news.

“Up until now there’s been lots of speculation and what I’d call fear-mongering about the impact of these tests, but now we have data,” says Dr. Eric Topol, the senior author of a report published last week in The New England Journal of Medicine. “We saw no evidence of anxiety or distress induced by the tests.”

He and colleagues at the Scripps Translational Science Institute followed more than 2,000 people who had a genomewide scan by the Navigenics company. After providing saliva, they were given estimates of their genetic risk for more than 20 different conditions, including obesity, diabetes, rheumatoid arthritis, several forms of cancer, multiple sclerosis and Alzheimer’s. About six months after getting the test results, delivered in a 90-page report, the typical person’s level of psychological anxiety was no higher than it had been before taking the test.

Although they were offered sessions, at no cost, with genetic counselors who could interpret the results and allay their anxieties, only 10 percent of the people bothered to take advantage of the opportunity. They apparently didn’t feel overwhelmed by the information, and it didn’t seem to cause much rash behavior, either.

In fact, the researchers were surprised to see how little effect it had. While about a quarter of the people discussed the results with their personal physicians, they generally did not change their diets or their exercise habits even when they’d been told these steps might lower some of their risks.

“We had theorized there would be an improvement in lifestyle, but we saw no sign whatsoever,” Dr. Topol says. “Instead of turning inward and becoming activists about their health, they turned to medical screening. They had a significant increase in the intent to have a screening test, like a colonoscopy if they were at higher risk for colon cancer.”

The people in the study chose on their own to pay for the tests — about $225, a steep discount from the retail price at the time — so they weren’t necessarily representative of the general population. But in another study, published in Health Economics, researchers surveyed a representative sample of nearly 1,500 people and found most people willing to take a test even if it didn’t perfectly predict their risks for disease.

About 70 percent of the respondents were willing to take even an imperfect test for genetic risks of Alzheimer’s, and more than three-quarters were willing to take such tests for arthritis, breast cancer and prostate cancer. Most people also said they’d be willing to spend money out of their own pocket for the test, typically somewhere between $300 and $600.

A minority of the respondents didn’t want the tests even if they were free, and explained that they didn’t want to live with the knowledge. But the rest attached much more value to the tests than have the experts who have been warning of the dangers.

“The medical field has been paternalistic about these tests,” says Peter J. Neumann, the lead author of the study, who is director of the Center for the Evaluation of Value and Risk in Health at Tufts Medical Center. “We’ve been saying that we shouldn’t give people this information because it might be wrong or we might worry them or we can’t do anything about it. But people tell us they want the information enough to pay for it.”

Why do experts differ from consumers on this issue? You could argue that the experts are better informed, but you could also argue that some of them are swayed by their own self-interest. Traditionally, people have had to go through a doctor to get a test, which could mean paying a fee to the physician as well as to a licensed genetic counselor. Buying tests directly from a company like Navigenics or 23andMe can cut out hundreds of dollars in fees to the middlemen.

To experts, the tests may seem unnecessary or wasteful when there’s nothing doctors can do to prevent the disease. But consumers have other reasons to want the results. They may find even bad news preferable to the anxious limbo of uncertainty; they may consider an imprecise test better than nothing at all.

“We should recognize that consumers might reasonably want the information for nonmedical reasons,” Dr. Neumann says. “People value it for its own sake, and because they feel more in control of their lives.”

The traditional structure of American medicine gives control to doctors and to centralized regulators who make treatment decisions for everyone. These genetic tests represent a different philosophy, and point toward a possible future with people taking more charge of their own care and seeking treatments customized to their bodies. “What we have today is population medicine at the 30,000-foot level,” says Dr. Topol. “These tests are the beginning of a new way to individualize medicine. One of the most immediate benefits is being able to use the genetic knowledge to tweak the kind of drugs people take, like choosing among statins and beta blockers to minimize side effects.”

That may be the self-empowered future, but for now residents of New York still can’t be trusted to buy these tests directly. It’s paternalism run amok, says Lee Silver, a professor of molecular biology and of public policy at Princeton, who is developing another variety of genetic test for consumers.

“It seems like a no-brainer,” Dr. Silver says, “that any competent adult should be free to purchase an analysis of their own DNA as long as they have been informed in advance of what could potentially be revealed in the analysis. You should have access to information about your own genome without a permission slip from your doctor.”

The paternalists argue that it’s still unclear how to interpret some of these genetic tests — and it is, of course. But if you ban these tests, or effectively eliminate them for most people by imposing expensive and time-consuming restrictions, how does that help the public? When it comes to knowing their own genetic risks, most people seem to prefer imperfect knowledge to perfect ignorance. http://www.nytimes.com/2011/01/18/science/18tier.html?ref=health

Uganda: Male Circumcision May Help Protect Sexual Partners Against Cervical Cancer

By DONALD G. McNEIL Jr.

Male circumcision, which has been shown to decrease a man’s risk of contracting the virus that causes AIDS, also appears to help protect his sexual partners against cervical cancer.

In an offshoot of a landmark study of 1,200 heterosexual couples in Uganda involving circumcision and AIDS, researchers reported in The Lancet this month that having a circumcised partner reduced a woman’s risk of catching human papillomaviruses by about 25 percent. Such viruses lead to genital warts and cervical cancer.

The study, led by researchers from Johns Hopkins University, did not last long enough to see how many women actually developed cancer; that can take years or decades.

Cervical cancer was once a major killer in wealthy countries, but because of Pap smears it is now much rarer. In poor countries, it kills almost 250,000 women a year, according to the National Cancer Institute.

Papilloma vaccines like Gardasil and Cervarix provide much greater protection than circumcision does, but they are too expensive for most poor countries.

Doctors have long suspected that having circumcised husbands protected women against cervical cancer. A 1901 study in The Lancet noted that few British Jewish women died of it (although it erroneously concluded that they were protected by avoiding bacon). Later research in Israel found that stronger protection comes from a variant gene common in Jewish women all over the world. http://www.nytimes.com/2011/01/18/health/18global.html?ref=health

Lifting a Veil of Fear to See a Few Benefits of Fever

By PERRI KLASS, M.D.

Fever is common, but fever is complicated. It brings up science and emotion, comfort and calculation.

As a pediatrician, I know fever is a signal that the immune system is working well. And as a parent, I know there is something primal and frightening about a feverish child in the night.

So those middle-of-the-night calls from worried parents, so frequent in every pediatric practice, can be less than straightforward. A recent paper in The Journal of the American Medical Association pointed out one reason, and a longstanding discussion about parental perceptions reminds us of the emotional context.

The JAMA study looked at over-the-counter medications for children, including those marketed for treating pain and fever: how they are labeled, and whether the droppers and cups and marked spoons in the packages properly reflect the doses recommended on the labels.

The article concluded that many medications are not labeled clearly, that some provide no dosing instrument, and that the instruments, if included, are not marked consistently. (A dosing chart might recommend 1.5 milliliters, but the dropper has no “1.5 ml” mark.)

“Basically, the main message of the paper is that the instructions on the boxes and bottles of over-the-counter medications are really confusing,” said the lead author, Dr. H. Shonna Yin of New York University Medical Center, who is a colleague of mine and an assistant professor of pediatrics.

Too small a dose of an antipyretic (fever medicine) may be ineffective; too much can be toxic. But the dose depends on the child’s weight, which of course changes over time, and on the concentration of the medicine, which depends on whether it is acetaminophen or ibuprofen, children’s liquid or infant drops.
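
The arithmetic behind that warning is simple, but it has two moving parts. A minimal sketch, with placeholder numbers that are purely illustrative and not dosing advice, shows how the volume to give depends on both the child’s weight and the concentration printed on the particular bottle in hand:

def dose_volume_ml(weight_kg, dose_mg_per_kg, concentration_mg_per_ml):
    """Convert a weight-based dose into the volume of liquid to measure out.

    Every value must come from the child's current weight and the label of
    the specific bottle in hand; infant drops and children's liquid of the
    same drug can have very different concentrations.
    """
    total_mg = weight_kg * dose_mg_per_kg         # the dose scales with weight
    return total_mg / concentration_mg_per_ml     # the volume depends on the bottle

# Hypothetical numbers, for illustration only:
# a 10 kg toddler, a label listing 10 mg per kg, and a 20 mg/ml liquid
print(dose_volume_ml(10, 10, 20))  # -> 5.0 ml

The same weight with a more concentrated infant formulation would call for a much smaller volume, which is exactly the confusion the nurse practitioner describes below.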

“We always make them get the bottle,” said Kathleen Martinez, a pediatric nurse practitioner who is clinical coordinator of the After Hours Telephone Care Program at the Children’s Hospital in Aurora, Colo. “What do you have at home? Is it the ibuprofen infant drops or the children’s? Have the bottle in hand and verify the concentration.

“And then we have to verify the instrument, and then we give the right dose based on weight. It’s time-consuming, and then of course it changes with the weight, so the poor parents have to call back.”

Concerns about fever — how worried should I be, and how much medicine should I give? — account for many of the calls that parents make at night to their children’s doctors. For me, these tricky measurement questions evoke memories of many conversations, often from a crowded, noisy place (my own child’s Little League game, the supermarket), trying to answer a question about a small child with fever.

One recent night, I talked to the mother of a toddler with fever and abdominal pain. I was more worried about the pain, and about whether he was drinking enough to stay hydrated; she was more worried about the fever, and no matter what I asked she kept coming back to that number on the thermometer.

Finally, I got so worried the child was dehydrated that I told her to go to the emergency room. And when she got there, she told them she was scared because the child had a high fever.

Fever can indeed be scary, and any fever in an infant younger than 3 months is cause for major concern because of the risk of serious bacterial infections. But in general, in older children who do not look very distressed, fever is positive evidence of an active immune system, revved up and helping an array of immunological processes work more effectively.

Of course, that may not be reassuring to a parent whose child’s temperature is spiking at midnight. (Fevers tend to go up in the late afternoon and evening, as do normal body temperatures.)

In 1980, Dr. Barton D. Schmitt, a professor of pediatrics at the University of Colorado School of Medicine, published a now classic article about what he termed “fever phobia.” Many parents, he wrote, believed that untreated fevers might rise to critical levels and that even moderate and low-grade fevers could have serious neurological effects (that is, as parents we tend to suspect that our children’s brains may melt).

A group at Johns Hopkins revisited Dr. Schmitt’s work in 2001, publishing a paper in the journal Pediatrics, “Fever Phobia Revisited: Have Parental Misconceptions About Fever Changed in 20 Years?” Their conclusion was that the fears and misconceptions persisted.

In fact, fever does not harm the brain or the body, though it does increase the need for fluids. And even untreated, fevers rarely rise higher than 104 or 105 degrees.

As many as 5 percent of children are at risk for seizures with fever. These seizures can be terrifying to watch but generally are not harmful and do not cause epilepsy. Still, a child who has a first febrile seizure should be checked by a physician. (These seizures tend to run in families, and children who have had one may well have another.)

“Parents are telling us that they’re worried that fever can cause brain damage or even death in their children,” said Dr. Michael Crocetti, an assistant professor of pediatrics at Johns Hopkins and lead author of the 2001 study. “I’ve been doing this for a long time, and it seems to me that even though I do a tremendous amount of education about fever, its role in illness, its benefit in illness, it doesn’t seem to be something they keep hold of from visit to visit.”

Dr. Janet Serwint, another author of the study and a professor of pediatrics at Johns Hopkins, agreed. “I personally think there should be much more education about this at well visits,” she told me, adding that parents need to understand “the helpfulness of fever — how fever actually is a well-orchestrated healthy response of our body.”

Other studies have looked at attitudes among medical personnel, who can be just as worried about fever as parents.

“Doctors are part of the problem,” Dr. Schmitt said. Some of the phobia “comes from doctors and nurses,” he added — “doctors and nurses who weren’t taught about fever and all the wondrous things fever does in the animal kingdom.” http://www.nytimes.com/2011/01/11/health/11klass.html?ref=health

Recalling a Fallen Star’s Legacy in High-Energy Particle Physics

By DENNIS OVERBYE

The machine known as the Tevatron is four miles around. Bison graze nearby on the 6,800 acres of former farmland occupied by the Fermi National Accelerator Laboratory in Batavia, Ill. Occasionally, physicists run races around the top of it.

It was turned on in 1983 to the sound of protesters who worried that its high-energy collisions between protons and antiprotons could bring about the end of the world or perhaps the whole universe.

For the next three decades it reigned as a symbol of human curiosity and of American technological might, becoming the biggest, grandest, most violent physics experiment of its time, devouring a small city’s worth of electricity to collide subatomic particles with energies of up to a trillion electron volts apiece in an effort to retrieve forces and laws that prevailed during the Big Bang.

The world as a whole never did end, but for American physicists a small piece of it has now. Last Monday the Department of Energy, which runs Fermilab, as it is known, announced that despite last-minute appeals by physicists, the Tevatron will shut down as scheduled in September.

The news disappointed American physicists who had hoped that three more years of running might give them a glimpse of as yet unobserved phenomena like the Higgs boson, a storied particle said to imbue other particles with mass.

“It’s a shame to shut it down,” said Lisa Randall, a Harvard physicist, who says she thinks the physics community gave up too easily. Dr. Randall organized a group of some 40 theorists to write a letter to the Department of Energy last summer urging it to keep the machine running. A message on her new Twitter account last week broke the news of the decision to shut down the Tevatron.

That leaves the field of future discovery free for the Large Hadron Collider, which started up a year ago outside Geneva at CERN, the European Organization for Nuclear Research, and is now the world champion. The collider is 17 miles around and capable eventually, CERN says, of producing 7 trillion-electron-volt protons. Hobbled with bad electrical connections, it ran at half that energy in 2010.

“How are we going to feel if they find it at the LHC?” Dr. Randall said. “The Tevatron had the capacity to give us complementary information.”

This moment has been coming ever since 1993, when Congress canceled the Superconducting Supercollider, a physics machine in Texas that would have been the biggest, most powerful machine on the planet. CERN’s collider is expected to dominate physics for the next 20 years.

The impending death of the Tevatron adds to a gloomy time for American science, coming as it does just as NASA has announced that its flagship project, the James Webb Space Telescope, is $1.6 billion over budget and will be years late, knocking the pins out from under hopes of mounting a mission anytime soon to investigate the dark energy that is boosting the expansion of the universe.

Michael Turner, a cosmologist at the University of Chicago and vice president of the American Physical Society, said American scientists were struggling to adjust to a world in which Europe and Asia are attaining parity with the United States. “We are used to dominating in science,” Dr. Turner said. “We seem to be unable to make decisions, and instead continue to chase every opportunity, in the end doing nothing.”

For the last year the Tevatron and the CERN collider have been engaged in a race to discover, among other things, the Higgs. By all accounts the Tevatron, with a 20-year head start, was ahead. Moreover, CERN had been scheduled to shut down for a year in 2012 to fix its machine and bring it up to par. In response to the prospect of the Tevatron extending its run, CERN had been talking recently about postponing its own shutdown for a year.

But the squeeze is on science budgets, and the continuation of the Tevatron was not to be. “Unfortunately the current budgetary climate is very challenging and additional funding has not been identified,” William Brinkman, director of the office of science at the Department of Energy, said in a letter to the High Energy Physics Advisory Panel on Jan. 6.

The Tevatron will be remembered scientifically for the discovery of the top quark, the last missing part of the ensemble that makes up ordinary matter, in 1995, and a host of other intriguing results like the controversial discovery last summer of a particle that goes back and forth between being itself and its evil-twin antiparticle a little faster in one direction than the other, providing a possible clue to why the universe is now made of matter and not antimatter.

Physicists will be analyzing and studying the data from its two big detectors, DZero and the Collider Detector at Fermilab, CDF, for years.

It will also be remembered as a fount of technological development whose influence spread far beyond high-energy physics. The development, in partnership with industry, of superconducting magnets for Fermilab’s machine, said Young-Kee Kim, deputy director of the lab, helped pave the way for cheap M.R.I. machines for hospitals.

Although it is the end for the Tevatron, it is not the end for Fermilab, which helped build the Large Hadron Collider, hosts a control room for one of that accelerator’s gigantic particle detectors and is also home to a thriving cosmology program. The lab has bet its long-term future on a new-generation accelerator program called Project X, which would produce intense proton beams for creating and scrutinizing other particles like neutrinos.

Robert Roser, a Fermilab physicist, said, “I always knew it would be a long shot to run three additional years.” He credited the competition with Fermilab for spurring on the Europeans.

“I believe they made machine progress more rapidly than they would have had we not been part of the landscape in the coming years,” Dr. Roser wrote in an e-mail message. He and others pointed out that the machines were complementary — the Tevatron collides protons and their opposites, antiprotons, while the CERN collider bangs together protons and thus produces slightly different fireballs with different mixtures of particles and radiation coming out of them. Without the Tevatron’s data, Dr. Roser said, it would take CERN longer to confirm the Higgs when and if they finally find it.

Physicists are trained to be unsentimental about facts, theories and machines, but the rest of us are not obliged to be so unsentimental about the Tevatron and what it meant for American science. Robert Wilson, Fermilab’s founding director, was an artist as well as physicist, and he took pains to ensure that the lab’s physical presence was as elegant as the ideas it was built to explore. The Tevatron was buried underground, to shield the world from the radiation of its beams, but Dr. Wilson had a circular berm built over it, so that the ring would be visible. From high in the sky the berm and the roads that circle it make a pattern that might intrigue an alien civilization that had sufficiently acute vision and lure them in.

Some day alien archaeologists could excavate the tunnel in which giant machines replayed the Big Bang and wonder what happened to the people who built it, and what they thought about their place in the universe. http://www.nytimes.com/2011/01/18/science/18collider.html?ref=science

Bending and Stretching Classroom Lessons to Make Math Inspire

By KENNETH CHANG

Erik D. Demaine

SHAPE OF THINGS TO COME Vi Hart looking through an icosahedron made from six balloons (which is computationally optimal).

FARMINGDALE, N.Y. — At the aptly named Tiny Thai restaurant here, a small table, about two and a half feet square, was jammed with a teapot, two plates of curry, a bowl of soup, two cups of tea, two glasses of water, a plate with two egg rolls, a plate of salad and an iPhone.

For most people, this would have been merely an unwieldy lunch.

For Vi Hart, it set her mind pondering the mathematical implications. “There’s a packing puzzle here,” she said. “This is the kind of thing where if you’re accustomed to thinking about these problems, you see them in everything.”

Mathematicians over the centuries have thought long and deep about how tightly things, like piles of oranges, can be packed within a given amount of space.

“Here we’ve got even another layer,” Ms. Hart said, “where you’re allowed to overhang off the edge of your square. So now you have a new puzzle, where maybe you want the big things near the edge because you can fit more of them off the edge before they fall off.”

Ms. Hart — her given name is Victoria, but she has long since dropped the last six letters — has an audacious career ambition: She wants to make math cool.

She effused, “You’re thinking about it, because it’s awesome.”

She calls herself a full-time recreational mathemusician, an off-the-beaten-path choice with seemingly limited prospects. And for most of the two years since she graduated from Stony Brook University, life as a recreational mathemusician has indeed been a meager niche pursuit.

Then, in November, she posted on YouTube a video about doodling in math class, which married a distaste for the way math is taught in school with an exuberant exploration of math as art.

The rapid-fire narration begins, “O.K., let’s say you’re me and you’re in math class, you’re supposed to be learning about exponential functions, but you’re having trouble caring about exponential functions because unfortunately your math class is probably not terribly engaging.”

The video never shows her face, just her hands doodling in a notebook. She talks about binary trees, Hercules cutting off the heads of a mythical hydra (each severed neck grows two new heads, which is the essence of a binary tree), and a fractal pattern known as Sierpinski’s Triangle.
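
Her hydra doodle also doubles as a picture of the exponential growth the class was nominally studying. A minimal sketch, assuming the simplest rule her description implies (every head is cut each round, and each severed neck sprouts two new heads), counts the leaves of the resulting binary tree:

def hydra_heads(rounds, start=1):
    # Each round, every head is cut and replaced by two new ones,
    # so the head count doubles: the leaves of a binary tree.
    heads = start
    for _ in range(rounds):
        heads *= 2
    return heads

print([hydra_heads(n) for n in range(6)])  # [1, 2, 4, 8, 16, 32], i.e. 2**n

Under that rule the count after n rounds is simply 2 to the nth power, which is the exponential function the doodler was supposed to be learning about in the first place.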

She did another about drawing stars (really about geometry and polygons). Then another about doodling snakes (which segues into graph theory, “a subject too interesting to be included in most grade-school curricula,” she says). And another about prime numbers. (“Remember, we use prime numbers to talk to aliens. I’m not making this up.”)

The videos went viral, viewed more than a million times.

“You > Chuck Norris,” gushed a fan on her YouTube page.

At first glance, Ms. Hart’s fascination with mathematics might seem odd and unexpected. She graduated with a degree in music, and she never took a math course in college.

At second glance, the intertwining of art and math seems to be the family business. Her father, George W. Hart, builds sculptures based on geometric forms. His day job until last year was as a computer science professor at Stony Brook; he is now chief of content for the Museum of Mathematics, which is looking to open in Manhattan next year.

The summer Ms. Hart was 13, she tagged along with her father to a computational geometry conference. “And I was hooked, immediately,” she said. “It was so different from school, where you are surrounded by this drudgery and no one is excited about it. Any gathering of passionate people is fun, really no matter what they’re doing. And in this case, it was mathematics.”

In college, she continued attending math conferences and collaborated on a number of papers with Erik D. Demaine, an M.I.T. professor best known for his origami creations.

After finishing her music degree — as a senior, she composed and conducted a seven-part musical piece based on the seven Harry Potter books — “I couldn’t focus on one thing or ever see myself fitting into any little slot where I would have some sort of normal job,” Ms. Hart said. “If I want to spend a week carving fruit up into polyhedra, I want to spend a week carving fruit up into polyhedra, and where am I going to get a job doing that?”

She did indeed spend a week carving fruit into polyhedrons, posting photographs and instructions on her Web site, vihart.com.

Last summer, she became enamored of hyperbolic planes, mathematical surfaces that are typically represented as horse saddles or Pringles chips.

Whereas others make bracelets or necklaces out of beads, Ms. Hart constructed hyperbolic planes out of them. She painted images of hyperbolic planes. She dried slices of fruit, which warped into hyperbolic planes.

“It just wiggles all over the place,” she said of a hyperbolic plane. “People don’t think of it that way, as being like a wild and beautiful thing.”

Such mathematical musings drew modest amounts of interest. In the fall, she was looking over some of her doodles. She thought of taking photographs of them and writing instructions for those, too, but she decided to try something different. She made her first doodling video.

Working by herself, practically embracing a camera on a tripod, she created a video of her doodling seemingly from the point of view of the doodler.

“I want a real first-person view,” she said, “because I want people to feel they can do this. People can. It’s mathematics that anyone can do.”

The ensuing attention has come with job offers and an income. In one week in December, she earned $300 off the advertising revenue that YouTube shares with video creators. She is also happy that her new videos, unlike her early efforts, which drew an audience typical of mathematics research — older and male, mostly — find their biggest demographic, at least among registered users, in teenage girls.

“I just think that’s really awesome,” she said, “because you’ve got girls in middle school and high school who are suddenly enjoying mathematics and enjoying being a little nerdy and smart, and we need that.”

Ms. Hart has not decided her next step. She could accept one of the job offers. She has thought of pursuing a graduate degree in mathematics, although she worries about all the undergraduate courses she would need to catch up on.

“What has become clear just recently is that I have options, and it’s very strange,” she said.

Ultimately, she hopes she can be a Martin Gardner for the Web 2.0 era. Mr. Gardner, who died last year at age 95, wrote mathematics columns for Scientific American and other publications. “I want to be the ambassador of mathematics,” she said.

For the holidays, she took advantage of the musical side of her mathemusician identity, rewriting “The 12 Days of Christmas.”

For example, “On the fourth day of Christmas, my true love gave to me: the smallest possible number of sides on a polyhedron, the number of points that define a plane, the divisor of even numbers and any other number to the power of zero.”

Mathematical translation: polyhedrons have a minimum of four sides, three points define a plane, two is a divisor of all even numbers, and any number raised to the power of zero is one. http://www.nytimes.com/2011/01/18/science/18prof.html?ref=science

A New Econ Core January 10, 2011

Ph.D. programs in economics are built on a common core of courses taken by students of a variety of specialties and philosophies within the field. In theory, this core represents must-know material for everyone in the field. But is it as relevant as it should be? And could it be discouraging creativity?

Those are questions posed by papers presented Friday in Denver at the annual meeting of the American Economic Association.

One paper represents the views of graduate students at leading doctoral programs who were brought together by the Committee on Economic Education (with support from the Teagle Foundation and other groups) to consider whether the core is as effective as it might be. The graduate students found plenty of room for improvement. (The students were from Columbia, Northwestern, and Princeton Universities; the Massachusetts Institute of Technology; and the Universities of Chicago, Pennsylvania and California at Berkeley.)

They offered suggestions for changes in each of the three main parts of the core: macroeconomics, microeconomics and econometrics -- with many of the suggestions broadly focused on issues of relevance.

Students described themselves as “happiest” with the micro portion of the core. But students noted “a blurred understanding of the rationale for using seemingly unrealistic assumptions and models” and said that they would prefer models that relate in clear ways to concrete problems.

The lack of application was seen as a greater problem in macro courses, with the paper noting that “students come out of their first year macro sequences frustrated by the lack of context…. They find it difficult to relate what they learn in the courses to the real-world economy.”

Macro should focus more on real economic institutions and policies and less on abstract models, the students concluded.

With regard to metrics, the students noted that computing has “removed many of the constraints on empirical research” faced by previous generations -- and that this advance suggests that a shift would be appropriate. Rather than trying to produce “econometric theorists,” graduate courses should focus on training “practitioners (and consumers) of econometrics,” the students said.

In a related paper, David Colander, a professor of economics at Middlebury College, argued that what the core really needs is “more creativity.”

He offered a series of suggestions for graduate programs, some of which he acknowledged would be seen as “radical”:

• Professors could use the core to introduce students “to the wide range of cutting-edge techniques and approaches” used in the field, not just as a time to teach “a specific set of techniques.”

• In a recommendation similar to those of the students, professors were urged to add context to the core courses, perhaps through creating a section of each core course focused on how various techniques could actually be used.

• Programs could start a “creativity day” each semester in which courses are called off and students and faculty work together on different kinds of economic problems that are of interest, but that don’t come up in the standard curriculum.

• Departments should organize more coffee hours and beer nights.

— Scott Jaschik

http://www.insidehighered.com/layout/set/print/news/2011/01/10/new_ideas_for_the_core_for_graduate_students_in_economics

'Academically Adrift' January 18, 2011

If the purpose of a college education is for students to learn, academe is failing, according to Academically Adrift: Limited Learning on College Campuses, a book being released today by University of Chicago Press.

The book cites data from student surveys and transcript analysis to show that many college students have minimal classwork expectations -- and then it tracks the academic gains (or stagnation) of 2,300 students of traditional college age enrolled at a range of four-year colleges and universities. The students took the Collegiate Learning Assessment (which is designed to measure gains in critical thinking, analytic reasoning and other "higher level" skills taught at college) at various points before and during their college educations, and the results are not encouraging:

* 45 percent of students "did not demonstrate any significant improvement in learning" during the first two years of college.

* 36 percent of students "did not demonstrate any significant improvement in learning" over four years of college.

* Those students who do show improvements tend to show only modest ones. Students improved on average only 0.18 standard deviations over the first two years of college and 0.47 over four years. What this means is that a student who entered college in the 50th percentile of students in his or her cohort would move up to the 68th percentile four years later -- but that's the 68th percentile of a new group of freshmen who haven't experienced any college learning. (The short calculation below works through this arithmetic.)
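The percentile translation in the last point is a straightforward normal-distribution calculation. Here is a minimal sketch in Python, assuming the assessment's score gains are roughly normally distributed (an illustrative assumption, not something the book specifies): a student at the 50th percentile corresponds to z = 0, so adding the average gain and evaluating the normal cumulative distribution function gives the new standing relative to an entering cohort.

# A minimal sketch of the percentile arithmetic above, assuming CLA score gains
# are approximately normally distributed -- an assumption for illustration, not
# a claim made by the book's authors.
from scipy.stats import norm

gain_two_years = 0.18   # average gain in standard deviations over two years
gain_four_years = 0.47  # average gain in standard deviations over four years

# A student starting at the 50th percentile sits at z = 0; adding the average
# gain and evaluating the normal CDF gives the new percentile relative to a
# fresh cohort of entering freshmen.
for label, gain in [("two years", gain_two_years), ("four years", gain_four_years)]:
    percentile = norm.cdf(gain) * 100
    print(f"After {label}: roughly the {percentile:.0f}th percentile")

# Prints roughly the 57th percentile after two years and the 68th percentile
# after four years; the latter matches the figure cited in the article.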

"How much are students actually learning in contemporary higher education? The answer for many undergraduates, we have concluded, is not much," write the authors, Richard Arum, professor of sociology and education at New York University, and Josipa Roksa, assistant professor of sociology at the University of Virginia. For many undergraduates, they write, "drifting through college without a clear sense of purpose is readily apparent."

The research findings at the core of the book are also being released today by their sponsor, the Social Science Research Council. (Esther Cho of the council is a co-author on that paper.)

The main culprit for students' lack of academic progress, according to the authors, is a lack of rigor. They review data from student surveys to show, for example, that 32 percent of students each semester do not take any courses with more than 40 pages of reading assigned a week, and that half don't take a single course in which they must write more than 20 pages over the course of a semester. Further, the authors note that students spend, on average, only about 12-14 hours a week studying, and that much of this time is spent studying in groups.

The research then goes on to find a direct relationship between rigor and gains in learning:

* Students who study by themselves for more hours each week gain more knowledge -- while those who spend more time studying in peer groups see diminishing gains.

* Students whose classes reflect high expectations (more than 40 pages of reading a week and more than 20 pages of writing a semester) gained more than other students.

* Students who spend more time in fraternities and sororities show smaller gains than other students.

* Students who engage in off-campus or extracurricular activities (including clubs and volunteer opportunities) have no notable gains or losses in learning.

* Students majoring in liberal arts fields see "significantly higher gains in critical thinking, complex reasoning, and writing skills over time than students in other fields of study." Students majoring in business, education, social work and communications showed the smallest gains. (The authors note that this could be more a reflection of more-demanding reading and writing assignments, on average, in the liberal arts courses than of the substance of the material.)

In section after section of the book and the research report, the authors focus on pushing students to work harder and worrying less about students' non-academic experiences. "[E]ducational practices associated with academic rigor improved student performance, while collegiate experiences associated with social engagement did not," the authors write.

In an interview, Arum said that the problems outlined in the book should be viewed as a moral challenge to higher education. Students who struggle to pay for college and emerge into a tough job market have a right to know that they have learned something, he said. "You can't have a democratic society when the elite -- the college-educated kids -- don't have these abilities to think critically," he said.

The book rejects the idea of federal mandates on testing or the curriculum, suggesting that such requirements rarely work. And the book acknowledges that many college educators and students don't yet see a crisis, given that students can enroll, earn good grades for four years, and graduate -- very much enjoying themselves in the process. But in an era when "the world has become unforgiving" to those who don't work hard or know how to think, Arum said that this may be a time to consider real change.

The culture of college needs to evolve, particularly with regard to "perverse institutional incentives" that reward colleges for enrolling and retaining students rather than for educating them. "It's a problem when higher education is driven by a student client model and institutions are chasing after bodies," he said.

The analysis in the book stresses that there is significant variation within institutions, not just among institutions, with students in some academic programs regularly outperforming others at the same campuses. Arum said this suggests that institutions can improve student learning by making sure that there is some consistency across disciplines in the rigor of requirements. "You need an internal culture that values learning," he said. "You have to have departments agree that they aren't handing out easy grades."

Further, he said that colleges need to shift attention away from measures of "social engagement" (everything that's not academic) and toward academic engagement, even if some of those measures of non-academic engagement help keep students engaged and enrolled. "It's a question of what outcome you want," he said. "If the outcome is student retention and student satisfaction, then engagement is a great strategy. If, however, you want to improve learning and enhance the academic substance of what you are up to, it is not necessarily a good strategy."

(If this sounds like a swipe at the National Survey of Student Engagement, Arum said it shouldn't be taken that way. He praises NSSE for asking questions that focus on the student experience, and says that many of NSSE's findings on the minimalist levels of academic work and studying are consistent with his own. Rather, he faults college administrators for paying little attention to those findings and more attention to NSSE's measures of non-academic satisfaction.)

Arum acknowledged that the tough economy may be acting against reform, given that many professors report that increases in class size and course loads are leading them to cut down on the ambition of student assignments simply to keep up with grading. With fewer full-time positions, professors at many institutions "are overwhelmed," he said. But Arum challenged faculty members to be creative in finding ways to assign more writing and reading to students.

Distribution of the book is just starting, but there are signs it could generate buzz. The Social Science Research Council will host a panel this week in Washington featuring experts on assessment and higher education, with representatives from leading think tanks and foundations. The book will also be discussed at next week's meeting of the Association of American Colleges and Universities.

Debra Humphreys, vice president for communications and public affairs of AAC&U, said that she viewed the book as "devastating" in its critique of higher education. Faculty members and administrators (not to mention students and parents) should be alarmed by how little learning the authors found to be taking place. She also said that the findings should give pause to those anxious to push students through and award more degrees, perhaps without giving enough attention to what happens during a college education.

"In the race to completion, there is this assumption that a credit is a credit is a credit, and when you get to the magic number of credits, you will have learned what you need to learn," she said. What this book shows, Humphreys added, is that "you can accumulate an awful lot of credits and not learn anything."

AAC&U programs have in the past stressed the value of academic rigor and also of engagement of students outside the classroom. Humphreys said that she agreed with the book that some activities students enjoy may not add to their learning. But she said it was important not to view all engagement activities in the same way. It is important, she said, "not to lump together activities such as being in a fraternity or just hanging out with friends" with extracurricular activities that may in fact be quite educational and important, even if not linked to a specific course.

Students could benefit especially, she feels, from the point in the book about the variation among those at the same institution. "I don't think we are doing well enough at helping them understand that choices matter," she said. "Choices in the academic courses they take, how much they are working outside the classroom, how much they are studying, how much they are partying -- that balance is important."

— Scott Jaschik

http://www.insidehighered.com/news/2011/01/18/study_finds_large_numbers_of_college_students_don_t_learn_much
