Saturday, February 28, 2009

3-D Visualizations for Cancer Diagnosis

University of Washington researchers have helped develop a new kind of microscope to visualize cells in three dimensions, an advance that could bring great progress in the field of early cancer detection. The technique could also bridge a widening gap between cutting-edge imaging techniques used in research and clinical practices, researchers said.

Eric Seibel, a UW mechanical engineering associate professor, and his colleagues have worked in collaboration with VisionGate, Inc., a privately held company in Gig Harbor, Wash., that holds the patents on the technology. The machine works by rotating the cell under the microscope lens and taking hundreds of pictures per rotation, and then digitally combining them to form a single 3-D image.

The 3-D visualizations could lead to big advances in early cancer detection, since clinicians today identify cancerous cells by using 2-D pictures to assess the cells' shape and size.
"It's a lot easier to spot a misshapen cell if you can see it from all sides," Seibel said. "A 2-D representation of a 3-D object is never perfectly accurate -- imagine trying to get an exact picture of the moon, seeing only one side."

The new microscope, known by the trademarked name Cell-CT, is so named because it works similarly to a CT-scan -- though on a very small scale, and using visible light instead of X-rays. In a CT-scan, the patient is immobile while the X-ray machine rotates. In the Cell-CT microscope, each cell is embedded in a special gel inside a glass tube that rotates in front of a fixed camera that takes many pictures per rotation. The gel has similar optical properties to the tube's so that no light reflects off the glass. In both processes, the end result is that hundreds of pictures are assembled to form a 3-D image that can be viewed and rotated on a computer screen.
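
A minimal sketch of the general principle at work here, not VisionGate's actual algorithm: tomographic reconstruction combines many projections taken at different rotation angles into a single cross-sectional image, which the Cell-CT then stacks into a 3-D volume. The example below uses scikit-image's textbook filtered back-projection on a synthetic "cell"; the image size, angles and shapes are illustrative assumptions.

```python
# Illustrative only: a textbook filtered back-projection, not VisionGate's method.
import numpy as np
from skimage.transform import radon, iradon

# Synthetic "cell": a bright disk with a denser, offset "nucleus".
size = 128
y, x = np.mgrid[:size, :size]
cell = ((x - 64) ** 2 + (y - 64) ** 2 < 40 ** 2).astype(float)
cell += ((x - 75) ** 2 + (y - 55) ** 2 < 12 ** 2).astype(float)

# Simulate hundreds of views taken as the specimen rotates in front of the lens.
angles = np.linspace(0.0, 180.0, 500, endpoint=False)
sinogram = radon(cell, theta=angles)        # one 1-D projection per angle

# Digitally combine the projections back into a single cross-section.
reconstruction = iradon(sinogram, theta=angles)

print("mean reconstruction error:", np.abs(reconstruction - cell).mean())
```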

The new 3-D microscope also helps to bring imaging techniques from the lab to the doctor's office. Although great advances have been made in microscope technology through the years, clinicians have been using essentially the same technique for cancer diagnoses for the last 300 years, Seibel said. Pathologists today still use a cell stain invented in the 1700s to examine sections of suspected cancers. Pathologists do not use any of the newer fluorescent molecular dyes that produce the precise, detailed cellular portraits found in biology journals.

"Scientists have been using fluorescent dyes in research for decades, but these techniques have not yet broken into everyday clinical diagnoses," Seibel said. "There's a big gap between the research and clinical worlds when it comes to cancer, and it's getting wider. We're trying to bridge that gap."

Part of the reason for this gap, Seibel said, is that there is no way to accurately match an image taken using the fluorescent dyes with an image taken using the traditional stains that currently form the basis for cancer diagnoses, and for which diagnostic standards exist. The new 3-D microscope will allow that matchup -- Seibel and his colleagues have shown simultaneous fluorescent and traditional staining of the same cells. The new device is the first 3-D microscope that can use both traditional and fluorescent stains, Seibel said.

"Now that we have a way to compare these stains, we hope this will provide a way to get some of those sophisticated research techniques into clinical use," Seibel said.
The new microscope is also more precise than other 3-D machines currently available. All other microscopes producing 3-D images have poor resolution in the up-down direction, the direction between the sample and the microscope's lens, Seibel said.

Qin Miao, a UW bioengineering doctoral student, used a tiny plastic particle of known dimensions to show the microscope's resolution. He found that the UW group's machine has three times better accuracy in that up-down direction than standard microscopes used in cancer detection. Miao will present the group's findings for the microscope's performance Feb. 9 at the SPIE Medical Imaging conference in Orlando, Fla.

In another recent publication, Seibel and his colleagues describe a study comparing cancer detection using traditional methods with their 3-D microscope. Pathologists using 3-D technology detected cancer with one-third the error rate compared to those using the traditional microscope. The authors also describe using their microscope to discover a "pre-cancer" cell, a cell that was on the verge of turning cancerous.

A New Generation of Materials

Scientists from the Madrid Institute of Advanced Studies in Materials (IMDEA Materiales) – in collaboration with a research group from CENIM (CSIC) and the Iskra Institute in Ufa, Russia – have developed a mechanical method to generate and stabilise, at room temperature and atmospheric pressure, crystalline phases of metals that until now have only been stable at very high pressures.

The atoms of metals are organized in ordered structures known as crystal lattices. The geometry of these lattices depends on the nature of the material as well as on temperature and pressure. At room temperature and atmospheric pressure, pure metals like gold, aluminium and copper have cubic lattices, while others like magnesium, titanium and zirconium have hexagonal structures (called alpha phases, α).

Increases in pressure occasionally cause changes in the geometry of the crystal lattice, resulting in the appearance of new phases. For example, in the case of titanium, the hexagonal α lattice, stable at 1 atm, transforms into a cubic structure (beta phase) when a hydrostatic pressure of approximately 1 million atmospheres is applied. If, once the cubic phase has been generated, the pressure is reduced back to 1 atm, the reverse transformation takes place, giving rise to the original hexagonal α phase. Because of the extreme pressure conditions needed to generate these new phases, the practical applications of these materials are very limited.

Scientists from IMDEA Materiales – in collaboration with the National Centre for Metallurgical Research (CENIM) and the Iskra Institute in Ufa, Russia – have developed a mechanical method to stabilise, at room temperature and atmospheric pressure, crystalline phases of metals that until now have only been stable at very high pressure. The method is based on simultaneously applying compression and shear strains. It has been shown that shear enhances the transformation kinetics significantly, eliminating the need for very high pressures. The technique has been successfully applied to pure titanium and zirconium, and a patent application has been filed.

The high-pressure phases could have properties of great technological interest. For example, cubic titanium (beta phase) is very attractive for manufacturing bone implants, since its elastic modulus is closer to that of bone than that of hexagonal titanium. Moreover, the critical superconducting temperature of beta titanium is known to be higher. This research therefore represents a first step towards manufacturing a new generation of materials with as yet unknown properties and opens the door to their practical application.

Friday, February 27, 2009

Environment Ministers have Agreed to Negotiate Mercury Treaty

Six thousand tonnes of mercury enter the environment every year, posing a threat to human and animal health. Environment ministers meeting in Kenya have agreed to negotiate a treaty to reduce the supply and use of mercury worldwide.

The ministers from 140 countries, attending UNEP's Governing Council meeting in Nairobi Feb 16-20, reached consensus to begin negotiating a legally-binding instrument to control mercury pollution next year, leading to a treaty for signature in 2013.

Governments also agreed to increase the budget of UNEP, support renewable energy and energy efficiency, and underlined the importance of investment in a "green economy" as part of worldwide economic recovery.

Mercury is found in thermometers and household products, and is used in plastic production and mining. UNEP says that of the roughly 6,000 tonnes of mercury entering the environment annually, some 2,000 tonnes come from power plants and coal burned in homes. The dense and highly toxic metal persists in the environment once released, travelling across the globe on air and sea currents.

"This decision to develop a mercury treaty is the first step in addressing the global mercury crisis. Levels of mercury have increased two- to three-fold in the last 200 years, to the point where large fish such as tuna, swordfish, shark, are not safe [to eat]," mercury campaigner Michael Bender told IPS.

Mercury is a dangerous neurotoxin that makes its way up the food chain into humans. Even slight exposure to its most toxic form, methylmercury, causes irreversible damage to the developing brains of children. In some countries, women of child-bearing age are advised not to eat certain types of fish, particularly large predatory species, which have been established to contain high mercury levels.

Bender, director of the Mercury Policy Project, a U.S.-based organisation promoting policies to eliminate mercury use, said, "The main concern is that pregnant mothers and foetuses are at greater risk of developing complications from consuming mercury contaminated fish."

While public awareness of poisoning from fish contamination is crucial, questions are being raised over the practicability of this dietary suggestion. "Such dietary restriction is terrible and impossible for many fisher communities in the world. For some people, because of the poverty level, fish is all they can afford because they can get it very fast straight from the water," said a statement by the Women's Major Group, comprising women present at the UNEP governing council meeting.

Another source of mercury poisoning is the substantial amount of mercury used in mineral processing, often in highly unsafe and environmentally hazardous conditions. It is estimated that there are about 15 million artisanal and small-scale miners in more than 50 developing countries across Asia, Africa and South America.

Upwards of 100 million people may be affected, directly and indirectly, by mercury from this sector, according to a 2007 global mercury project undertaken jointly by the Global Environment Facility, United Nations Industrial Development Organisation and the United Nations Development Programme.

"The workers are working in mines handling mercury like water; without gloves, barefoot and almost naked. They go back home with traces of mercury on their hands, putting the lives of those they live with in danger of inhaling mercury vapour," said Hemsing Hurrynag, the Africa coordinator of the Zero Mercury Campaign, an international coalition of 75 non-governmental organisations advocating for mercury reduction.

He says there is little awareness in communities surrounding mines of the dangers mercury poses to humans and the environment, even though extensive environmental degradation and ecosystem contamination have been recorded there, persisting for decades after mining activities have ceased.

A UNEP 2008 publication, Mercury Use in Artisanal and Small Scale Gold Mining, states that the rising price of gold - up from 260 dollars per ounce in March 2001 to over 1000 dollars per ounce in March 2008 - has seen a gold rush involving poverty-driven miners in many countries. Small-scale mining provides an important source of income in rural communities and regions where economic alternatives are limited.

Given that emissions from one country are transported through air and water, mercury pollution is a global issue. Earlier resistance to a legally-binding treaty came from countries that are heavily dependent on coal for power generation; India and China previously supported only voluntary cuts in emissions. The new government of the United States also reversed its position, clearing the way for negotiations to begin. Under the Bush administration, the U.S. had opposed international efforts to reach legally-binding agreements such as the one now proposed for mercury.

The intended treaty is expected to reduce production of mercury, provide for safe storage of existing stockpiles and establish awareness creation mechanisms that will inform populations about the threats posed by this toxic substance.

According to UNEP's executive director, Achim Steiner, the global nature of mercury pollution requires well-coordinated international efforts that compel countries to commit to each other. His organisation, he said, is moving straight to action.

Thursday, February 26, 2009

Acid Rain Has Been Affecting a Major Portion of China for the Last Few Years

Acid rain caused by worsening air pollution now affects one-third of China’s landmass, threatening soil quality and food safety. In 2005, acid rain hit more than half of the 696 cities and counties under air-quality monitoring, with some cities receiving all of their precipitation as acid rain.

While air quality has improved in some areas of China as a result of adjustments in the nation’s energy structure and stricter vehicle emissions standards, air quality in 40 percent of cities remains below even second-grade national standards, reflecting various levels of pollutants. Sulfur dioxide and inhalable particulate matter are the two major acid rain-causing substances.

More than 25 million tons of sulfur dioxide belched from China’s coal-fired power and coking plants last year, double the level deemed safe for the facilities' environmental capacity. The desulfurization facilities in these plants have a combined capacity of 53 million kilowatts, representing only 14 percent of total installed capacity. Shanxi Province in northern China, famous for its local coking industry, produced more than 80 million tons of coke (a solid carbon residue used in making steel) in 2005, emitting high levels of sulfurous compounds. Of the more than 680 coking enterprises province-wide, only 65 have applied for environmental protection examination and approval; of these, only 30—or about 5 percent of all coking enterprises—currently meet national sulfur emission standards.

Inhalable particulate matter (PM) is the primary pollutant affecting human health and urban air quality in China. In 2005, 35.8 percent of the nation’s cities suffered from PM pollution at levels below the second-grade national standards; the most polluted regions are northern Shanxi, Inner Mongolia, Ningxia, and southwestern Sichuan provinces. Inhalable PM is caused by emissions of soot (fine black particles composed chiefly of carbon, produced by incomplete combustion of coal, oil, wood, or other fuels) and industrial powders. In 2004, Chinese soot emissions topped 11.8 million tons and industrial powder emissions totaled 9.11 million tons.

Wednesday, February 25, 2009

Intelligent Molecules are Designed for Treatment of Diseased Cells

Current treatments for diseases like cancer typically destroy nasty malignant cells, while also hammering the healthy ones. Using new advances in synthetic biology, researchers are designing molecules intelligent enough to recognize diseased cells, leaving the healthy cells alone.

"We basically design molecules that actually go into the cell and do an analysis of the cellular state before delivering the therapeutic punch," said Christina Smolke, an assistant professor of bioengineering who joined Stanford University in January.

"When you look at a diseased cell (e.g. a cancer cell) and compare it to a normal cell, you can identify biomarkers—changes in the abundance of proteins or other biomolecule levels—in the diseased cell," Smolke said. Her research team has designed molecules that trigger cell death only in the presence of such markers. "A lot of the trick with developing effective therapeutics is the ability to target and localize the therapeutic effect, while minimizing nonspecific side effects," she said.

Smolke will present the latest applications of her lab's work at the American Association for the Advancement of Science (AAAS) meeting in Chicago on Feb. 13.

These designer molecules are created through RNA-based technologies that Smolke's lab developed at the California Institute of Technology. A recent example of these systems, developed with postdoctoral researcher Maung Nyan Win (who joined Smolke in her move to Stanford), was described in a paper published in the Oct. 17, 2008, issue of Science.

"We do our design on the computer and pick out sequences that are predicted to behave the way we like," Smolke said. When researchers generate these sequences inside the operating system of a cell, they reprogram the cell and change its function. "Building these molecules out of RNA gives us a very programmable and therefore powerful design substrate," she said.

Smolke's team focuses on well-researched model systems in breast, prostate and brain cancers, including immunotherapy applications based on reprogramming the human immune response to different diseases. The researchers work directly with clinicians at the City of Hope Cancer Center (a National Cancer Institute-designated Comprehensive Cancer Center in Duarte, Calif.) who have ongoing immunotherapy trials for treating glioma, a severe type of brain cancer.

"Our goal is to make more effective therapies by taking advantage of the natural capabilities of our immune system and introducing slight modifications in cases where it is not doing what we would like it to do," Smolke said. She hopes to translate her technologies into intelligent cellular therapeutics for glioma patients in the next five years. "That's a very optimistic view," she said. "But so far things have been moving quickly."

The broader implications for using intelligent molecules in immunotherapy and gene therapy seem limitless. Researchers and doctors can use this approach by targeting a specific cellular function or behavior they want to control in a particular disease. Then they can identify signals indicative of viral infection, host immune response, or drugs the clinician is administering and engineer the molecules to change the cell function in response to those signals.

"In a lot of therapies, you have nonspecific side effects or you're balancing the desired effect of the therapy on diseased cells or infection with its undesired effects on the entire host," Smolke said. Current chemotherapy treatments for cancer, and even many gene therapies, have drastic and debilitating consequences for patients. The designer molecules provide a whole new targeting accuracy that should mitigate these side effects.

"This is all very front-end work," Smolke said. "We've just started to move these foundational technologies into these sorts of downstream medical applications, and so there is a lot to learn … which makes it that much more exciting."

Tuesday, February 24, 2009

Raw Biomass can be Turned into Biofuels by a Two-Step Chemical Process

The key to the new process is the first step, in which cellulose is converted into the "platform" chemical 5-hydroxymethylfurfural (HMF), from which a variety of valuable commodity chemicals can be made.

Raines and Joseph Binder, a doctoral candidate in the chemistry department, developed a unique solvent system that makes this conversion possible. The special mix of solvents and additives, for which a patent is pending, has an extraordinary capacity to dissolve cellulose, the long chains of energy-rich sugar molecules found in plant material. Because cellulose is one of the most abundant organic substances on the planet, it is widely seen as a promising alternative to fossil fuels. The solvent system can dissolve cotton balls, which are pure cellulose, and yet it is a simple system: not corrosive, dangerous, expensive or stinky.

This approach simultaneously bypasses another vexing problem: lignin, the glue that holds plant cell walls together. Often described as intractable, lignin molecules act like a cage protecting the cellulose they surround. However, Raines and Binder used chemicals small enough to slip between the lignin molecules, where they work to dissolve the cellulose, cleave it into its component pieces and then convert those pieces into HMF.

In step two, Raines and Binder subsequently converted HMF into the promising biofuel 2,5-dimethylfuran (DMF). Taken together, the overall yield for this two-step biomass-to-biofuel process was 9 percent, meaning that 9 percent of the cellulose in their corn stover samples was ultimately converted into biofuel.
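
How a 9 percent overall figure can arise from two sequential steps is simple arithmetic: the fractional yields multiply. The per-step numbers below are illustrative assumptions, not values reported by Raines and Binder, who gave only the combined figure.

```python
# Hypothetical per-step yields chosen so their product matches the reported 9%.
cellulose_to_hmf = 0.30   # assumed fraction of cellulose converted to HMF (step 1)
hmf_to_dmf = 0.30         # assumed fraction of HMF converted to DMF (step 2)

overall = cellulose_to_hmf * hmf_to_dmf
print(f"overall cellulose-to-DMF yield: {overall:.0%}")   # 9%

cellulose_kg = 100        # e.g. the cellulose contained in a batch of corn stover
print(f"roughly {cellulose_kg * overall:.0f} kg of that cellulose ends up as biofuel")
```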

DMF has the same energy content as gasoline, doesn't mix with water and is compatible with the existing liquid transportation fuel infrastructure. It has already been used as a gasoline additive.

In addition to corn stover, Raines and Binder have tested their method using pine sawdust, and they're looking for more samples to try out. "Our process is so general I think we can make DMF or HMF out of any type of biomass," he says.

Monday, February 23, 2009

Scientists Map CO2 Emissions with Google Earth

The U.S. accounts for 25% of global emissions of carbon dioxide.

A team of U.S. scientists led by Purdue University unveiled an interactive Google Earth map on Feb. 19 showing carbon dioxide emissions from fossil fuels across the U.S. The high-resolution map, available at www.purdue.edu/eas/carbon/vulcan/GEarth, shows carbon dioxide emissions in metric tons in residential and commercial areas by state, county or per capita.

Called "Vulcan" after the Roman god of fire, the project, which took three years to complete, quantifies carbon dioxide emissions from burning fossil fuels such as coal and gasoline. It breaks down emissions by the sectors responsible including aircraft, commercial, electricity production, industrial, residential and transport.

"This will bring emissions information into everyone's living room as a recognizable, accessible online experience," said Kevin Gurney, the project leader and an assistant professor of earth and atmospheric sciences at Purdue. "We hope to eventually turn it into an interactive space where the public will feed information into the system to create an even finer picture of emissions down to the street and individual building level."

The U.S. accounts for some 25% of global emissions of carbon dioxide, which scientists have identified as the most important human-produced gas contributing to global climate change.

Simon Ilyushchenko, an engineer at Internet search giant Google who worked on the project, said "integrating the data with Google Earth was a way to advance public understanding of fossil fuel energy usage. Dynamic maps of the data, broken down by the different sources of emissions, easily show where people burn more gasoline from driving or where they use more fuel for heating and cooling homes and businesses."

Vulcan integrates carbon dioxide emissions data from the U.S. Environmental Protection Agency and U.S. Department of Energy. The current data is from 2002, but the scientists said they plan to incorporate more recent data. Besides Purdue, the project also involved researchers from Colorado State University and Lawrence Berkeley National Laboratory. It was funded by NASA, the U.S. Department of Energy, the Purdue Showalter Trust and Indianapolis-based Knauf Insulation.

Technology Can Help US to Search for Bin Laden's Hide-out in Pakistan


In a new study published online February 17 by the MIT International Review, UCLA geographers report that simple facts, publicly available satellite imagery and fundamental principles of geography place the mastermind behind the Sept. 11 attacks against the U.S. in one of three buildings in the northwest Pakistan town of Parachinar, in the Kurram tribal region near the border with Afghanistan.

Despite keen interest in the terrorist recluse and a $25 million reward for information leading to his capture, academics have shied away from getting involved in the quest to find him, the researchers contend. Meanwhile, dramatic improvements in remote-sensing imagery have improved the odds of civilians doing so.

"We believe our work represents the first scientific approach to establishing bin Laden's current location," said John A. Agnew, study co-author and UCLA geography professor. "The methods are repeatable and could easily be updated with new information obtained by the U.S. intelligence community."

The researchers advocate that the U.S. investigate — but not bomb — the three buildings. They warn that if bin Laden indeed remains to this day in the tiny city of Parachinar, or even elsewhere in the relatively thinly populated tribal area of Kurram, he may move to the city of Peshawar (population 1.4 million) in the neighboring North-West Frontier Province if that city falls to the Taliban. News reports have warned of that possibility since last summer.

"If bin Laden were to move to Peshawar, which would become an option if the Taliban were in control there, the search would become much more complicated," said lead author Thomas Gillespie. "It's the difference between looking for someone in L.A. versus in Big Bear," he added, referring to a mountain resort town 90 miles east of Los Angeles.

The findings are based on the last information on bin Laden's whereabouts to be made public by U.S. intelligence sources, which have closely guarded the details of any efforts to locate him. One and a half months after the coordinated attacks on the World Trade Center and the Pentagon claimed the lives of more than 3,000 people, a walkie-talkie radio broadcast placed bin Laden in Tora Bora, a cave complex in eastern Afghanistan. In an unsuccessful attempt to capture bin Laden, U.S. forces attacked the caves the following month.

The UCLA findings rely on two principles used in geography to predict the distribution of wildlife, primarily for the purposes of designing approaches to conservation. The first, known as distance-decay theory, holds that as one travels farther away from a precise location with a specific composition of species — or, in this case, a specific composition of cultural and physical factors — the probability of finding spots with that same specific composition decreases exponentially. The second, island biogeographic theory, holds that large and close islands have larger immigration rates and will support more species than smaller, more isolated islands.

Inspired by distance-decay theory, the seven-member team started by drawing concentric circles around Tora Bora on a satellite map of the area at a distance of 10 kilometers — or 6.2 miles — apart.
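
As a toy illustration of the distance-decay idea (not a calculation from the study), the relative probability of finding the same cultural and physical composition can be modelled as falling off exponentially with distance from the last known location; the decay constant below is an assumption chosen purely for illustration.

```python
import math

ring_spacing_km = 10.0     # the team drew rings 10 km apart
decay_per_km = 0.05        # assumed decay constant, for illustration only

for ring in range(1, 11):
    d = ring * ring_spacing_km
    print(f"{d:5.0f} km from Tora Bora: relative probability {math.exp(-decay_per_km * d):.2f}")
```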

"The farther bin Laden moves from his last reported location into the more secular parts of Pakistan or into India, the greater the probability that he will be in an area with a different cultural composition, thereby increasing the probability of his being captured or eliminated," Gillespie said.

Then, informed by island biogeographic theory, the researchers scoured the rings for "city islands" — or distinctly separate settlements of considerable size.

"Island biology theory predicts that he would find his way to the largest but least isolated city of that area," said Gillespie, an authority on measuring and modeling biodiversity on Earth from space. "If you get stuck on an island, you would want it to be Hawaii rather than one with a single palm tree. It's a matter of resources."

The approach netted 26 cities within a 12.4-mile radius of Tora Bora on imagery from Landsat Enhanced Thematic Mapper Plus (ETM+), a global archive of satellite photos managed by NASA and the U.S. Geological Survey. With a 2.7-square-mile footprint, Parachinar turned out to be the largest and fourth-least isolated city, the team determined.

"Based on bin Laden's last known location in Tora Bora, we estimate that he must have traveled 1.9 miles over a 13,000-foot-high pass into Kurram and then headed for the largest city, which turns out to be Parachinar," said Agnew, who is the current president of the Association of American Geographers, the field's leading scholarly organization.

The researchers ruled out cities on the Afghanistan side of the border because the country was occupied at the time by U.S. and international forces and has been particularly unstable ever since.

"The Pakistan side of the border is much better for hiding because of its ambiguous political status within the country and the formal absence of U.S. or NATO troops," Agnew said.

Faced with the prospect of picking from more than 1,000 structures clearly portrayed in the satellite imagery of Parachinar, the team decided to come up with a short list of the criteria that bin Laden would need for housing, based on well-known information about him, including his height (between 6'4" and 6'6", depending on the source), his medical condition (apparently in need of regular dialysis and, therefore, electricity to run the machine) and several basic assumptions, such as a need for security, protection, privacy and overhead cover to shield him from being spotted by planes, helicopters and satellites.

So they looked for buildings that could house someone taller than 6'4" and were surrounded by walls more than 9 feet tall (both as judged by mid-afternoon shadows depicted on the satellite imagery), and that had more than three rooms, space separating them from nearby structures, electricity and a thick tree canopy.
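
Two of the steps implied here lend themselves to small worked examples: estimating a wall's height from its mid-afternoon shadow (height = shadow length x tan(sun elevation)) and scoring each structure against the six criteria. The sun elevation, shadow length and building records below are illustrative assumptions, not measurements from the study.

```python
import math

# (1) Wall height from shadow length, illustrative numbers only.
sun_elevation_deg = 35.0     # assumed mid-afternoon sun elevation
shadow_length_m = 4.5        # assumed shadow length measured on the imagery
wall_height_m = shadow_length_m * math.tan(math.radians(sun_elevation_deg))
print(f"estimated wall height: {wall_height_m:.1f} m ({wall_height_m * 3.28:.1f} ft)")

# (2) Count how many of the six criteria each (hypothetical) structure meets.
criteria = {"tall_ceilings", "high_walls", "three_plus_rooms",
            "spacing", "electricity", "tree_canopy"}
structures = {
    "building_A": criteria,                    # meets all six
    "building_B": criteria - {"tree_canopy"},  # meets five of six
}
for name, features in structures.items():
    print(f"{name}: meets {len(features & criteria)}/6 criteria")
```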

Only three structures fit the criteria. The buildings also appeared to be the best fortified and among the largest in Parachinar. Two are clearly residences, the study states. The third may be a prison. But whatever the third structure is, it has "one of the best maintained gardens in all of Parachinar," the study says.

While the three structures meet all six of the criteria that the researchers believe would be required for lodging bin Laden, an additional 16 structures in Parachinar appear to meet five of the six criteria. If bin Laden is not in the first three structures, the U.S. military should investigate these other buildings, the study urges.

The outgrowth of an undergraduate geography course in remote sensing, the study lists five 2008 UCLA graduates as co-authors. The students have since gone on to a range of endeavors, from selling real estate and attending law school to earning a master's degree from Oxford University. One now works for a remote-sensing company.

Undergraduates had attempted to take on the same study in 2006, but at 30 x 30 meters — or nearly 100 x 100 feet — the resolution of publicly available satellite images of the area at the time was insufficient. In contrast, today's resolution is 0.6 meters, or just under 2 feet, Gillespie said. The remote-sensing company that employs one of the alumni authors plans soon to unveil a 0.4-meter resolution of the entire world.

"Technology has caught up to the question," said Gillespie, who serves as the director of the Spatial Demography Group for the UCLA-based California Center for Population Research.
"Finding Osama bin Laden: An Application of Biogeographic Theories and Satellite Imagery" is not the first attempt by Gillespie and Agnew to bring scientific analysis to nettlesome political issues. In September 2008, they received widespread attention for a satellite study of the density of lights in the night sky of Baghdad in the time leading up to, during and immediately following the U.S. military surge of 2007. The findings cast doubt on the role claimed by the U.S. military in quelling violence during that time and suggest instead that intra-sectarian conflict was responsible for clearing whole portions of the city, leaving them both dark and devoid of the objects of Iraqi-on-Iraqi violence.

Sunday, February 22, 2009

Unfortunately, Most Wars Occur in Earth's Richest Biological Regions

A new study published in the journal Conservation Biology found that more than 80 percent of the world's major armed conflicts from 1950-2000 occurred in regions identified as the most biologically diverse and threatened places on Earth.

The study by leading international conservation scientists compared major conflict zones with the Earth's 34 biodiversity hotspots identified by Conservation International (CI). The hotspots are considered top conservation priorities because they contain the entire populations of more than half of all plant species and at least 42 percent of all vertebrates, and are highly threatened.

Russell A. Mittermeier, president of Conservation International (CI) and an author of the study, said, "This astounding conclusion – that the richest storehouses of life on Earth are also the regions of the most human conflict – tells us that these areas are essential for both biodiversity conservation and human well-being. Millions of the world's poorest people live in hotspots and depend on healthy ecosystems for their survival, so there is a moral obligation – as well as political and social responsibility - to protect these places and all the resources and services they provide."

The study found that more than 90 percent of major armed conflicts – defined as those resulting in more than 1,000 deaths – occurred in countries that contain one of the 34 biodiversity hotspots, while 81 percent took place within specific hotspots. A total of 23 hotspots experienced warfare over the half-century studied.

Examples of the nature-conflict connection include the Vietnam War, when poisonous Agent Orange destroyed forest cover and coastal mangroves, and timber harvesting that funded war chests in Liberia, Cambodia and Democratic Republic of Congo (DRC). In those and countless other cases, the collateral damage of war harmed both the biological wealth of the region and the ability of people to live off of it.

In addition, war refugees must hunt, gather firewood or build encampments to survive, increasing the pressure on local resources. More weapons mean increased hunting for bush meat and widespread poaching that can decimate wildlife populations – as when 95 percent of the hippopotamuses in DRC's Virunga National Park were slaughtered.

In total, the hotspots are home to a majority of the world's 1.2 billion poorest people who rely on the resources and services provided by natural ecosystems for their daily survival. Environmental concerns tend to recede or collapse in times of social disruption, and conservation activities often get suspended during active conflicts. At the same time, war provides occasional conservation opportunities, such as the creation of "Peace Parks" along contested borders.

The study concluded that international conservation groups – and indeed the broader international community – must develop and maintain programs in war-torn regions if they are to be effective in conserving global biodiversity and keeping ecosystems healthy. It also called for integrating conservation strategies and principles into military, reconstruction and humanitarian programs in the world's conflict zones.

Glaciers in China are Melting at an Alarming Rate

A three-year study, to be used by the China Geological Survey Institute, shows that glaciers in the Yangtze source area, central to the Qinghai-Tibet plateau in south-western China, have receded 196 square kilometres over the past 40 years.

Glaciers at the headwaters of the Yangtze, China's longest river, now cover 1,051 square kilometres compared to 1,247 square kilometres in 1971, a loss of nearly a billion cubic metres of water, while the tongue of the Yuzhu glacier, the highest in the Kunlun Mountains, retreated by 1,500 metres over the same period.
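
A rough consistency check of these figures is possible with one assumption: the mean ice thickness over the lost area. The thickness below is an assumption for illustration, not a value from the survey.

```python
lost_area_km2 = 1247 - 1051              # 196 km2 of glacier area lost since 1971
assumed_mean_ice_thickness_m = 5.7       # assumed average thickness over the lost area
ice_to_water = 0.9                       # ice is roughly 90% as dense as liquid water

water_m3 = lost_area_km2 * 1e6 * assumed_mean_ice_thickness_m * ice_to_water
print(f"lost area: {lost_area_km2} km^2")
print(f"approximate water equivalent: {water_m3 / 1e9:.2f} billion cubic metres")
```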

Melting glacier water will replenish rivers in the short term, but as the resource diminishes drought will dominate the river reaches in the long term. Several major rivers including the Yangtze, Mekong and Indus begin their journeys to the sea from the Tibetan Plateau Steppe, one of the largest land-based wilderness areas left in the world.

“Once destroyed it will be extremely difficult to restore the high-altitude ecosystems,” said Dr Li Lin, head of Conservation Strategies for WWF-China. “If industrialized and developing countries do not focus their efforts on cutting emissions, some of this land will be lost forever and local populations will be displaced.”

Glacier retreat has become a major environmental issue in Tibet, particularly in the Chang Tang region of northern Tibet. The glacier melting poses severe threats to local nomads’ livelihoods and the local economy.

The most common impact is that lakes are expanding as glaciers melt, submerging some of the best pastures. Meanwhile, small glaciers are disappearing altogether because of the speed of melting, and drinking water has become a major issue.

“This problem should convince governments to adopt a ‘mountain-to-sea’ approach to manage their rivers, the so-called integrated river basin management, and to ratify the UN Water Convention as the only international agreement by which to manage transboundary rivers,” said Li Lifeng, Director of Freshwater, WWF International.

“It should also convince countries to make more effort to protect and sustainably use their high altitude wetlands in the river source areas that WWF has been working on.”

Nuclear Explosion Can Cause Serious Environmental Problems

What would happen to our lives, and those of other organisms, if there was an above-ground nuclear explosion, either incidental or accidental? Though the probability of such an apocalyptic event is relatively small, the impact has the potential of being so cataclysmic that it warrants serious discussion.

So let us try to recount what actually happens when a nuclear bomb explodes, such as the 13-kiloton bomb which exploded over Hiroshima in 1945. Although this was a very primitive nuclear device, it managed to kill over 45,000 people within 24 hours of the blast, and several generations have continued to suffer its effects.

Unlike conventional explosives, which rely on the energy generated by chemical combustion, nuclear weapons rely on the extreme energy which is generated when an atomic reaction takes place in which one element is converted into another element (for example when hydrogen is converted to helium). The difference in the energy which is generated is immense. For example, a sphere of plutonium about the size of a ball is capable of producing an explosion equivalent to 20,000 tons of TNT. There are basically three types of nuclear bombs which have been developed. The first kind are atomic bombs, which use fission reactions, or the splitting of atomic nuclei, to generate energy. This is the kind of bomb which was dropped by the Americans on the Japanese cities of Hiroshima and Nagasaki in 1945. The second variety are thermonuclear devices, which use an atomic trigger and a uranium jacket to start a fusion reaction in which lighter elements such as hydrogen are forced to combine and form a heavier element. The energy liberated from 0.5 kg (1.1 lb) of hydrogen-isotope fuel is equivalent to that of about 29 kilotons of TNT, or almost three times as much as from the uranium in an atomic bomb. The environmental impact of both these bombs would, however, be similar, though the magnitude would be greater in the case of a thermonuclear device. The third kind of nuclear weapon is the neutron bomb, which is a modified thermonuclear device that does not have a uranium jacket and thus reduces the chance of widespread radioactive fallout. The neutrons generated from the thermonuclear device can, however, generate radioactivity within a small impact radius, killing life but without causing widespread fallout destruction to buildings and infrastructure (the neutron bomb is thus a tactical weapon).

The greatest devastation is caused by a nuclear device when it is detonated slightly above ground rather than on the ground itself, because the damage is spread over a much wider area. The detonation of a nuclear device about five hundred meters above land would first generate an enormous fireball, whose radiant heat would travel at the speed of light in all directions. The intense heat, at several thousand degrees Celsius, would incinerate all organic material within seconds. Even stable substances such as sand would be thermally changed to glass. The extreme temperatures would cause otherwise harmless combustion processes to release deadly pyrotoxins that would spread as gaseous clouds beyond ground zero. For example, a woolen suit burned at extreme temperatures can release enough hydrogen cyanide to kill seven people.

The shockwave generated by the blast would travel at the speed of sound, shaking the foundations of buildings and bringing them down within a matter of minutes. The damage radius increases with the power of the bomb, approximately in proportion to its cube root. If exploded at the optimum height, therefore, a 10-megaton weapon, which is 1,000 times as powerful as a 10-kiloton weapon, will increase the distance tenfold, that is, out to 17.7 km (11 mi) for severe damage and 24 km (15 mi) for moderate damage. Meanwhile, looming over the scene would be the proverbial mushroom cloud. Propelled by the intense pressure differentials, the cloud would suck up debris and hurl it several miles into the earth's atmosphere. This cloud, depending on the intensity of the blast, would blanket the area with a pall that could last for several days, blocking out sunlight and causing severe microclimatic changes. After the extreme heat of the blast has dissipated, the debris cloud would block sunlight, decreasing the local temperature to below freezing. The effect would be similar to the global temperature decreases which occurred in 1991 when Mount Pinatubo erupted in the Philippines. Even below-ground nuclear tests can cause severe seismic variations that can lead to earthquakes and tremors within a thousand-mile radius.
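
The cube-root scaling mentioned above can be written out directly; the 10-kiloton radii below are simply back-calculated from the 10-megaton figures given in the text, which makes the tenfold stretch for a 1,000-fold yield increase visible.

```python
def damage_radius_km(yield_kt, ref_radius_km, ref_yield_kt):
    """Scale a known damage radius to another yield using the cube-root law."""
    return ref_radius_km * (yield_kt / ref_yield_kt) ** (1 / 3)

# 10-megaton figures quoted above: ~17.7 km severe damage, ~24 km moderate damage.
for label, r_10mt in [("severe", 17.7), ("moderate", 24.0)]:
    r_10kt = damage_radius_km(10, r_10mt, 10_000)
    print(f"{label} damage: ~{r_10kt:.2f} km at 10 kt vs {r_10mt} km at 10 Mt "
          f"(factor of {(10_000 / 10) ** (1 / 3):.0f})")
```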

The most insidious environmental damage of a nuclear explosion would, however, result from the release of radioactive materials that would generate intensely penetrating radiation capable of causing cellular damage for years to come. Carcinogenic (cancer-causing) and teratogenic (initiating birth defects) effects of radiation have been documented from the Hiroshima and Nagasaki blasts as well as the Chernobyl nuclear reactor meltdown. In the case of Chernobyl (which was not even a deliberate explosion), a study conducted by the Centers for Disease Control and Yale University estimated that out of the 115,000 people evacuated as a consequence of the 1986 incident, 24,000 would have a doubled risk of acquiring acute leukemia. This discussion may seem irrelevant to many people who believe that since we are simply developing the weapons as a deterrent, there is no point in thinking about their actual use. What we must remember is that there is always the chance of an accident. Indeed, there are documented cases of accidents involving tests in many parts of the world. Several islands in the South Pacific are uninhabitable for this very reason. Even the usually reticent US Defense Nuclear Agency has stated that "accidents have occurred...which released radioactive contamination because of fire or high explosive detonations".

Saturday, February 21, 2009

Pakistan is in Urgent Need of Solar Energy

During the last century the world population increased nearly fourfold and energy usage multiplied about twenty times. According to the International Atomic Energy Agency (IAEA), one of the foremost challenges Pakistan will face in the future is the supply of adequate energy. According to the Planning Commission's Vision 2030, the country's natural gas reserves, which supply about 50% of Pakistan's energy, will start declining within the next decade, and the storage capacity of dams is shrinking due to silting; the capacity of Tarbela Dam, for example, has decreased by 27 percent. Some careful estimates expect the demand-supply gap to reach up to 8,000 megawatts by 2010.

On the other hand, dependency on electric power is also increasing with the growing use of electronic equipment, cell phones being a major contributor. Since the introduction of Insta and Paktel mobile phones based on Advanced Mobile Phone System (AMPS) technology in the second half of the 1990s, Pakistan has witnessed continued growth in cell phone subscribers. The operators have been constantly expanding network coverage into small cities, suburbia, the countryside and mountainous areas.

With over twenty thousand towers erected throughout the country, cellular phone companies still reach only 61% of their target market, according to estimates of the Pakistan Telecommunication Authority (PTA). The companies are in vigorous competition to reach the remaining 39%, a step toward the country's technological and economic development. The total number of mobile-phone users grew from 22 million at the beginning of 2006 to 77 million at the end of 2007.

The London Business School estimates that an increase of 10 mobile phones per 100 people boosts GDP growth by 0.6%, and telecommunications alone accounted for 27.92% of Pakistan's total Foreign Direct Investment (FDI) in 2007-08, according to the PTA.

In Pakistan, the operating power of mobile telecom networks comes either from electric grids run by the Water and Power Development Authority (WAPDA) or from privately owned or leased diesel-fuelled generators, called gensets in telecommunication terminology. During WAPDA power outages, which stretched to 10 hours a day in the summer of 2008, gensets serve as the alternative, but they typically have a limited life owing to poor fuel quality, transportation challenges, susceptibility to theft and poor maintenance.

Humans have used energy from the sun for cooking and warmth for centuries. Thanks to technological advances, solar energy is now used for heating, cooking, hydrogen production and electric power generation, to name a few applications. To help tackle the energy crisis, engineers and technologists are devising ways to power base stations, the integral and most power-hungry equipment of a cellular network, with alternative and renewable energy resources.

Based on approximate figures, a typical base station costs nearly $100,000 and requires 3,000 watts to run, excluding the Base Station Controller (BSC) and Mobile Switching Center (MSC). One technique now popular in telecom circles for running base stations is solar energy, which in addition to being renewable and everlasting is also green, or eco-friendly. Some of the major advances in this regard are described below.
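
Before turning to those advances, a rough sizing sketch shows why a conventional 3,000-watt base station is hard to run on solar power alone. The peak-sun-hours, system losses, backup duration and battery parameters below are assumptions for illustration, not figures from any operator.

```python
load_kw = 3.0                       # continuous load of a typical BTS, as quoted above
daily_energy_kwh = load_kw * 24     # 72 kWh per day

peak_sun_hours = 5.5                # assumed average for sunny regions of Pakistan
system_efficiency = 0.75            # assumed wiring/charge-controller/battery losses
array_kw = daily_energy_kwh / (peak_sun_hours * system_efficiency)
print(f"PV array needed: ~{array_kw:.1f} kW peak")

backup_hours = 12                   # assumed autonomy for outages and cloudy spells
usable_fraction = 0.5               # assumed usable depth of discharge (lead-acid)
bus_voltage = 48                    # assumed DC bus voltage
battery_kwh = load_kw * backup_hours / usable_fraction
print(f"battery bank: ~{battery_kwh:.0f} kWh "
      f"(~{battery_kwh * 1000 / bus_voltage:.0f} Ah at {bus_voltage} V)")
```

By the same arithmetic, a station drawing only 100 watts, like the VNL design described below, needs roughly a thirtieth of that array, which is why a few square metres of panels can suffice.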

Alcatel-Lucent announced that it had powered its 200th radio site, including BTS, microwave and other electrical components, with solar power in the first quarter of 2008. The site provides coverage to a remote population on some islets of Senegal that previously had no access to wireless communications. Easily deployable, the system requires only a few solar panels and consumes comparatively little energy.

Diesel generators are inefficient in places with high daytime humidity, as in the hot-climate areas of Punjab and Sindh. Claiming to reduce fuel consumption by 75%, Maryland-based Integrated Power Corporation runs the equipment of Etisalat, the national telecommunications operator of the UAE, on solar power during daylight hours and on a diesel generator at night.

Engineers at Motorola Labs have developed a power system that uses both solar and wind energy to power remote base stations. The system can generate 1,200 watts of electric power continuously, enough to run a mid-sized cell covering an operating radius of 120 km. The combined energy, with the addition of 6,000 watts from the wind turbine, is stored in a bank of specially designed lead-acid batteries which last up to three years before requiring replacement. This hybrid power supply system remains operational to date, powering a cellular base station in a Namibian village.

Vihaan Networks Limited (VNL), an India-based network company, shocked everyone in July 2008 when it announced its rural-centric WorldGSM™, a GSM network seen as a catalyst for bringing cellular services to far-flung rural populations, in Africa in particular. A base station in WorldGSM costs as little as $25,000 and requires less than 100 watts of power, supplied by solar panels no larger than 8 square meters. It does not require any building or air conditioning, and it has a functionally integrated BSC and MSC, eliminating the need for skilled on-site staff. In contrast, a typical base station demands skilled personnel, including radio network planners, site engineers, civil engineers and equipment specialists, and is housed in a building with the capacity to accommodate three refrigerator-sized cabinets, dual air-conditioning units and a roof site. Concurrently powering themselves and recharging, VNL's base stations have battery back-ups of about 72 hours and are designed to last eight years.

The whole story revolves around three solar-powered boxes: the BlueBox, as Base Transceiver Station (BTS); the GreenBox 160i, as Base Station Controller; and the OrangeBox 600i, as Mobile Switching Center (MSC). Based on geographic parameters, WorldGSM can be deployed in two main configurations: Rural Deployment which uses Cascading Star Architecture to cover an entire rural area, and Road Deployment which uses a series of bidirectional antennas to provide coverage along the roads.

In Pakistan too, some cellular operators are considering the solar option. Warid Telecom took the initiative by deploying the country's first solar-powered BTS site in the late summer of 2008. The setup uses Huawei's environmentally friendly solar-powered macro base stations (BTS). Ufone launched its first solar-powered cell site at Haroonabad, Bahawalnagar, in November 2008.

Warid Telecom deployed its second solar-powered BTS site, at Lahore, in January 2009. Engineered in Pakistan, the BTS is claimed to deliver improved performance, saving thousands of litres of diesel per annum and eliminating the need for a generator. Telenor is also reported to be working on alternative sources for powering base stations. These BTS remove the risk of power interruption on the operator's part.

Fortunately, Pakistan is located in the sunny belt and is ideally placed to utilize solar energy. Initiatives such as VNL's suit the situation of Pakistan's less-populated rural areas, particularly those of Balochistan and the Tribal Areas, where population density is low and roads run for hundreds of kilometres between distant towns and villages. In such energy-deficient times in Pakistan's history, there is an urgent need for the government to consider these environmentally friendly and viable options.

Friday, February 20, 2009

China Unveils Electric Car

China's largest independent carmaker Chery Automobile rolled off its first plug-in electric car this week, the latest Chinese automotive company to produce an alternative energy vehicle.

The all-electric car, the S18, can go up to 150 kilometers (93 miles) on one charge and has a maximum speed of 120 kilometers (75 miles) an hour, the company said.

The battery can be fully charged within six hours using a 220-volt home outlet, while 80% of the battery can be charged within 30 minutes.
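
A back-of-the-envelope look at those charging figures is possible if a battery capacity and charger efficiency are assumed; Chery did not publish either, so both numbers below are assumptions for illustration.

```python
pack_kwh = 20.0              # assumed usable battery capacity for a ~150 km range
charger_efficiency = 0.9     # assumed

# Full charge from a 220-volt home outlet in 6 hours:
avg_power_kw = pack_kwh / (6.0 * charger_efficiency)
print(f"home charging: ~{avg_power_kw:.1f} kW average, "
      f"~{avg_power_kw * 1000 / 220:.0f} A at 220 V")

# 80% charge in 30 minutes implies a far more powerful, non-household supply:
fast_power_kw = 0.8 * pack_kwh / (0.5 * charger_efficiency)
print(f"80% in 30 minutes implies roughly {fast_power_kw:.0f} kW of charging power")
```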

"The price will be very suitable for families," Yuan Tao, vice president of Chery said, without offering details.It was also unclear when the car would be available to buy.

Unlike another Chinese carmaker, the BYD Co., which began selling its plug-in electric hybrid car in China in December, Chery has not given the S18 the option of running on petrol.

BYD's plug-in hybrid, named the F3DM, can travel 100 kilometers on its battery, or 580 kilometers in hybrid mode with petrol.

Domestic manufacturers of clean vehicles are likely to get a boost from the government in the form of a policy package to help the car industry through the global economic crisis.

China's Ministry of Finance said the government planned to subsidise purchases of alternative energy vehicles to expand domestic demand, boost the domestic car industry and reduce pollution emissions.

Earth is Destined Towards Water Bankruptcy

The acute droughts in Kenya, Argentina and the U.S. state of California are among the latest phenomena to illustrate that the global environment has been dangerously degraded. And participants in the recent World Economic Forum in Davos, Switzerland, heard that the planet could be destined towards "water bankruptcy".

It might surprise many to learn, then, that water issues are not directly included in the Kyoto protocol, the main international agreement on tackling climate change. Ensuring that this omission is not replicated in a follow-up accord scheduled to be finalised at talks in Copenhagen, Denmark, near the end of 2009, was one of the main topics addressed at a conference in Brussels Feb. 12 and 13. According to Maude Barlow, an adviser on water to the United Nations general assembly, the underlying assumptions made by many decision-makers have been misguided. Whereas they have tended to view water shortage as a consequence of climate change, the unsustainable exploitation of water is in fact "one of the major causes of climate change."

Pollution, the overstretching of rivers, and the mining of groundwater supplies are all contributing to this ecological and social calamity. So, too, is the way of life to which people in the wealthier parts of the world have become accustomed. Millions of roses sold in Europe to celebrate Valentine's Day this year originated in Africa's Rift Valley. The valley, a habitat of the endangered hippopotamus, has had its water supplies heavily drained by agribusiness companies involved in the flower trade. While private entrepreneurs have profited handsomely from this situation, Africa contains some of the worst incidences of water-related diseases on earth; more children die from such diseases than from the next three causes of death combined.

Data from the World Health Organisation suggest that 80 percent of infectious diseases in the world could be caused by dirty water. Mikhail Gorbachev, the former Soviet president, said that the conventional model of economic development being followed in much of the world is in crisis. "The unsustainability of this model is reflected by the water problem," he added. "A recent report by the UN Development Programme said that at least 700 million people - until recently it was 1 billion - face a shortage of water. At the same time, demand for water is growing all the time."

During 2008 the UN's Human Rights Council decided to carry out a three-year investigation into how access to water relates to basic rights. About 1 billion people worldwide do not have access to an adequate supply of drinking water, and 2.5 billion are not guaranteed the amount of water they need for sanitation. Despite the underlying issues of justice, over the past few decades water has increasingly been viewed by policymakers as an economic good rather than as a universal right. The bottled water industry, for example, registered global sales of 200 billion litres in plastic containers last year. Almost 90 percent of these bottles were dumped rather than recycled.

"We need to re-commit to public water," said Barlow. "We must make it uncool to go around with a bottle of commercial water on our hips."

Next month, the key players in the private water industry will gather in Istanbul. Danielle Mitterrand, widow of the late French president François Mitterrand and a human rights campaigner in her own right, said that the 100 euro (129 dollars) per day admission fee for the event illustrated its elitist nature. "Managing water is not an industrial challenge," she said. "It is a democratic challenge."

Luigi Infanti, a Catholic bishop in Chile, noted that a constitution introduced in his country in 1980 by the military dictator Augusto Pinochet promoted the privatisation of water. "Eighty percent of water was handed over to private hands," he said. "It was handed over for free and forever. In Chile, we have been fighting for years for human rights. We should fight with the same intensity for human rights relating to the environment."

The European Union has been eager to promote privatisation in poor countries by negotiating free trade agreements with them. One such accord, signed between the EU and the Caribbean region earlier this year, is designed to give western firms the possibility of a greater role in the provision of basic services. Oxfam is among the anti-poverty organisations to have expressed concern about how water could fall into private hands as a result. But Karl Falkenberg, director-general for environment in the European Commission and a former top-level EU trade negotiator, said: "We all agree that access to high quality water at a price affordable to all is important." A policy paper that his institution hopes to publish in late March will "begin to focus on the concrete actions necessary" to address global water issues, he added.

Tony Allan, a scientist at King's College London, said that the world has enough water to meet the needs of its current population of 6 billion and the 9 billion to which it is projected to rise by the middle of this century.

The problem, however, is that access to safe water is frequently tied to income. "Only poor people are short of water," he said. "Rich people can always access water for domestic uses, for their jobs and for their food."

Environmental Impacts Threaten Food Security

Worldwide demand for food is expected to grow steadily over the next 40 years, but 25 percent of the world's food production may be lost to 'environmental breakdowns' by 2050 unless urgent action is taken.

This is the message in a document presented to environment ministers from more than 140 countries meeting in Nairobi, Kenya under the auspices of the United Nations Environment Programme (UNEP) Governing Council to discuss climate change and other environmental challenges. The document, titled "The Environmental Food Crisis: The Environment's Role in Averting Future Food Crises", calls for an increase in food production to meet the needs of an estimated 2.7 billion more people. "Elevated food prices have had dramatic impact on the lives and livelihoods, including increased infant and child mortality, of those already undernourished or living in poverty and spending 70-80 percent of their daily income on food," it reads. The UNEP meeting comes as the host country, Kenya, is engulfed in a severe food crisis, with up to 10 million people facing starvation due to poor rainfall and high fertiliser prices, among other things.

Kenya's policies were criticised for failing to address the problem by developing systems geared at improving food production. "Kenya should be one of the countries rethinking how agricultural production systems should be improved. Kenya should not be facing a food shortage. It needs to be able to feed itself not only today but in years to come, even as its population increases," said Achim Steiner, UNEP executive director.

Maize flour, the staple food in Kenya, now retails at about 80 U.S. cents a kilogramme, far too high for a country where half the population lives on less than a dollar a day. A year ago, maize cost the equivalent of 30 cents a kilo.

Similarly steep price hikes led to riots in Cameroon in February 2008, when protesters outraged by high food prices took to the streets demanding huge cuts in prices. The unrest was the worst in 15 years in the central African country. "People could not understand how a country which was previously food sufficient could suddenly be food insufficient, with high prices on basic food commodities," Mary Fosi, a senior official in the country's environment ministry, told the meeting. "The main problem is that mechanised agriculture in the country is very small. There is a need to focus on advanced agricultural systems that will increase food production," she noted.

A lack of investment in agricultural development, including modern technology and machinery, has played a role in reducing yields in Africa, where most farmers still use the hand hoe to till land. Critics contend that for the continent to achieve food security, it needs to move beyond hoes and machetes and embrace a new era of technology-driven agriculture. But authorities are on the defensive, saying governments cannot afford to invest in new technologies and machinery just yet. "The technology is there; it is not that we do not want it, but our economies are poor," Bonaventure Baya, director of Tanzania's National Environment Management Council, told IPS at the meeting.

According to Baya, immediate measures to achieve food security must include educating farmers to diversify and plant alternative crops that are resistant to changing climatic conditions. This, he says, will also help conserve the environment. "Intensive land cultivation and growing of the same crop over a long period of time degrades the soil. Increasing food production and security must take into account protection of the environment, including the soil," he observed.

As the meeting considers ways of increasing food production, farmers think they have the answer: government subsidies. Peter Andenje is one such farmer. As chairman of the Association of Small Scale Maize Growers in Kitale, western Kenya, he says the government needs to subsidise the fertilisers and high-yielding seeds that are critical to increased harvests. "Many farmers cannot afford the high cost of fertilisers and seeds; some are now growing the plant without applying fertilisers. This has resulted in very low yields. Some have abandoned growing the crop because of the high cost of inputs," he said in an interview with IPS.

The UNEP document launched at the Nairobi meeting cites providing subsidies to farmers as a crucial safety net in achieving increased food production and security. But subsidies for African farmers have been vehemently opposed by donors and remain a contentious issue at international trade talks.
"What we must not do is neglect the fact that we have an environmental crisis unfolding in the agricultural production sectors and we must tackle that alongside the trade agenda, not one after the other because we are running out of time for both," Steiner stated. "This is a reasonable, fair and appropriate measure now that we are facing the challenge of sustainability in agriculture production."

Thursday, February 19, 2009

Math and Your Body

Here are some statistics about your body (assuming you are an adult, and reasonably “average”).

Blood, Sweat and Tears

We have around 42 billion blood vessels; if we were to put them all end to end, they would stretch about 160,000 km (four times around the Earth’s equator, or almost half way to the moon).

The heart pumps around 8,000 liters (800 buckets) of blood each day, and around 219 ML (megaliters) over a lifetime (roughly 88 Olympic swimming pools).

We excrete about 14,200 liters of sweat (around 1,400 buckets’ worth) over a lifetime, though this will vary with your climate and lifestyle.

Most of us will cry about 68 liters of tears (around 7 buckets), with gender differences in this statistic.
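These figures hang together if you assume a roughly 75-year lifespan, a 10-litre bucket and a 2.5-megalitre Olympic pool (my assumptions for the check, not stated above). A quick back-of-the-envelope check in Python:

    # Sanity check of the blood, sweat and tears figures.
    # Assumed: 75-year lifespan, 10-litre bucket, 2.5 ML Olympic pool.
    LIFESPAN_YEARS = 75
    DAYS_PER_YEAR = 365.25
    BUCKET_L = 10
    OLYMPIC_POOL_L = 2_500_000

    blood_per_day_L = 8_000
    blood_lifetime_L = blood_per_day_L * DAYS_PER_YEAR * LIFESPAN_YEARS
    print(f"Blood pumped over a lifetime: {blood_lifetime_L / 1e6:.0f} ML")      # ~219 ML
    print(f"...or about {blood_lifetime_L / OLYMPIC_POOL_L:.0f} Olympic pools")  # ~88
    print(f"Sweat: about {14_200 / BUCKET_L:.0f} buckets")                       # ~1,420
    print(f"Tears: about {68 / BUCKET_L:.0f} buckets")                           # ~7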

Scaffolding

We are constantly replacing our bones and will produce the equivalent of 12 skeletons of new bone during a lifetime.

Length of our DNA

Our DNA would stretch to the moon and back around 26,000 times.

The total length of DNA present in one adult human is calculated as:

(length of 1 base pair) × (number of base pairs per cell) × (number of cells in the body)
= (3.4 × 10⁻¹⁰ m) × (6 × 10⁹) × (10¹³)
≈ 2.0 × 10¹³ meters

That is the equivalent of nearly 70 trips from the earth to the sun and back.

2.0 × 10¹³ meters ≈ 133.69 astronomical units
133.69 / 2 ≈ 66.84 round trips to the sun
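The same arithmetic in Python, using the figures above plus the standard value of one astronomical unit, 1.496 × 10¹¹ m (the small difference from the numbers above comes from rounding the total to 2.0 × 10¹³ m before converting):

    # Total length of DNA in one adult human, from the figures quoted above.
    BASE_PAIR_LENGTH_M = 3.4e-10    # length of one base pair, in metres
    BASE_PAIRS_PER_CELL = 6e9       # base pairs per cell
    CELLS_IN_BODY = 1e13            # rough number of cells in the body
    AU_M = 1.496e11                 # one astronomical unit (earth-sun distance), in metres

    total_m = BASE_PAIR_LENGTH_M * BASE_PAIRS_PER_CELL * CELLS_IN_BODY
    print(f"Total DNA length: {total_m:.2e} m")                   # ~2.04e13 m
    print(f"One-way trips to the sun: {total_m / AU_M:.1f}")      # ~136
    print(f"Round trips to the sun: {total_m / (2 * AU_M):.1f}")  # ~68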


Air

We take around 500 million breaths and inhale around 300 million liters of air during our lifetime.

Population

There are over a quarter of a million extra people each day (almost 3 per second).
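Taking "a quarter of a million" as roughly 250,000 extra people per day (my reading of the figure above), the per-second rate checks out:

    # Net population growth per second, assuming ~250,000 extra people per day.
    extra_per_day = 250_000
    seconds_per_day = 24 * 60 * 60                                           # 86,400 seconds
    print(f"{extra_per_day / seconds_per_day:.1f} extra people per second")  # ~2.9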

Population growth is one of the Earth’s major problems, since it is a major contributor to poverty, environmental degradation and global warming. I hope some of you (especially those in countries with high population growth) will study population issues and become activists in this area.

Learning

While in our mother’s womb, we produce something like 250,000 new brain cells every minute.
We have (almost) all of the neurons we are ever going to have at birth, although the brain continues to grow until we are in our early 20s. At birth, our brain is around 12% of our body weight; this drops to about 2% of our body weight by our late teens.

Tips for Learning Math Formulas

Here are 10 things you can do to improve your memory when learning math formulas.

1. Read ahead

Read over tomorrow’s math lesson today. Get a general idea about the new formulas in advance, before your teacher covers them in class.

As you read ahead, you will recognize some of it, and other parts will be brand new. That’s OK: when your teacher explains them, you will already have a “hook” to hang this new knowledge on, it will make more sense, and the formulas will be easier to memorize later.

This technique also gives you an overview of the diagrams, graphs and vocabulary in the new section. Look up any new words in a dictionary so you reduce this stumbling block in class.
This step may only take 15 minutes or so before each class, but will make a huge difference to your understanding of the math you are studying.

I always used to read ahead when I was a student and I would be calm in class while all my friends were stressed out and confused about the new topic.

2. Meaning

All of us find it very difficult to learn meaningless lists of words, letters or numbers. Our brain cannot see the connections between the words and so they are quickly forgotten.

Don’t just try to learn formulas by themselves — it’s just like learning that meaningless list.
When you need to learn formulas, also learn the conditions for each formula (it might be something like “if x > 0”).

Also draw a relevant diagram or graph each time you write the formula (it might be a parabola, or perhaps a circle). You will begin to associate the picture with the formula and then later when you need to recall that formula, the associated image will help you to remember it (and its meaning, and its conditions).

During exams, many of my students would try to answer a question with the wrong formula! I could see that they successfully learned the formula, but they had no idea how to apply it.

Diagrams, graphs and pictures always help.

Most of us find it difficult to learn things in a vacuum, so make sure you learn the formulas in their right context.

When you create your summary list of formulas, include conditions and relevant pictures, graphs and diagrams.

3. Practice

You know, math teachers don’t give you homework because they are nasty creatures. They do it because they know repetition is a very important aspect of learning. If you practice a new skill, the connections between neurons in your brain are strengthened. But if you don’t practice, then the weak bonds are broken.

If you try to learn formulas without doing the practice first, then you are just making it more difficult for yourself.

4. Keep a list of symbols

Most math formulas involve some Greek letters, or perhaps some strange symbols like ^ or perhaps a letter with a bar over the top.

When we learn a foreign language, it’s good to keep a list of the new vocabulary as we come across it. As it gets more complicated, we can go back to the list to remind us of the words we learned recently but are hazy about. Learning mathematics symbols should be like this, too.
Keep a list of symbols and paste them up somewhere in your room, so that you can update it easily and can refer to it when needed. Write out the symbol in words, for example: ∑ is “sum”; ∫ is the “integration” symbol and Φ is “capital phi”, the Greek letter.

Just like when learning whole formulas, include a small diagram or graph to remind you of where each symbol came from.

Another way of keeping your list is via flash cards. Make use of dead time on the bus and learn a few formulas each day.

5. Absorb the formulas via different channels

I’ve already talked about writing and visual aids for learning formulas. Also process and learn each one by hearing it and speaking it.

An example is the formula for the derivative of a fraction with x terms on the top and bottom (known as the “Quotient Rule”). In words, the derivative is:

dy/dx = bottom times derivative of top minus top times derivative of bottom all over bottom squared.
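In symbols (a standard statement of the quotient rule, written here in LaTeX notation with u for the "top" and v for the "bottom"):

    \frac{dy}{dx} = \frac{v\,\dfrac{du}{dx} - u\,\dfrac{dv}{dx}}{v^{2}},
    \qquad \text{where } y = \frac{u}{v}.

Saying the worded version aloud while reading the symbols gives you the formula through two channels at once.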

6. Use memory techniques

Most people are capable of learning lists of unrelated numbers or words, as long as they use the right techniques. Such techniques can be applied to the learning of formulas as well.

One of these techniques is to create a story around the thing you need to learn. The crazier the story, the better it is because it is easier to remember. If the story is set in some striking physical location, it also helps to remember it later.

7. Know why

Many examinations give you a math formula sheet, so why do you still need to learn formulas? As mentioned earlier, if students don’t know what they are doing, they will choose a formula randomly, plug in the values and hope for the best. This usually leads to bad outcomes and zero marks.

I encourage you to learn the formulas, even if they are given to you in the exam. Learning the conditions for using each formula, along with the associated graphs or diagrams, means you are more likely to choose the correct formula and apply it correctly when answering a question. This is also good for future learning, because you will have a much better grasp of the basics.

8. Sleep on it

Don’t under-estimate the importance of sleep when it comes to remembering things. Deep sleep is a phase of the night in which we process what we thought about during the day; this is when more permanent memories are laid down. During REM (rapid eye movement) sleep, we rehearse new skills and consolidate them.

Avoid cramming math formulas late into the night before an exam. Have a plan for what you are going to learn and spread it out so that it is not overwhelming.

9. Healthy body, efficient brain

The healthier you are, the less you need to worry about sickness distracting from your learning. Spend time exercising and getting the oxygen flowing in your brain. This is essential for learning.

10. Remove distractions

This one is a problem for those of us who love being on the Internet, or listening to music, or talking to our friends. There are just so many things that distract us from learning what we need to learn.

Turn off all those distractions for a set time each day. You won’t die without them. Concentrate on the formulas you need to learn and use all the above techniques.

Tips for Understanding Math Formulas

Many people report that they find math difficult because they have trouble understanding math formulas. Here are some tips for making sense of them:

a. Understanding Math is like understanding a foreign language:

Say you are a native English speaker and you come across a Japanese newspaper for the first time. All the squiggles look very strange and you find you don’t understand anything.
If you want to learn to read Japanese, you need to learn new symbols, new words and new grammar. You will only start to understand Japanese newspapers (or manga comics ^_^) once you have committed to memory a few hundred symbols & several hundred words, and you have a reasonable understanding of Japanese grammar.

When it comes to math, you also need to learn new symbols (like π, θ, Σ), new words (math formulas & math terms like “function” and “derivative”) and new grammar (writing equations in a logical and consistent manner).

So before you can understand math formulas, you need to learn what each of the symbols is and what it means (including the letters). You also need to concentrate on the new vocabulary (look it up in a math dictionary for a second opinion). Also take note of the “math grammar”: the way it is written and how one step follows another.

A little bit of effort on learning the basics will produce huge benefits.

b. Learn the formulas you already understand:

All math requires earlier math. That is, all the new things you are learning now depend on what you learned last week, last semester, last year and all the way back to the numbers you learned as a little kid.

If you learn formulas as you go, it will help you to understand what’s going on in the new stuff you are studying. You will better recognize formulas, especially when the letters or the notation are changed in small ways.

Don’t always rely on formula sheets. Commit as many formulas as you can to memory — you’ll be amazed how much more confident you become and how much better you’ll understand each new concept.

c. Always learn what the formula will give you and the conditions:

I notice that a lot of students write the quadratic formula as

[−b ± √(b² − 4ac)] / (2a)

But this is NOT the quadratic formula! Well, it’s not the whole story. A lot of important stuff is missing — the bits which help you to understand it and apply it. We need to have all of the following when writing the quadratic formula:

“The solution for the quadratic equation

ax² + bx + c = 0
is given by
x = [−b ± √(b² − 4ac)] / (2a)”

A lot of students miss out the “x =” and have no idea what the formula is doing for them. Also, if you miss out the following bit, you won’t know how and when to apply the formula:

ax² + bx + c = 0

Learning the full situation (the complete formula and its conditions) is vital for understanding.
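As a small illustration of why the condition matters, here is a short Python sketch of my own (the function name and error handling are illustrative, not from any textbook) that only applies the formula when the equation really has the form ax² + bx + c = 0 with a ≠ 0 and a non-negative discriminant:

    import math

    def solve_quadratic(a, b, c):
        """Solve ax^2 + bx + c = 0 using the quadratic formula."""
        if a == 0:
            raise ValueError("Not a quadratic equation: a must be non-zero")
        disc = b**2 - 4*a*c                 # the b^2 - 4ac under the square root
        if disc < 0:
            raise ValueError("No real solutions: the discriminant is negative")
        root = math.sqrt(disc)
        return (-b + root) / (2*a), (-b - root) / (2*a)

    # Example: x^2 - 5x + 6 = 0 has solutions x = 3 and x = 2
    print(solve_quadratic(1, -5, 6))        # (3.0, 2.0)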

d. Keep a chart of the formulas you need to know:

Repetition is key to learning. If the only time you see your math formulas is when you open your textbook, there is a good chance they will be unfamiliar and you will need to start from scratch each time.

Write the formulas down and write them often. Use Post-It notes or a big piece of paper and put the formulas around your bedroom, the kitchen and the bathroom. Include the conditions for each formula and a description (in words, or a graph, or a picture).

The more familiar they are, the more chance you will recognize them and the better you will understand them as you are using them.

e. Math is often written in different ways — but with the same meaning:

A lot of confusion occurs in math because of the way it is written. It often happens that you think you know and understand a formula, and then you see it written in another way and panic.

A simple example is the fraction “half”. It can be written as 1/2, diagonally as ½, or in a vertical arrangement like a normal fraction. We can even express it with a ratio: splitting something in the ratio 1:1 gives two equal halves.

Another example where the same concept can be written in different ways is angles, which can be written as capital letters (A), or maybe in the form ∠BAC, as Greek letters (like θ) or as lower case letters (x). When you are familiar with all the different ways of writing formulas and concepts, you will be able to understand them better.

Every time your teacher starts a new topic, take particular note of the way the formula is presented and the alternatives that are possible.

Fifth Largest Ozone Hole Seen in 2008


The Antarctic ozone hole reached its annual maximum on Sept. 12, 2008, stretching over 27 million square kilometers, or 10.5 million square miles. The area of the ozone hole is calculated as an average of the daily areas for Sept. 21-30 from observations by the Ozone Monitoring Instrument (OMI) on NASA’s Aura satellite.

NOAA scientists, who have monitored the ozone layer since 1962, have determined that this year’s ozone hole has passed its seasonal peak for 2008. The data are available online.

The primary cause of the ozone hole is human-produced compounds called chlorofluorocarbons, or CFCs, which release ozone-destroying chlorine and bromine into the atmosphere. Earth’s protective ozone layer acts like a giant umbrella, blocking the sun’s ultraviolet-B rays. Though banned for the past 21 years to reduce their harmful build-up, CFCs still take many decades to dissipate from the atmosphere.

According to NOAA scientists, colder than average temperatures in the stratosphere may have helped play a part in allowing the ozone hole to develop more fully this year.

“Weather is the most important factor in the fluctuation of the size of the ozone hole from year-to-year,” said Bryan Johnson, a scientist at NOAA’s Earth System Research Laboratory in Boulder, which monitors ozone, ozone-depleting chemicals, and greenhouse gases around the globe. “How cold the stratosphere is and what the winds do determine how powerfully the chemicals can perform their dirty work.”

NASA satellites measured the maximum area of this year’s ozone hole at 10.5 million square miles and four miles deep, on Sept. 12. Balloon-borne sensors released from NOAA’s South Pole site showed the total column of atmospheric ozone dropped to its lowest count of 107 Dobson units on Sept. 28. Dobson units are a measure of total ozone in a vertical column of air.
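To get a feel for the unit: one Dobson unit corresponds to a layer of pure ozone 0.01 mm thick at standard temperature and pressure (a standard conversion, not a figure from the NOAA release), so the minimum reading above amounts to only about a millimetre of ozone:

    # Convert Dobson-unit readings into an equivalent layer of pure ozone
    # at standard temperature and pressure (1 DU = 0.01 mm by definition).
    MM_PER_DU = 0.01

    for dobson_units in (107, 300):   # 2008 minimum vs. a typical global column (~300 DU)
        print(f"{dobson_units} DU is about {dobson_units * MM_PER_DU:.2f} mm of pure ozone")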

In 2006, record-breaking ozone loss occurred as ozone thickness plunged to 93 Dobson units on Oct. 9 and sprawled over 11.4 million square miles at its peak. Scientists blamed colder-than-usual temperatures in the stratosphere for its unusually large size. Last year’s ozone hole was average in size and depth.

Starting in May, as Antarctica moves into a period of 24-hour-a-day darkness, rotating winds the size of the continent create a vortex of cold, stable air centered near the South Pole that isolates CFCs over the continent. When spring sunshine returns in August, the sun’s ultraviolet light sets off a series of chemical reactions inside the vortex that consume the ozone. The colder and more isolated the air inside the vortex, the more destructive the chemistry. By late December the southern summer is in full swing, the vortex has crumbled, and the ozone has returned—until the process begins anew the following winter.

The 1987 Montreal Protocol and other regulations banning CFCs reversed the buildup of chlorine and bromine, first noticed in the 1980s.

“These chemicals—and signs of their reduction—take several years to rise from the lower atmosphere into the stratosphere and then migrate to the poles,” said NOAA’s Craig Long, a research meteorologist at NOAA’s National Centers for Environmental Prediction. “The chemicals also typically last 40 to 100 years in the atmosphere. For these reasons, stratospheric CFC levels have dropped only a few percent below their peak in the early 2000s.”

Wednesday, February 18, 2009

Effects of Climate Change on Water Cycle

Climate change is having an impact on the water cycle, raising the question of whether we should invest in adapting to these impacts or focus on more pressing water resource issues, such as providing water and sanitation for growing populations. If investment in adapting to climate change is a priority, is it better to invest in protecting natural ecosystems or in developing engineered infrastructure?

“The traditional way of handling extreme events such as floods and droughts with engineering works should be complemented with the ecosystems approach, which integrates the management of land and water in a way that promotes conservation and sustainable use in an equitable way,” says Dr. Max Campos, Review Editor of the Latin American chapter of the IPCC Impacts and Adaptation Report.

“Climate change is indeed an important issue, but it needs to be seen in context of the many other global challenges affecting water resources such as population growth, urbanization and land use change. Adaptation is vital – but we need to adapt to the full range of factors that are stressing water resources, and not focus on human-forced climate change to the exclusion of everything else”, says Oliver Brown from the International Institute for Sustainable Development (IISD).

“It should be a must for vulnerable communities, whether in the developed or developing world, to ensure that their development ambitions are prepared for climate change. Adaptation should not be limited to the rich,” said Dr. Henk Van Schaik, Deputy Programme Coordinator at UNESCO-IHE. He argued that vulnerable communities in the developed world are preparing and investing to protect their societies, economies and environments against the impacts of climate change, but that this is not so in transition economies or in developing countries.

Beyond the question of whether to invest in pressing development issues or in adaptation measures lies the choice between natural and engineered solutions.

“Conventional approaches to climate change adaptation range from water conservation and efficient use to new operational technologies,” says Dr Mark Smith, Head of the IUCN Water Programme. “Dams and reservoirs are still considered the most effective structural means of risk management. But we need to start thinking of the environment as infrastructure for adaptation as well. Healthy and intact river basins, wetlands and floodplains make us less vulnerable to climate change. Lowering risk is a good reason for investing in watersheds and the environment.”

Don't Stress During Pregnancy!

That stress during a mother's pregnancy can cause developmental and emotional problems for her offspring has long been observed by behavioral and biological researchers. But the timing and measurement of that stress, and its results, are difficult to establish objectively in humans, since the evidence is based largely on anecdotal recollections and is also strongly influenced by genetic and other factors.

One researcher who has long wrestled with the problem of how to prove the connection between prenatal stress and its effects on offspring is Prof. Marta Weinstock-Rosin of the Hebrew University of Jerusalem School of Pharmacy, who in her experimental work with rats has been able to demonstrate that relationship in a conclusive, laboratory-tested manner.

She says, "There is an enormous advantage in working with rats, since we are able to eliminate the genetic and subjective element." The researchers were able to compare the behavior of the offspring of stressed rat mothers with those whose mothers were not stressed. They also were able to compare the results of administering various types of stress at different periods during the gestation process to see which period is the most sensitive for the production of different behavioral alterations.

Weinstock-Rosin's work, along with that of colleagues from Israel, the UK and elsewhere, will be presented at an international conference, "Long Term Consequences of Early Life Stress," which she is co-chairing with Dr. Vivette Glover of the Imperial College, London, and that will be held at Mishkenot Sha'ananim in Jerusalem on October 29 and 30.

Weinstock-Rosin has been able to show through her laboratory experiments that when rat mothers were subject to stressful situations (irritating sounds at alternating times, for example), their offspring were later shown to have impaired learning and memory abilities, less capacity to cope with adverse situations (such as food deprivation), and symptoms of anxiety and depressive-like behavior, as compared to those rats in control groups that were born to unstressed mothers. All of these symptoms parallel the impairments that have been observed in children born to mothers who were stressed in pregnancy, she points out.

Further experiments by Weinstock-Rosin and her students have shown the crucial effect of excessive levels of the hormone cortisol that is released by the adrenal gland during stress and reaches the fetal brain during critical stages of brain development. Under normal conditions, this hormone has a beneficial function in supplying instant energy, but it has to be in small amounts and for a short period of time. Under conditions of excessive stress, however, the large amount of this hormone reaching the fetal brain can cause structural and functional changes. In humans, above-normal levels of cortisol can also stimulate the release of another hormone from the placenta that will cause premature birth, another factor that can affect normal development.
Weinstock-Rosin says that further experimental work is required to study other possible effects on offspring resulting from raised hormonal levels. What already seems clear is that avoiding stress as far as possible is a good prescription for a healthy pregnancy and healthy offspring.

Tuesday, February 17, 2009

Light can be Harnessed to Drive Nanomachines

Science fiction writers have long envisioned sailing a spacecraft by the optical force of the sun's light. But the force of sunlight is too weak to fill even the oversized sails that have been tried. Now a team led by researchers at the Yale School of Engineering & Applied Science has shown that the force of light can indeed be harnessed to drive machines — when the process is scaled to nano-proportions.

Their work opens the door to a new class of semiconductor devices that are operated by the force of light. They envision a future where this process powers quantum information processing and sensing devices, as well as telecommunications that run at ultra-high speed and consume little power.
The research demonstrates a marriage of two emerging fields of research, nanophotonics and nanomechanics, which together make possible the extreme miniaturization of optics and mechanics on a silicon chip.
The energy of light has been harnessed and used in many ways. The "force" of light is different — it is a push or a pull action that causes something to move.
"While the force of light is far too weak for us to feel in everyday life, we have found that it can be harnessed and used at the nanoscale," said team leader Hong Tang, assistant professor at Yale. "Our work demonstrates the advantage of using nano-objects as "targets" for the force of light — using devices that are a billion-billion times smaller than a space sail, and that match the size of today's typical transistors."
Until now light has only been used to maneuver single tiny objects with a focused laser beam — a technique called "optical tweezers." Postdoctoral scientist and lead author, Mo Li noted, "Instead of moving particles with light, now we integrate everything on a chip and move a semiconductor device."
"When researchers talk about optical forces, they are generally referring to the radiation pressure light applies in the direction of the flow of light," said Tang. "The new force we have investigated actually kicks out to the side of that light flow."
While this new optical force was predicted by several theories, the proof required state-of-the-art nanophotonics to confine light with ultra-high intensity within nanoscale photonic wires. The researchers showed that when the concentrated light was guided through a nanoscale mechanical device, significant light force could be generated — enough, in fact, to operate nanoscale machinery on a silicon chip.
The light force was routed in much the same way electronic wires are laid out on today's large scale integrated circuits. Because light intensity is much higher when it is guided at the nanoscale, they were able to exploit the force. "We calculate that the illumination we harness is a million times stronger than direct sunlight," adds Wolfram Pernice, a Humboldt postdoctoral fellow with Tang.
"We create hundreds of devices on a single chip, and all of them work," says Tang, who attributes this success to a great optical I/O device design provided by their collaborators at the University of Washington.
It took more than 60 years to progress from the first transistors to the speed and power of today's computers. Creating devices that run solely on light rather than electronics will now begin a similar process of development, according to the authors.
"While this development has brought us a new device concept and a giant step forward in speed, the next developments will be in improving the mechanical aspects of the system. But," says Tang, "the photon force is with us."

Conversion of Methane Gas to Powder Form

Scientists have developed a material made out of a mixture of silica and water which can soak up large quantities of methane molecules. The material looks and acts like a fine white powder which, if developed for industrial use, might be easily transported or used as a vehicle fuel.

Methane is the principal component of natural gas and can be burnt in oxygen to produce carbon dioxide and water. The abundance of the gas and its relatively clean burning process makes it a good source of fuel, but due to its gaseous state at room temperature, methane is difficult to transport from its source.

Many natural gas reserves are geographically remote and can only be extracted via pipelines, so there is a need to look for other ways to transport the gas. It has been suggested that methane gas hydrate could be used as a way of containing methane gas for transportation. The disadvantage of methane gas hydrate for industry use is that it is formed at a very slow rate when methane reacts with water under pressure.

To counteract these difficulties the team used a method of breaking water up into tiny droplets to increase the surface area in contact with the gas. The team did this by mixing water with a special form of silica – a similar material to sand – which stops the water droplets from coalescing. This ‘dry water’ powder soaks up large quantities of methane quite rapidly at around water’s normal freezing point.

The team also found that ‘dry water’ could be more economical than other potential products because it is made from cheap raw materials. The material may also have industrial applications if methane could be stored more conveniently and used to power clean vehicles.
Chemists at Liverpool are now investigating ways to store larger quantities of methane gas at higher temperatures and lower pressures as part of a project funded by the UK Engineering and Physical Sciences Research Council (EPSRC).

Monday, February 16, 2009

Helium Can be Solid and Perfect Liquid at Same Time

At very low temperatures, helium can be solid and a perfect liquid at the same time. Theoreticians, though, have incorrectly explained the phenomenon for a long time. Computer simulations at ETH Zurich have shown that only impurities can make this effect possible.

Matthias Troyer and his team carry out experiments at their computers. Troyer is Professor of Computational Physics at ETH Zurich’s Institute of Theoretical Physics. He simulates quantum phenomena such as “supersolid” structures. Supersolidity describes a physical phase which can occur at very low temperatures and where a material appears to be solid and “superfluid” at the same time.

Enquiries from the armed forces
However, the word can be misunderstood, as was discovered by one of Troyer’s colleagues who works on the phenomenon in the USA. The US Navy thought that “supersolid” meant “extremely hard” and so asked the physicist whether such a material could be used to armour ships or at least put into a spray can or be used to kill someone. The physicist answered “No” – because “supersolid” does not mean super-hard. After that, the army showed no further interest.

The researchers carry out fundamental research, and no direct applications for “supersolidity” are yet on the horizon. At the same time, a group of physicists led by Matthias Troyer has shed light on how the phenomenon occurs. Their results have been published in a series of articles in Physical Review Letters. The first author of the articles is post-doctoral researcher Lode Pollet, who has since moved from ETH Zurich to the University of Massachusetts and Harvard University in the US. He is in discussions for a professorship, even though he is not yet thirty.

An incorrect explanation
Theoreticians first predicted the “supersolidity” phenomenon in 1969. Their explanation was incorrect, but this escaped notice for some time. The first evidence for “Supersolidity” was measured in an experiment only in 2004. This involved attaching a disc-shaped helium crystal to a spring and rotating it to and fro. In this arrangement, the vibration frequency depends on the rotating mass. The researchers found that the frequency became higher if they cooled the apparatus down to below 0.2 Kelvin – almost down to absolute zero. Part of the mass no longer participated in the rotation; it behaved as a superfluid, meaning it behaved like a friction-free liquid. In other words, it had become “supersolid”.

Up to this point, the measurements were still in line with the theory, but further experiments showed that the proportion of the crystal that became supersolid increased with the number of defects in the crystal. However, the theoreticians who predicted the phenomenon had done their calculations using perfect crystals, ones totally free from defects.
No effect with perfect crystals

At this juncture, the problem became interesting for the computer-assisted physics group led by Matthias Troyer at ETH Zurich and their colleagues in the US and Canada. Although the physicists also carry out experiments, they do so on computer models rather than on the material itself. This allows them to monitor the crystal more closely. For example, they experimented with crystals free from impurities, i.e. perfect crystals of the kind that cannot be grown in the laboratory. No “supersolidity” occurred here.

However, the scientists also grew virtual crystals with defects, for example by orienting the structure of one half of the crystal in a different direction to the other half. They performed this experiment using about one hundred variations with different temperatures and orientations. The result: “supersolidity” occurred where the layers of atoms with different orientations came together, and did so only if the layers did not fit together particularly well. This meant that it depended on the defects, exactly as in the laboratory experiments.
Initially, these results were met with rejection from a few scientists. The fact that the phenomenon was possible only when impurities were present did not fit with the view held by the theoreticians, who usually ignore impurities in their considerations. However, the explanation has since gained wide acceptance.

At US customs

Scientists are not the only people interested in the physicists’ results. When Lode Pollet arrived in the US, a customs officer asked him whether he was the man who worked on this material that is solid and liquid at the same time. Clearly, the American government has not yet lost interest completely.