Category: Technology

1 Nov

New Property of Flames Sparks Advances in Technology

Chemists at UCL have discovered a new property of flames, which allows them to control reactions at a solid surface in a flame and opens up a whole new field of chemical innovation.

Published in the journal Angewandte Chemie, the new study shows that the previous understanding of how flames interact with a solid surface was mistaken. For the first time, the authors have demonstrated that a particular type of chemistry, called redox chemistry, can be accurately controlled at the surface. This finding has wide implications for future technology, for example in detection of chemicals in the air, and in developing our understanding of the chemistry of lightning. It also opens up the possibility of performing nitrogen oxide and carbon dioxide electrolysis at the source for the management of greenhouse gases.

Results of the study show that, depending on the chemical make-up of the flame, scientists can record a distinctive electrical fingerprint. The fingerprint is a consequence of the behaviour of specific chemical species at a solid conducting surface, where electrons are exchanged at a very precise voltage.

Dr Daren Caruana, from the UCL Department of Chemistry, said: "Flames can be modelled to allow us to construct efficient burners and combustion engines. But the presence of charged species or ions and electrons in flames gives them a unique electrical property."

Dr Caruana added: "By considering the gaseous flame plasma as an electrolyte, we show that it is possible to control redox reactions at the solid/gas interface."

The team developed an electrode system which can be used to probe the chemical make-up of flames. By adding chemical species to the flame, they were able to pick up current signals at specific voltages, giving a unique electrochemical fingerprint called a voltammogram. The voltammograms for three different metal oxides -- tungsten oxide, molybdenum oxide and vanadium oxide -- are all unique. Furthermore, the team demonstrated that the size of the current signatures depends on the amount of the oxide in the flame. Whilst this is routinely done in liquids, it is the first time it has been shown in the gas phase.

UCL chemists have shown that there are significant differences between solid/gas reactions and their liquid-phase equivalents. Liquid-free electrochemistry offers access to a vast number of redox reactions, with current-voltage signatures that lie outside the potential limits defined by the liquid. The prospect of new redox chemistries will enable new technological applications such as electrodeposition, electroanalysis and electrolysis, which will have significant economic and environmental benefits.

Dr Caruana said: "The mystique surrounding the properties of fire has always captivated our imagination. However, there are still some very significant technical and scientific questions that remain regarding fire and flame."
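To make the idea of an electrochemical fingerprint concrete, here is a minimal Python sketch of matching peak positions in a current-voltage trace against a table of known signatures. The peak potentials, threshold and tolerance below are invented for illustration only; the study's actual values and apparatus are, of course, far more involved.

```python
import numpy as np

# Hypothetical peak potentials (volts) -- placeholders, not the paper's values.
SIGNATURES = {
    "tungsten oxide": [-0.6, 0.3],
    "molybdenum oxide": [-0.4, 0.5],
    "vanadium oxide": [-0.2, 0.7],
}

def peak_potentials(voltage, current, threshold=0.5):
    """Return the voltages of local current maxima above a threshold."""
    peaks = []
    for i in range(1, len(current) - 1):
        if (current[i] > current[i - 1] and current[i] > current[i + 1]
                and current[i] > threshold):
            peaks.append(voltage[i])
    return peaks

def identify(voltage, current, tolerance=0.05):
    """Match observed peak positions against the signature table."""
    observed = peak_potentials(voltage, current)
    best, best_hits = None, 0
    for species, expected in SIGNATURES.items():
        hits = sum(any(abs(p - e) < tolerance for p in observed)
                   for e in expected)
        if hits > best_hits:
            best, best_hits = species, hits
    return best

# Synthetic voltammogram: two narrow current peaks on a voltage sweep.
v = np.linspace(-1.0, 1.0, 400)
i = np.exp(-((v + 0.4) / 0.03) ** 2) + np.exp(-((v - 0.5) / 0.03) ** 2)
print(identify(v, i))  # -> "molybdenum oxide"
```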
1 Nov

Robots Get a Feel for the World: Touch More Sensitive Than a Human’s

What does a robot feel when it touches something? Little or nothing until now. But with the right sensors, actuators and software, robots can be given the sense of feel -- or at least the ability to identify different materials by touch.

Researchers at the University of Southern California's Viterbi School of Engineering published a study June 18 in Frontiers in Neurorobotics showing that a specially designed robot can outperform humans in identifying a wide range of natural materials according to their textures, paving the way for advancements in prostheses, personal assistive robots and consumer product testing.

The robot was equipped with a new type of tactile sensor built to mimic the human fingertip. It also used a newly designed algorithm to make decisions about how to explore the outside world by imitating human strategies. The sensor captures other human sensations as well: it can tell where and in which direction forces are applied to the fingertip, and even the thermal properties of an object being touched.

Like the human finger, the group's BioTac® sensor has a soft, flexible skin over a liquid filling. The skin even has fingerprints on its surface, greatly enhancing its sensitivity to vibration. As the finger slides over a textured surface, the skin vibrates in characteristic ways. These vibrations are detected by a hydrophone inside the bone-like core of the finger. The human finger uses similar vibrations to identify textures, but the robot finger is even more sensitive.

When humans try to identify an object by touch, they use a wide range of exploratory movements based on their prior experience with similar objects. A famous theorem by 18th-century mathematician Thomas Bayes describes how decisions might be made from the information obtained during these movements. Until now, however, there was no way to decide which exploratory movement to make next. The article, authored by Professor of Biomedical Engineering Gerald Loeb and recently graduated doctoral student Jeremy Fishel, describes their new theorem for solving this general problem as "Bayesian Exploration."

Built by Fishel, the specialized robot was trained on 117 common materials gathered from fabric, stationery and hardware stores. When confronted with one material at random, the robot could correctly identify the material 95% of the time, after intelligently selecting and making an average of five exploratory movements. It was only rarely confused by pairs of similar textures that human subjects making their own exploratory movements could not distinguish at all.

So, is touch another task that humans will outsource to robots? Fishel and Loeb point out that while their robot is very good at identifying which textures are similar to each other, it has no way to tell what textures people will prefer. Instead, they say this robot touch technology could be used in human prostheses or to assist companies who employ experts to assess the feel of consumer products and even human skin.

Loeb and Fishel are partners in SynTouch LLC, which develops and manufactures tactile sensors for mechatronic systems that mimic the human hand. Founded in 2008 by researchers from USC's Medical Device Development Facility, the start-up is now selling its BioTac sensors to other researchers and manufacturers of industrial robots and prosthetic hands.
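The Bayesian Exploration idea -- update a belief over candidate materials after each movement, then pick the movement expected to be most informative -- can be sketched in a few lines of Python. Everything below (the materials, movements, feature values and Gaussian observation model) is invented for illustration; this is a toy in the spirit of the paper, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each material has a characteristic mean vibration feature
# for each movement type. All names and numbers are invented.
MATERIALS = ["denim", "cork", "felt"]
MOVEMENTS = ["light slide", "firm slide", "tap"]
MEANS = np.array([[0.2, 0.8, 0.1],     # denim
                  [0.5, 0.4, 0.6],     # cork
                  [0.3, 0.9, 0.2]])    # felt
NOISE = 0.1                            # shared observation noise (std)

def entropy(q):
    return float(-(q * np.log(q + 1e-12)).sum())

def likelihood(obs, movement):
    """Gaussian likelihood of one observation under each material."""
    mu = MEANS[:, movement]
    return np.exp(-0.5 * ((obs - mu) / NOISE) ** 2)

def expected_information_gain(belief, movement):
    """Crude expected entropy reduction, assuming a typical outcome."""
    gain = 0.0
    for m, p in enumerate(belief):
        obs = MEANS[m, movement]               # typical outcome if m is true
        post = belief * likelihood(obs, movement)
        post /= post.sum()
        gain += p * (entropy(belief) - entropy(post))
    return gain

def explore(true_material, confidence=0.95, max_moves=20):
    """Pick the most informative movement, observe, update, repeat."""
    belief = np.ones(len(MATERIALS)) / len(MATERIALS)
    for _ in range(max_moves):
        if belief.max() >= confidence:
            break
        move = max(range(len(MOVEMENTS)),
                   key=lambda j: expected_information_gain(belief, j))
        obs = MEANS[true_material, move] + rng.normal(0.0, NOISE)
        belief = belief * likelihood(obs, move)
        belief /= belief.sum()
    return MATERIALS[int(belief.argmax())]

print(explore(true_material=0))  # -> "denim" (with high probability)
```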
1 Nov

Particle Physics: BaBar Data Hint at Cracks in the Standard Model

Recently analyzed data from the BaBar experiment may suggest possible flaws in the Standard Model of particle physics, the reigning description of how the universe works on subatomic scales.

The data from BaBar, a high-energy physics experiment based at the U.S. Department of Energy's (DOE) SLAC National Accelerator Laboratory, show that a particular type of particle decay called "B to D-star-tau-nu" happens more often than the Standard Model says it should. In this type of decay, a particle called the B-bar meson decays into a D meson, an antineutrino and a tau lepton. While the level of certainty of the excess (3.4 sigma in statistical language) is not enough to claim a break from the Standard Model, the results are a potential sign of something amiss and are likely to impact existing theories, including those attempting to deduce the properties of Higgs bosons.

"The excess over the Standard Model prediction is exciting," said BaBar spokesperson Michael Roney, professor at the University of Victoria in Canada. The results are significantly more sensitive than previously published studies of these decays, said Roney. "But before we can claim an actual discovery, other experiments have to replicate it and rule out the possibility this isn't just an unlikely statistical fluctuation."

The BaBar experiment, which collected particle collision data from 1999 to 2008, was designed to explore various mysteries of particle physics, including why the universe contains matter, but no antimatter. The collaboration's data helped confirm a matter-antimatter theory for which two researchers won the 2008 Nobel Prize in Physics. Researchers continue to apply BaBar data to a variety of questions in particle physics.

The data, for instance, have raised more questions about Higgs bosons, which arise from the mechanism thought to give fundamental particles their mass. Higgs bosons are predicted to interact more strongly with heavier particles -- such as the B mesons, D mesons and tau leptons in the BaBar study -- than with lighter ones, but the Higgs posited by the Standard Model can't be involved in this decay.

"If the excess decays shown are confirmed, it will be exciting to figure out what is causing it," said BaBar physics coordinator Abner Soffer, associate professor at Tel Aviv University. Other theories involving new physics are waiting in the wings, but the BaBar results already rule out one important model called the "Two Higgs Doublet Model." "We hope our results will stimulate theoretical discussion about just what the data are telling us about new physics," added Soffer.

The researchers also hope their colleagues in the Belle collaboration, which studies the same types of particle collisions, see something similar, said Roney. "If they do, the combined significance could be compelling enough to suggest how we can finally move beyond the Standard Model."

The results have been presented at the 10th annual Flavor Physics and Charge-Parity Violation Conference in Hefei, China, and submitted for publication in the journal Physical Review Letters. The paper is available on arXiv in preprint form.

This work is supported by DOE and NSF (USA), STFC (United Kingdom), NSERC (Canada), CEA and CNRS-IN2P3 (France), BMBF and DFG (Germany), INFN (Italy), FOM (The Netherlands), NFR (Norway), MES (Russia), and MICIIN (Spain), with additional support from Israel and India. Individuals have received funding from the Marie Curie EIF (European Union) and the A.P. Sloan Foundation (USA).
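For readers unfamiliar with "sigma" language: a 3.4-sigma excess corresponds to the one-sided tail probability of a standard normal distribution, i.e. the chance of a fluctuation at least that large if the Standard Model prediction were exactly right. A quick conversion (particle physics conventionally reserves the word "discovery" for 5 sigma):

```python
from scipy.stats import norm

sigma = 3.4
p_one_sided = norm.sf(sigma)     # survival function = 1 - CDF
print(f"p = {p_one_sided:.2e}")  # ~3.4e-04, roughly 1 chance in 3,000

# The 5-sigma "discovery" threshold is far stricter:
print(f"p = {norm.sf(5.0):.2e}")  # ~2.9e-07
```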
1 Nov

Data from NASA’s Voyager 1 Point to Interstellar Future

Data from NASA's Voyager 1 spacecraft indicate that the venerable deep-space explorer has encountered a region in space where the intensity of charged particles from beyond our solar system has markedly increased. Voyager scientists looking at this rapid rise draw closer to an inevitable but historic conclusion -- that humanity's first emissary to interstellar space is on the edge of our solar system.

"The laws of physics say that someday Voyager will become the first human-made object to enter interstellar space, but we still do not know exactly when that someday will be," said Ed Stone, Voyager project scientist at the California Institute of Technology in Pasadena. "The latest data indicate that we are clearly in a new region where things are changing more quickly. It is very exciting. We are approaching the solar system's frontier."

The data, making the 16-hour, 38-minute, 11.1-billion-mile (17.8-billion-kilometer) journey from Voyager 1 to antennas of NASA's Deep Space Network on Earth, detail the number of charged particles measured by the two high-energy telescopes aboard the 34-year-old spacecraft. These energetic particles were generated when stars in our cosmic neighborhood went supernova.

"From January 2009 to January 2012, there had been a gradual increase of about 25 percent in the amount of galactic cosmic rays Voyager was encountering," said Stone. "More recently, we have seen very rapid escalation in that part of the energy spectrum. Beginning on May 7, the cosmic ray hits have increased five percent in a week and nine percent in a month."

This marked increase is one of a triad of data sets which need to make significant swings of the needle to indicate a new era in space exploration. The second important measure from the spacecraft's two telescopes is the intensity of energetic particles generated inside the heliosphere, the bubble of charged particles the sun blows around itself. While there has been a slow decline in the measurements of these energetic particles, they have not dropped off precipitously, as could be expected when Voyager breaks through the solar boundary.

The final data set that Voyager scientists believe will reveal a major change is the measurement of the direction of the magnetic field lines surrounding the spacecraft. While Voyager is still within the heliosphere, these field lines run east-west. When it passes into interstellar space, the team expects Voyager will find that the magnetic field lines orient in a more north-south direction. Such analysis will take weeks, and the Voyager team is currently crunching the numbers of its latest data set.

"When the Voyagers launched in 1977, the space age was all of 20 years old," said Stone. "Many of us on the team dreamed of reaching interstellar space, but we really had no way of knowing how long a journey it would be -- or if these two vehicles that we invested so much time and energy in would operate long enough to reach it."

Launched in 1977, Voyager 1 and 2 are both in good health. Voyager 2 is more than 9.1 billion miles (14.7 billion kilometers) away from the sun. Both are operating as part of the Voyager Interstellar Mission, an extended mission to explore the solar system outside the neighborhood of the outer planets and beyond. NASA's Voyagers are the two most distant active representatives of humanity and its desire to explore. The Voyager spacecraft were built by NASA's Jet Propulsion Laboratory in Pasadena, Calif., which continues to operate both.
JPL is a division of the California Institute of Technology. The Voyager missions are a part of the NASA Heliophysics System Observatory, sponsored by the Heliophysics Division of the Science Mission Directorate in Washington.
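The quoted signal travel time is simply the one-way distance divided by the speed of light, which is easy to check; the small mismatch with the article's 16-hour, 38-minute figure comes from rounding in the published distance:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458
distance_km = 17.8e9                  # Voyager 1's distance per the article

seconds = distance_km / SPEED_OF_LIGHT_KM_S
hours, rem = divmod(seconds, 3600)
print(f"{int(hours)} h {int(rem // 60)} min")  # -> 16 h 29 min
```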
1 Nov

Nanotechnology Used to Harness Power of Fireflies

What do fireflies, nanorods, and Christmas lights have in common? Someday, consumers may be able to purchase multicolor strings of light that don't need electricity or batteries to glow. Scientists at Syracuse University found a new way to harness the natural light produced by fireflies (called bioluminescence) using nanoscience. Their breakthrough produces a system that is 20 to 30 times more efficient than those produced during previous experiments.

It's all about the size and structure of the custom quantum nanorods, which are produced in the laboratory by Mathew Maye, assistant professor of chemistry in SU's College of Arts and Sciences, and Rebeka Alam, a chemistry Ph.D. candidate. Maye is also a member of the Syracuse Biomaterials Institute.

"Firefly light is one of nature's best examples of bioluminescence," Maye says. "The light is extremely bright and efficient. We've found a new way to harness biology for non-biological applications by manipulating the interface between the biological and non-biological components."

Their work, "Designing Quantum Rods for Optimized Energy Transfer with Firefly Luciferase Enzymes," was published online May 23 in Nano Letters and is forthcoming in print. Collaborating on the research were Professor Bruce Branchini and Danielle Fontaine, both from Connecticut College.

Fireflies produce light through a chemical reaction between luciferin and its counterpart, the enzyme luciferase. In Maye's laboratory, the enzyme is attached to the nanorod's surface; luciferin, which is added later, serves as the fuel. The energy that is released when the fuel and the enzyme interact is transferred to the nanorods, causing them to glow. The process is called Bioluminescence Resonance Energy Transfer (BRET).

"The trick to increasing the efficiency of the system is to decrease the distance between the enzyme and the surface of the rod and to optimize the rod's architecture," Maye says. "We designed a way to chemically attach genetically manipulated luciferase enzymes directly to the surface of the nanorod." Maye's collaborators at Connecticut College provided the genetically manipulated luciferase enzyme.

The nanorods are composed of an outer shell of cadmium sulfide and an inner core of cadmium selenide. Both are semiconductors. Manipulating the size of the core and the length of the rod alters the color of the light that is produced. The colors produced in the laboratory are not possible for fireflies: Maye's nanorods glow green, orange, and red, while fireflies naturally emit a yellowish glow.

The efficiency of the system is measured on a BRET scale. The researchers found their most efficient rods (a BRET ratio of 44) occurred for a special rod architecture (called rod-in-rod) that emitted light in the near-infrared range. Infrared light has longer wavelengths than visible light and is invisible to the eye. Infrared illumination is important for such things as night vision goggles, telescopes, cameras, and medical imaging.

Maye's and Alam's firefly-conjugated nanorods currently exist only in their chemistry laboratory. Additional research is ongoing to develop methods of sustaining the chemical reaction -- and energy transfer -- for longer periods of time and to scale up the system. Maye believes the system holds the most promise for future technologies that will convert chemical energy directly to light; even so, the idea of glowing nanorods substituting for LED lights is not the stuff of science fiction.
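The BRET figure quoted above is an efficiency ratio: light re-emitted by the nanorod (the acceptor) relative to light emitted by the luciferase reaction itself (the donor). A trivial sketch with placeholder numbers, not measured values:

```python
# BRET ratio: acceptor emission intensity divided by donor emission
# intensity. The intensities below are invented for illustration.
def bret_ratio(acceptor_emission, donor_emission):
    return acceptor_emission / donor_emission

print(bret_ratio(acceptor_emission=440.0, donor_emission=10.0))  # -> 44.0
```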
1 Nov

Engineers Perfecting Carbon Nanotubes for Highly Energy-Efficient Computing

Energy efficiency is the most significant challenge standing in the way of continued miniaturization of electronic systems, and miniaturization is the principal driver of the semiconductor industry. "As we approach the ultimate limits of Moore's Law, however, silicon will have to be replaced in order to miniaturize further," said Jeffrey Bokor, deputy director for science at the Molecular Foundry at the Lawrence Berkeley National Laboratory and professor at UC Berkeley.

To this end, carbon nanotubes (CNTs) are a significant departure from traditional silicon technologies and a very promising path to solving the challenge of energy efficiency. CNTs are cylindrical nanostructures of carbon with exceptional electrical, thermal and mechanical properties. Nanotube circuits could provide a ten-times improvement in energy efficiency over silicon.

Early promise

When the first rudimentary nanotube transistors were demonstrated in 1998, researchers imagined a new age of highly efficient, advanced computing electronics. That promise has yet to be realized, owing to substantial material imperfections inherent to nanotubes that left engineers wondering whether CNTs would ever prove viable.

Over the last few years, a team of Stanford engineering professors, doctoral students, undergraduates, and high-school interns, led by Professors Subhasish Mitra and H.-S. Philip Wong, took on the challenge and has produced a series of breakthroughs that represent the most advanced computing and storage elements yet created using CNTs. These high-quality, robust nanotube circuits are immune to the stubborn and crippling material flaws that have stumped researchers for over a decade, a difficult hurdle that has prevented the wider adoption of nanotube circuits in industry. The advance represents a major milestone toward very-large-scale integrated (VLSI) systems based on nanotubes.

"The first CNTs wowed the research community with their exceptional electrical, thermal and mechanical properties over a decade ago, but this recent work at Stanford has provided the first glimpse of their viability to complement silicon CMOS transistors," said Larry Pileggi, Tanoto Professor of Electrical and Computer Engineering at Carnegie Mellon University and director of the Focus Center Research Program Center for Circuit and System Solutions.

Major barriers

While there have been significant accomplishments in CNT circuits over the years, they have come mostly at the single-nanotube level. At least two major barriers remain before CNTs can be harnessed into technologies of practical impact. First, "perfect" alignment of nanotubes has proved all but impossible to achieve, introducing detrimental stray conducting paths and faulty functionality into the circuits. Second, the presence of metallic CNTs (as opposed to more desirable semiconducting CNTs) in the circuits leads to short circuits, excessive power leakage and susceptibility to noise. No CNT synthesis technique has yet produced exclusively semiconducting nanotubes.

"Carbon nanotube transistors are attractive for many reasons as a basis for dense, energy-efficient integrated circuits in the future. But, being borne out of chemistry, they come with unique challenges as we try to adapt them into microelectronics for the first time. Chief among them is variability in their placement and their electrical properties. The Stanford work, which looks at designing circuits taking such variability into consideration, is therefore an extremely important step in the right direction," said Supratik Guha, director of the Physical Sciences Department at the IBM Thomas J. Watson Research Center.

"This is very interesting and creative work. While there are many difficult challenges ahead, the work of Wong and Mitra makes good progress at solving some of these challenges," added Bokor.

Realizing that better processes alone will never overcome these imperfections, the Stanford engineers managed to circumvent the barriers using a unique imperfection-immune design paradigm to produce the first-ever full-wafer-scale digital logic structures that are unaffected by misaligned and mis-positioned CNTs. Additionally, they addressed the challenge of metallic CNTs with the invention of a technique to remove these undesirable elements from their circuits.

Striking features

The Stanford design approach has two striking features: it sacrifices virtually none of CNTs' energy efficiency, and it is compatible with existing fabrication methods and infrastructure, pushing the technology a significant step toward commercialization.

"This transformative research is made all the more promising by the fact that it can co-exist with today's mainstream silicon technologies, and leverage today's manufacturing and system design infrastructure, providing the critical feature of economic viability," said Betsy Weitzman of the Focus Center Research Program at the Semiconductor Research Corporation.

The engineers next demonstrated the possibilities of their techniques by creating the essential components of digital integrated systems: arithmetic circuits and sequential storage, as well as the first monolithic three-dimensional integrated circuits with extreme levels of integration.

The Stanford team's work was featured recently as an invited paper at the International Electron Devices Meeting (IEDM) as well as a "keynote paper" in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.

"Many researchers assumed that the way to live with imperfections in CNT manufacturing was through expensive fault-tolerance techniques. Through clever insights, Mitra and Wong have shown otherwise. Their inexpensive and practical methods can significantly improve CNT circuit robustness, and go a long way toward making CNT circuits viable," said Sachin S. Sapatnekar, editor-in-chief of the IEEE Transactions on CAD. "I anticipate high reader interest in the paper," Sapatnekar noted.
1 Nov

Robotic Assistants May Adapt to Humans in the Factory

In today's manufacturing plants, the division of labor between humans and robots is quite clear: Large, automated robots are typically cordoned off in metal cages, manipulating heavy machinery and performing repetitive tasks, while humans work in less hazardous areas on jobs requiring finer detail.

But according to Julie Shah, the Boeing Career Development Assistant Professor of Aeronautics and Astronautics at MIT, the factory floor of the future may host humans and robots working side by side, each helping the other in common tasks. Shah envisions robotic assistants performing tasks that would otherwise hinder a human's efficiency, particularly in airplane manufacturing.

"If the robot can provide tools and materials so the person doesn't have to walk over to pick up parts and walk back to the plane, you can significantly reduce the idle time of the person," says Shah, who leads the Interactive Robotics Group in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "It's really hard to make robots do careful refinishing tasks that people do really well. But providing robotic assistants to do the non-value-added work can actually increase the productivity of the overall factory."

A robot working in isolation simply has to follow a set of preprogrammed instructions to perform a repetitive task. But working with humans is a different matter: For example, each mechanic working at the same station at an aircraft assembly plant may prefer to work differently -- and Shah says a robotic assistant would have to effortlessly adapt to an individual's particular style to be of any practical use.

Now Shah and her colleagues at MIT have devised an algorithm that enables a robot to quickly learn an individual's preference for a certain task, and adapt accordingly to help complete the task. The group is using the algorithm in simulations to train robots and humans to work together, and will present its findings at the Robotics: Science and Systems Conference in Sydney in July.

"It's an interesting machine-learning human-factors problem," Shah says. "Using this algorithm, we can significantly improve the robot's understanding of what the person's next likely actions are."

Taking wing

As a test case, Shah's team looked at spar assembly, a process of building the main structural element of an aircraft's wing. In the typical manufacturing process, two pieces of the wing are aligned. Once in place, a mechanic applies sealant to predrilled holes, hammers bolts into the holes to secure the two pieces, then wipes away excess sealant. The entire process can be highly individualized: For example, one mechanic may choose to apply sealant to every hole before hammering in bolts, while another may like to completely finish one hole before moving on to the next. The only constraint is the sealant, which dries within three minutes.

The researchers say robots such as FRIDA, designed by Swiss robotics company ABB, may be programmed to help in the spar-assembly process. FRIDA is a flexible robot with two arms capable of a wide range of motion that Shah says can be manipulated to either fasten bolts or paint sealant into holes, depending on a human's preferences.

To enable such a robot to anticipate a human's actions, the group first developed a computational model in the form of a decision tree. Each branch along the tree represents a choice that a mechanic may make -- for example, continue to hammer a bolt after applying sealant, or apply sealant to the next hole? "If the robot places the bolt, how sure is it that the person will then hammer the bolt, or just wait for the robot to place the next bolt?" Shah says. "There are many branches."

Using the model, the group performed human experiments, training a laboratory robot to observe an individual's chain of preferences. Once the robot learned a person's preferred order of tasks, it quickly adapted, either applying sealant or fastening a bolt according to that person's particular style of work.

Working side by side

Shah says that in a real-life manufacturing setting, she envisions robots and humans undergoing an initial training session off the factory floor. Once the robot learns a person's work habits, its factory counterpart can be programmed to recognize that same person and initialize the appropriate task plan. Shah adds that many workers in existing plants wear radio-frequency identification (RFID) tags -- a potential way for robots to identify individuals.

Steve Derby, associate professor and co-director of the Flexible Manufacturing Center at Rensselaer Polytechnic Institute, says the group's adaptive algorithm moves the field of robotics one step closer to true collaboration between humans and robots. "The evolution of the robot itself has been way too slow on all fronts, whether on mechanical design, controls or programming interface," Derby says. "I think this paper is important -- it fits in with the whole spectrum of things that need to happen in getting people and robots to work next to each other."

Shah says robotic assistants may also be programmed to help in medical settings. For instance, a robot may be trained to monitor lengthy procedures in an operating room and anticipate a surgeon's needs, handing over scalpels and gauze, depending on a doctor's preference. While such a scenario may be years away, robots and humans may eventually work side by side, with the right algorithms.

"We have hardware, sensing, and can do manipulation and vision, but unless the robot really develops an almost seamless understanding of how it can help the person, the person's just going to get frustrated and say, 'Never mind, I'll just go pick up the piece myself,'" Shah says.

This research was supported in part by Boeing Research and Technology and conducted in collaboration with ABB.
1 Nov

Toddler Spatial Knowledge Boosts Understanding of Numbers

Children who are skilled in understanding how shapes fit together to make recognizable objects also have an advantage when it comes to learning the number line and solving math problems, research at the University of Chicago shows.

The work is further evidence of the value of providing young children with early opportunities in spatial learning, which contributes to their ability to mentally manipulate objects and understand spatial relationships -- skills important in a wide range of tasks, including reading maps and graphs and understanding diagrams showing how to put things together. Those skills have also been shown to be important in Science, Technology, Engineering and Math (STEM) fields.

Scholars at UChicago have shown, for instance, that working with puzzles and learning to identify shapes are connected to improved spatial understanding and better achievement, particularly in geometry. A new paper, however, is the first to connect robust spatial learning with better comprehension of other aspects of mathematics, such as arithmetic.

"We found that children's spatial skills at the beginning of first and second grades predicted improvements in linear number line knowledge over the course of the school year," said Elizabeth Gunderson, a UChicago postdoctoral scholar who is lead author of the paper, "The Relation Between Spatial Skill and Early Number Knowledge: The Role of the Linear Number Line," published in the current issue of the journal Developmental Psychology.

In addition to finding the importance of spatial learning to improving understanding of the number line, the team also showed that better understanding of the number line boosted mathematics performance on a calculation task.

"These results suggest that improving children's spatial thinking at a young age may not only help foster skills specific to spatial reasoning but also improve symbolic numerical representations," said co-author Susan Levine, a leading authority on spatial and mathematical learning. "This is important since spatial learning is malleable and can be positively influenced by early spatial experiences," added Levine, the Stella M. Rowley Professor in Psychology at UChicago.

Gunderson, PhD'12, and the research team reasoned that improved understanding of spatial relationships would help students figure out the approximate location of numbers along a line and could lead to better mathematics performance. They tested their idea with two experiments.

In the first experiment, the team studied 152 first- and second-grade boys and girls from diverse backgrounds in five urban schools. The children were given tests at the beginning and end of the school year to see how well they could locate numbers on a straight, unmarked line with zero at one end and 1,000 at the other. At the beginning of the school year, the researchers also assessed children's spatial knowledge on a task that required them to choose, from among four alternatives, the correct piece that could be added to others to complete a square shape. The students with the strongest spatial skills showed the most growth in their number line knowledge over the course of the school year.

In a second experiment, the team showed the relationship among spatial skills, number line knowledge and facility in solving mathematics problems. That study was based on information gathered from a study of 42 children who were videotaped between the ages of five and eight while having everyday interactions with their parents and caregivers. The children were tested for spatial knowledge when they were five-and-a-half years old, and for number line knowledge when they were a little older than six. At age eight, their calculation skills were assessed on a task that required them to approximate the answer.

Consistent with the results of the first study, this study showed clearly that the children with better spatial skills performed better on number line tests. Importantly, this number line knowledge was related to their later performance on the approximate calculation tests when they were eight years old.

"Improving children's spatial skills may have positive impacts on their future success in science, technology, engineering or mathematics disciplines, not only by improving spatial thinking but also by enhancing the numerical skills that are critical for achievement in all STEM fields," Gunderson said.

Joining Gunderson and Levine in writing the paper were Gerardo Ramirez, a graduate student in psychology at UChicago, and Sian Beilock, associate professor in psychology at UChicago. Grants from the National Institute of Child Health and Human Development, the National Science Foundation and the National Center for Education Research supported the research.
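A common way to score the unmarked 0-to-1,000 number line task is percent absolute error: the distance between the child's mark and the target, as a share of the line's length. A minimal sketch with invented trial data (the paper's own scoring may differ):

```python
# Percent absolute error (PAE) for number line estimation.
# The (target, estimate) pairs below are invented for illustration.
def percent_absolute_error(target, estimate, line_max=1000):
    return abs(estimate - target) / line_max * 100

trials = [(150, 240), (500, 480), (830, 700)]  # (target, child's estimate)
errors = [percent_absolute_error(t, e) for t, e in trials]
print(f"mean PAE: {sum(errors) / len(errors):.1f}%")  # -> mean PAE: 8.0%
```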
1 Nov

Researchers Watch Tiny Living Machines Self-Assemble

Enabling bioengineers to design new molecular machines for nanotechnology applications is one of the possible outcomes of a study by University of Montreal researchers that was published in Nature Structural & Molecular Biology June 10. The scientists have developed a new approach to visualize how proteins assemble, which may also significantly aid our understanding of diseases such as Alzheimer's and Parkinson's, which are caused by errors in assembly.

"In order to survive, all creatures, from bacteria to humans, monitor and transform their environments using small protein nanomachines made of thousands of atoms," explained the senior author of the study, Prof. Stephen Michnick of the university's department of biochemistry. "For example, in our sinuses, there are complex receptor proteins that are activated in the presence of different odor molecules. Some of those scents warn us of danger; others tell us that food is nearby."

Proteins are made of long linear chains of amino acids, which have evolved over millions of years to self-assemble extremely rapidly -- often within thousandths of a split second -- into a working nanomachine. "One of the main challenges for biochemists is to understand how these linear chains assemble into their correct structure given an astronomically large number of other possible forms," Michnick said.

"To understand how a protein goes from a linear chain to a unique assembled structure, we need to capture snapshots of its shape at each stage of assembly," said Dr. Alexis Vallée-Bélisle, first author of the study. "The problem is that each step exists for a fleetingly short time and no available technique enables us to obtain precise structural information on these states within such a small time frame. We developed a strategy to monitor protein assembly by integrating fluorescent probes throughout the linear protein chain so that we could detect the structure of each stage of protein assembly, step by step, to its final structure."

The protein assembly process is not the end of its journey, as a protein can change, through chemical modifications or with age, to take on different forms and functions. "Understanding how a protein goes from being one thing to becoming another is the first step towards understanding and designing protein nanomachines for biotechnologies such as medical and environmental diagnostic sensors, drug synthesis or delivery," Vallée-Bélisle said.

This research was supported by the Natural Sciences and Engineering Research Council of Canada and le Fonds de recherche du Québec -- Nature et technologies. The article, "Visualizing transient protein folding intermediates by tryptophan scanning mutagenesis," published in Nature Structural & Molecular Biology, was coauthored by Alexis Vallée-Bélisle and Stephen W. Michnick of the Département de Biochimie de l'Université de Montréal. The University of Montreal is known officially as Université de Montréal.
1 Nov

Woolly Mammoth Extinction Has Lessons for Modern Climate Change

Although humans and woolly mammoths co-existed for millennia, the shaggy giants disappeared from the globe between 4,000 and 10,000 years ago, and until recently scientists couldn't explain exactly how the Flintstonian behemoths went extinct. In a paper published June 12 in the journal Nature Communications, UCLA researchers and colleagues reveal that not long after the last ice age, the last woolly mammoths succumbed to a lethal combination of climate warming, encroaching humans and habitat change -- the same threats facing many species today.

"We were interested to know what happened to this species during the climate warming at the end of the last ice age because we were looking for insights into what might happen today due to human-induced climate change," said Glen MacDonald, director of UCLA's Institute of the Environment and Sustainability (IoES). "The answer to why woolly mammoths died off sounds a lot like what we expect with future climate warming."

MacDonald, a professor of geography and of ecology and evolutionary biology, worked with UCLA IoES scientists Robert Wayne and Blaire Van Valkenburgh, UCLA geographer Konstantine Kremenetski, and researchers from UC Santa Cruz, the Russian Academy of Science and the University of Hawaii Manoa. Their work shows that although hunting by people may have contributed to the demise of woolly mammoths, contact with humans isn't the only reason this furry branch of the Elephantidae family went extinct. By creating the most complete maps to date of all the changes happening thousands of years ago, the researchers showed that the extinction didn't line up with any single change but with the combination of several new pressures on woolly mammoths.

When the last ice age ended about 15,000 years ago, woolly mammoths were on the rise. Warming melted glaciers, but the still-chilly temperatures were downright comfy for such furry animals and kept plant life in just the right balance. It was good weather for growing mammoths' preferred foods, while still too cold for the development of thick forests to block their paths or for marshy peatlands to slow their stride.

But the research explains that the end was coming for the last of the woolly mammoths, who inhabited Beringia, a chilly region spanning the area of the Bering Strait that included wide swaths of Alaska, the Yukon and Siberia. Though humans had hunted woolly mammoths in Siberia for millennia, it wasn't until the last ice age that people crossed the Bering Strait and began hunting them in Alaska and the Yukon for the first time.

After a harsh, 1,500-year cold snap called the Younger Dryas about 13,000 years ago, the climate began to get even warmer. The rising temperatures led to a decline in woolly mammoths' favored foods, like grasses and willows, and encouraged the growth of low-nutrient conifers and potentially toxic birch. Marshy peatlands developed, forcing the mammoths to struggle through difficult and nutritionally poor terrain, and forests became more abundant, squeezing mammoths out of their former territory.

"It's not just the climate change that killed them off," MacDonald said. "It's the habitat change and human pressure. Hunting expanded at the same time that the habitat became less amenable."

Most of the woolly mammoths died about 10,000 years ago, with the final small populations, which were living on islands, lingering until about 4,000 years ago.

Many previous theories about the mammoths' extinction tended to blame only one thing: hunting, climate changes, disease or even an ice-melting, climate-changing meteor, MacDonald said. The new research marks the first time scientists mapped out and dated so many different aspects of the era at once. Using radiocarbon dating of fossils, the researchers were able to trace the changing locations of peatlands, forests, plant species, mammoth populations and human settlements over time, and they cross-referenced this information with climate-change data. The research used 1,323 mammoth radiocarbon dates, 658 peatland dates, 447 tree dates, and 576 dates from Paleolithic archaeological sites.

Scientists from IoES and other UCLA departments obtained samples and worked on radiocarbon dating of the peatlands and the forests, and they created a database uniting information on hundreds of previously dated mammoth samples, developing the final map from thousands of dates and latitude and longitude records.

That's what drew Van Valkenburgh, a paleontologist and professor of ecology and evolutionary biology, to the project. "Glen's project combined paleobotanical, paleontological, genetic, archaeological and paleoclimate data and did it in a bigger way, with many more data points, than has been done before," said Van Valkenburgh, who interpreted the archaeological record. "I was excited to be able to contribute to such an ambitious and exciting study."

She and Wayne, a UCLA molecular geneticist and professor of ecology and evolutionary biology who studies ancient DNA, used different methods of examining the mammoth fossils to reconstruct the ancient population size. "It's a dramatic advance in the amount of data," said Wayne, who reconstructed mitochondrial DNA from radiocarbon-dated woolly mammoth remains. "Essentially, larger populations should have greater genetic diversity. However, in this case, the extent of fossil remains provided a more high-resolution picture of how the population size changed through time than genetic diversity."

Mapping the size and location of both mammoth and human populations alongside temperature changes and plant locations through time gave the researchers a uniquely complete view of what happened, MacDonald said. "We are, in a sense, time-traveling with our maps to look at the mammoths," he said.

It's something MacDonald has dreamed of for a long time, he said. He was working in Siberia several years ago when a colleague found a woolly mammoth tooth. "We looked at it and held it, and just the thought that those immense creatures had been there not that long ago in geologic time and yet completely disappeared was really amazing," MacDonald said. "How warming in the past has been involved in extinction might help us prevent extinctions in the future."