November 30, 2013
3-D Imaging Technique Utilizes Famous Mathematician's Theory
Dr. Xiaohu Guo (right), associate professor of computer science, led a team of researchers that included
PhD candidate Zichun Zhong (left) in developing a technique to create 3-D images.
UT Dallas computer scientists have developed a 3-D imaging technique that puts a theory created by a famous mathematician to practical use.
This technique uses anisotropic triangles – triangles with sides that vary in length depending on their direction – to create 3-D “mesh” computer graphics that more accurately approximate the shapes of the original objects, and in a shorter amount of time than current techniques. These types of images are used in movies, video games and computer modeling of various phenomena, such as the flow of water or air across the Earth, the deformation and wrinkles of clothes on the human body, or in mechanical and other types of engineering designs. Researchers hope this technique will also lead to greater accuracy in models of human organs to more effectively treat human diseases, such as cancer.
“Anisotropic mesh can provide better simulation results for certain types of problems, for example, in fluid dynamics,” said Dr. Xiaohu Guo, associate professor of computer science in the Erik Jonsson School of Engineering and Computer Science whose team created the technique.
The technique finds a practical application of the Nash embedding theorem, which was named after mathematician John Forbes Nash Jr., subject of the film A Beautiful Mind.
“The underlying mathematics we used to solve this problem are rigorous and beautiful,” Guo said. “Finding a way to use the theory in a practical application will have a great impact in the field.”
Guo and his team found that replacing isotropic triangles (part 1) with anisotropic triangles (part 2)
in the particle-based method of creating images resulted in smoother representations of objects.
The computer graphics field represents shapes in the virtual world through triangle meshes. Traditionally, it has been believed that isotropic triangles – where each side of the triangle has the same length regardless of direction – are the best representation of shapes. However, the aggregate of these uniform triangles can create edges or bumps that are not on the original objects. Because triangle sides can differ in anisotropic images, creating images with this technique gives the user the flexibility to represent object edges and folds more accurately.
Guo and his team found that replacing isotropic triangles with anisotropic triangles in the particle-based method of creating images resulted in smoother representations of objects. Depending on the curvature of the objects, the technique can generate the image up to 125 times faster than common approaches. For example, Guo’s approach created a circular image in 155 seconds, versus more than 19,500 seconds for a common approach to generate an image of similar quality.
Objects rendered with anisotropic triangles are more accurate, and the difference is most noticeable to the human eye in the wrinkles and movement of clothes on human figures.
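A toy calculation (my illustration, not the team’s algorithm) hints at why stretched triangles win. For a surface such as z = x², which curves in x but is flat in y, triangles elongated along y sacrifice no accuracy, so far fewer of them achieve the same worst-case interpolation error:

```python
def max_interp_error(nx):
    """Worst-case error of piecewise-linear interpolation of z = x^2
    with nx intervals along x; the y direction contributes nothing
    because z does not depend on y."""
    worst = 0.0
    for i in range(1001):
        x = i / 1000
        k = min(int(x * nx), nx - 1)
        x0, x1 = k / nx, (k + 1) / nx
        z_lin = x0 * x0 + (x - x0) * (x0 + x1)  # chord through the two knots
        worst = max(worst, abs(z_lin - x * x))
    return worst

iso_tris = 2 * 8 * 8       # isotropic 8x8 grid: 128 small triangles
aniso_tris = 2 * 8 * 1     # anisotropic 8x1 grid of y-stretched triangles: 16
err = max_interp_error(8)  # identical worst-case error for both meshes
print(err, aniso_tris, iso_tris)
```

Here the anisotropic mesh needs an eighth of the triangles for the same error, a milder version of the large savings the article reports for curved shapes.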
The next step of this research is moving from representing the surface of 3-D objects to representing 3-D volume.
“If we are going to create accurate representations of human organs, we need to account for the movement of cells below the organ’s surface,” Guo said.
The research was presented at the Association for Computing Machinery SIGGRAPH conference earlier this year.
Zichun Zhong, research assistant in computer science and PhD candidate at UT Dallas, was also involved in this research. Researchers from the University of Hong Kong, Inria Nancy Grand Est in France, Nvidia Corporation in California and UT Southwestern Medical Center also participated.
Media Contact: LaKisha Ladson, UT Dallas, (972) 883-4183, email@example.com
November 30, 2013
Slaying dragon-kings could prevent financial crashes
20 November 2013 by Lisa Grossman
HUGO CAVALCANTE saw the disaster coming. From his lab at the Federal University of Paraíba in Brazil, he detected the warning signs of an epic crash. At the last minute, he managed to nudge his system back to safety. Crisis averted.
OK, so Cavalcante's impending crisis was only a pair of credit-card-sized circuits that were about to start oscillating out of sync – hardly the stuff of the evening news. But the experiment is the first to show that a class of extreme events, colourfully called dragon-kings, can be predicted and suppressed in a real, physical system. The feat suggests that some day we may also be able to predict, or in some cases prevent, some of the catastrophes in the real world that seem unstoppable, including financial crashes, brain seizures and storms.
"People were hoping if you could forecast extreme events, maybe we could find a way to control them," says Cavalcante's colleague Daniel Gauthier at Duke University in Durham, North Carolina. "We were able to completely suppress the dragon-king events."
Dragon-kings aren't the first animal used to describe a class of catastrophic events. In 2007, Nassim Taleb published a book called The Black Swan, his name for catastrophes that always catch us off-guard. But though difficult to predict, black swans actually fall within an accepted mathematical distribution known as a power law, under which small events vastly outnumber large ones (see diagram).
Most events or objects found in a complex system – including earthquakes, hurricanes, moon craters, even power imbalances in war – also obey a power law, a ubiquity that some say hints at a deeper organising principle at work in the universe. Others, like Taleb, focus on the fact that a power law can't predict when black swans will occur.
Now there's another beast to reckon with. In 2009, Didier Sornette at the Swiss Federal Institute of Technology in Zurich reported that some events lift their heads above the power law's parapet, the way a king's power and wealth vastly outstrip that of the more plentiful peasant. So big that they should be rare, these events have a greater probability of occurring than a power law would mandate.
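The contrast can be sketched numerically. In the toy sample below (illustrative only: a Pareto distribution with an arbitrary exponent, plus a hand-planted outlier), small events vastly outnumber large ones, while a dragon-king sits far above the largest event a power law would lead us to expect in a sample of this size:

```python
import random

random.seed(42)
alpha = 2.0   # power-law exponent: P(X > x) ~ x**(-alpha)
n = 10000

# inverse-CDF sampling of a Pareto distribution with x_min = 1
events = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

# the largest event a power law "expects" in n samples scales like n**(1/alpha)
typical_max = n ** (1 / alpha)   # ~100 here
dragon_king = 50 * typical_max   # an outlier no power law would mandate

small = sum(1 for x in events if x < 2)    # the plentiful "peasants"
large = sum(1 for x in events if x > 10)   # the rare big events
print(small, large, dragon_king)
```

The power law already allows rare big events (the black swans); the dragon-king is the one planted far beyond even those.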
"There seem to be certain extremes that happen much more often than they should if you just believe the power-law distribution predicted by their smaller siblings," Sornette says.
He christened them dragon-kings. The dragon part of the name stems from the fact that these events seem to obey different mathematical laws, just as a dragon's behaviour differs from that of the other animals.
Sornette got his first whiff of dragon-kings when studying cracks that develop in spacecraft. Since then, he has spotted them everywhere, from a rainstorm that hit Venezuela in 1999 and the financial crashes in 2000 and 2007, to some epileptic seizures.
But he wasn't satisfied with merely recognising dragon-kings. The fact that they don't follow a power law suggests they are being produced by a different mechanism, which raises the possibility that, unlike events that follow the power law, dragon-kings may be predictable.
He and his colleagues have had some success, predicting a slip in the Shanghai Stock Exchange before it happened in August 2009 and using a few electrical pulses to suppress seizures that might have become dragon-kings in rats and rabbits. But the difficulty of running controlled experiments in real financial systems or brains prevented them from going any further.
Enter Cavalcante and Gauthier's oscillating circuits. Gauthier spent the early 1990s studying pairs of identical circuits that behaved chaotically on their own, but would synchronise for long periods of time when coupled in a certain way. "It's a little bit politically incorrect, but it's sometimes called the 'master-slave' configuration," Gauthier says. He coupled the two circuits by measuring the difference between the voltages running through them, and injecting a current into the "slave" circuit to make it more like the "master". Most of the time this worked and the two would oscillate together like a pair of swinging pendulums, with only slight deviations away from synchronisation.
But every so often, the slave would stop following the master and march to its own beat for a short time, before getting back in step. Gauthier realised at the time that there were recognisable signs that this disconnect was about to happen. It wasn't until he saw Sornette's work that he checked for dragon-kings.
He and his colleagues have now shown that the differences in the circuits' voltages during these desynchronisations are indeed dragon-kings. "They were as big as the system would physically allow, like a major disaster," Gauthier says.
The pair went on to show that they could reliably forecast when a big event was about to happen: whenever the differences between the circuits' oscillations decreased to a certain value, a leap of dragon-king proportions was almost always imminent. And once they saw it coming, they found they could apply a small electrical nudge to the slave circuit to make sure it didn't tear away from its master (Physical Review Letters, doi.org/p44).
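A toy stand-in for that experiment (my own construction in software, not the authors’ circuit equations): two slightly mismatched chaotic logistic maps, weakly coupled in master-slave fashion. Left alone, the tracking error intermittently spikes to extreme values; monitoring the error and applying a small corrective nudge whenever it exceeds a threshold suppresses the extreme excursions:

```python
def run(steps=5000, coupling=0.2, nudge=False, threshold=0.05):
    x, y = 0.40, 0.41   # master / slave states
    worst = 0.0
    for _ in range(steps):
        fx = 3.90 * x * (1 - x)   # master map
        fy = 3.91 * y * (1 - y)   # slave map, slightly mismatched
        x = fx
        y = (1 - coupling) * fy + coupling * fx   # slave pulled toward master
        if nudge and abs(x - y) > threshold:
            y = x + 0.2 * (y - x)   # small corrective nudge toward sync
        worst = max(worst, abs(x - y))
    return worst

uncontrolled = run(nudge=False)   # large desynchronization events occur
controlled = run(nudge=True)      # excursions stay bounded near the threshold
print(uncontrolled, controlled)
```

The control acts only when the error starts to grow, the software analogue of killing the dragon-king "in the egg".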
"We basically kill the dragon-king in the egg," Sornette says. "The counter-mechanism kills it when it is burgeoning."
It's a long way to go from a pair of coupled circuits to the massive complexity of the real world. But by using this simple system to find out at what stage in the process a dragon-king can be prevented, Sornette hopes to see whether financial regulation could prevent a crash once a stock market bubble has already begun to grow, a controversial topic among regulators.
"The fear of central banks is that their intervention might actually worsen the situation and trigger the crashes, destabilising the system even further," he says. "That's the type of insight we could test and check and probe with our system."
Some physicists think the gap between so-called low dimensional systems like the pair of oscillators, which can be described by just three variables each, and real-world complex systems like the stock market, is too wide to bridge. "The conclusions of the paper appear correct and interesting for people studying low dimensional chaos," says Alfred Hubler of the University of Illinois at Urbana-Champaign. "But in the real world, low dimensional chaos is very rare. Most real-world complex systems have many interacting parts."
Others agree with Sornette that having a simple physical system to manipulate will be useful. "Having a mechanical system where you can explore it in the lab is crucially important," says Neil Johnson at the University of Miami in Coral Gables. He studies dragon-kings in simulations of stock markets and traffic jams and can't wait to start using a pair of oscillators to see how they relate.
Sornette thinks the circuits are just the beginning of a future in which we can monitor, diagnose, forecast and ultimately control our world. "I think we are on the verge of a revolution where we are going to be able to steer our planet better, informed by this kind of science." It's quite a promise – not all storms, seizures and crashes are dragon-kings, after all. But we now have a tool to explore how to deal with those that are.
This article appeared in print under the headline "Crashing market, hidden dragon?"
November 30, 2013
HIV virus spread and evolution studied through computer modeling
Scanning electron micrograph of HIV-1 budding (in green) from cultured lymphocytes.
The image has been colored to highlight important features.
Photo credit: C. Goldsmith, Centers for Disease Control and Prevention.
LOS ALAMOS, N.M., Nov. 19, 2013—Researchers at Los Alamos National Laboratory are investigating the complex relationships between the spread of the HIV virus in a population (epidemiology) and the actual, rapid evolution of the virus (phylogenetics) within each patient’s body.
“We have developed novel ways of estimating epidemic dynamics such as who infected whom, and the true population incidence of infection versus mere diagnosis dates,” said Thomas Leitner, principal investigator. “Obviously, knowledge about these things is important for public health monitoring, decision making and intervention campaigns, and further to forensic investigations.”
The team models the uninfected population with traditional differential equations on the computer; this is done for computational speed, because an agent-based component is much more demanding. Once a person is infected, that person becomes an “agent” in computer-modeling terms, and the model starts following his or her behavior individually, as well as the evolution of HIV within the person.
Agent-based modeling clarifies infection history
This new modeling approach distinguishes between susceptible and infected individuals to capture the full infection history, including contact tracing data for infected individuals. The uninfected individuals are modeled at a population level and stratified by transmission risk and social group. The social network in this model forms – and can change – during the simulation. Thus, the model is much more realistic than traditional models.
The advantage of this epidemiological model, Leitner said, is that “it allows us to simulate many possible outcomes of epidemics with known parameters of human interactions, where social networks form as part of the agent interactions. It is a flexible system that has the ability to describe realistic human populations.”
Within a National Institutes of Health-supported project “Reconstructing HIV Epidemics from HIV Phylogenetics,” the team has published 10 papers describing new mathematical models and results from real data addressing these issues. Most recently, they published a Nature correspondence on the limitations of HIV forensics and the need for better standards.
Who infected whom
A key question concerns the fundamental limits on inferring who infected whom, based on a concept known as the pre-transmission interval, which this group first described in 1999. Another paper, published in Epidemics, developed a new hybrid model to simulate and analyze the spread of HIV or other pathogens in a human population. The work also appeared in PLOS ONE, the Public Library of Science online journal.
As an example, the team modeled a Latvian HIV-1 epidemic and showed that injecting drug users fueled infections in the heterosexual population, thereby sustaining the overall epidemic. The researchers are now expanding this hybrid model to include HIV genetic evolution, which occurs in every infected individual.
The researchers have shown that in fast HIV epidemics – such as among individuals injecting themselves with drugs – HIV viral evolution is slow, resulting in little diversification at the population level. Meanwhile, slower-spreading epidemics display more HIV evolution over the same amount of time.
New field of phylodynamics evolves
Understanding HIV’s genetic evolution will soon allow investigations of how accurately researchers can reconstruct different epidemiological scenarios using pathogen genetic materials, an important and growing field called phylodynamics.
The team also has developed a new mathematical model that facilitates estimation of when a person was infected with HIV based on a previously used biomarker (BED IgG).
“This is important because most HIV-infected persons are not discovered shortly after infection; rather, they are often discovered long after, often years after infection,” said Leitner. “Thus, true incidence (that is, when infections actually occurred) cannot be estimated from diagnosis dates alone.”
Using Swedish surveillance data, the team has shown that the common assumption that infection occurred, on average, halfway between the last negative test and the first positive test is wrong. Instead, the actual infection time is strongly skewed toward the first positive sample.
This finding should have a large impact on the epidemiological models used worldwide by public health organizations, Leitner says. “Currently, we have further developed this model to also correct for unknown cases, such as infected people not yet discovered but who contribute to new infections and thereby to the true incidence of the disease.”
The team behind the insights
Researchers include Frederik Graw, Thomas Leitner, Ruy M. Ribeiro, and Helena Skar (Los Alamos National Laboratory) and Jan Albert (Karolinska Institute and Karolinska University Hospital). The National Institutes of Health funded the research.
November 30, 2013
A celebration of a Persian mystic leads to better understanding of dynamics
Virginia Tech engineering science and mechanics faculty member James Hanna
and his colleagues were inspired by art to develop mathematical
equations to improve the understanding of dynamics.
Blacksburg, VA , November 27, 2013
Virginia Tech College of Engineering
James Hanna likes to have fun with his engineering views of physics.
So when he and his colleague Jemal Guven visited their friend Martin Michael Müller in France on a rainy, dreary day, the three intellects decided to stay in. Guven, absent-mindedly switching between channels on the television, stumbled upon a documentary on whirling dervishes, best described as a Sufi religious order, who commemorate the teachings of 13th century Persian mystic and poet Rumi through spinning at a fixed speed in their floor length skirts.
“Their skirts showed these very striking, long-lived patterns,” Hanna, the engineer, recalled.
The film caused physicists Guven and Müller to think about structures with conical symmetry, or those shapes that can be defined as a series of straight lines emanating from a single point. By contrast, Hanna, the engineer with a physicist’s background, thought about rotating flexible structures, namely strings or sheets.
The three put their thoughts together, and the whirling dervishes became the inspiration for a technical paper appearing in the Nov. 27 issue of the New Journal of Physics, published by the Institute of Physics and the German Physical Society. They gave their work the slightly bemusing title of “Whirling skirts and rotating cones.”
The three authors reflect an international collaboration. Hanna is on the faculty of Virginia Tech’s Department of Engineering Science and Mechanics, while Guven hails from the nuclear sciences institute of the National Autonomous University of Mexico, and Müller is part of the BioPhysStat research group at the University of Lorraine, France.
In terms of impact, their work widens scientists’ understanding of the dynamics of flexible objects and of pattern formation in rotating systems. It may also “shed some light on the previously known instabilities of turbine disks and hard disks,” Hanna said.
“Rotating systems often break axisymmetry. Were it not so, there would be no hurricanes,” Hanna explained. “There are steadier examples. The skirts of the whirling dervish carry cusped wave patterns which seem to defy gravity and common sense.”
Watching the swirling skirts, the three linked their patterns to the presence of the Coriolis force, the apparent force that deflects moving objects in a rotating frame such as the Earth. They realized the Coriolis force was essential to the creation of the different patterns, and were subsequently able to develop a set of mathematical equations that govern how fixed or free-flowing cone-shaped structures behave when rotating.
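The force they invoked has a compact textbook form: in a frame rotating with angular velocity Ω, a parcel moving with velocity v feels a Coriolis acceleration a = −2 Ω × v. A minimal cross-product sketch (generic physics, not the paper’s cone equations):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def coriolis(omega, v):
    """Coriolis acceleration a = -2 * Omega x v in the rotating frame."""
    ax, ay, az = cross(omega, v)
    return (-2 * ax, -2 * ay, -2 * az)

# skirt-like setup: rotation about the vertical z-axis at 1 rad/s,
# material moving radially outward along x at 1 m/s
omega = (0.0, 0.0, 1.0)
v = (1.0, 0.0, 0.0)
print(coriolis(omega, v))   # deflection sideways, along -y
```

Material moving outward in the spinning skirt is pushed sideways, which is what seeds the azimuthal wave patterns the equations describe.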
According to the Institute of Physics, the simple equations the three developed were able to reproduce the sharp peaks and gentle troughs that appear along the flowing surface of the dervishes’ skirts.
The three are presenting their findings this week at the 66th Annual Meeting of the American Physical Society’s Division of Fluid Dynamics.
Hanna joined Virginia Tech in the fall of 2013 after working for three and a half years as a postdoctoral research associate in the Department of Physics at the University of Massachusetts, Amherst. He has focused on the dynamics of strings and other flexible bodies, and the design of thin sheet structures by inhomogeneous swelling and buckling.
November 30, 2013
Peering into the future: How cities grow
Migration patterns into and out of cities are the result of millions of individual decisions, which in turn are affected by thousands of factors like economics, location, politics, security, aesthetics, sentiments and others. However, it is becoming more and more critical for urban planners to be able to predict the rate of growth of a given city in order to better address future issues that arise from increasing urbanization. Publishing in the Journal of the Royal Society Interface, researchers from EPFL have discovered a law that could prove fundamental in forecasting the growth of cities.
Cities are often regarded as living organisms, bustling with life and activity. This metaphorical image has a real basis, as migration in and out of cities is a constant, dynamic force. But as cities boom across the world, it is becoming necessary to determine patterns in urban migration. This would allow the prediction of a city’s growth in the future, resulting in better planning for administration, design, logistics and finance. The problem is that human migration over time and space is motivated by a vast range of individual decisions, giving rise to enormously complex data that are not easy to analyze.
First, the way a city will grow in the future depends heavily on how it grew in the past. Although this seems intuitive, it is not always true, since city growth depends on a multitude of factors. By using their model, the team found that it is possible to use past data to predict a city’s rate of growth, as long as that city has at least 10,000 inhabitants. They also found that this strong relationship between past and future lasts for approximately 15 years, during which it runs at a constant rate. What this means is that predictions of city growth require at least 15 years’ worth of population data for best accuracy.
Second, a city’s growth is strongly influenced by how neighboring cities grow. This is especially true when neighboring cities are within 80 km; as the distance increases, the influence of neighboring cities weakens. This means that predictions of city growth would be challenging for cities like Las Vegas, or Perth in Australia, which are large but relatively remote. The relationship between city growth and distance from neighbors means that geography is an important factor when forecasting the future urban sprawl of a particular city.
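The two dependences described above suggest a simple functional form. As a sketch only (my assumed weighting, not EPFL’s fitted law), a city’s predicted growth can be written as its own persistent trend plus a pull toward its neighbors’ growth that decays exponentially on an ~80 km scale:

```python
import math

def predicted_growth(own_past_rate, neighbors, decay_km=80.0, k=0.3):
    """Own persistent trend plus a distance-weighted pull toward neighbors.
    neighbors: list of (distance_km, growth_rate) pairs.
    decay_km and k are illustrative parameters, not fitted values."""
    pull = sum(math.exp(-d / decay_km) * (g - own_past_rate)
               for d, g in neighbors)
    return own_past_rate + k * pull

# a well-connected city: strong nearby growth pulls its forecast upward
connected = predicted_growth(0.02, [(20, 0.05), (60, 0.03), (150, 0.01)])

# a remote city (Las Vegas / Perth style): neighbors too far away to matter
remote = predicted_growth(0.02, [(400, 0.05), (600, 0.03)])
print(connected, remote)
```

The remote city’s forecast stays essentially at its own past rate, which is why forecasts for isolated cities lean entirely on their own history.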
November 30, 2013
A brief exploration of Infinity
Sunday, Dec 1, 2013, 10:25 IST | Agency: DNA
Dr Mayank Vahia examines the various facets of infinity from mathematical, physical and philosophical perspectives, and looks at how ancient India saw this fascinating concept.
For most of us, infinity conjures up a number line, or a very large number that can always be made larger by adding to it. To a mathematician, infinity is the endless extent of numbers. For others it is endless space and fields, or even the universe; yet others may think of the time since something happened, or the long wait for something yet to happen. So when infinity is mentioned, it conjures up three images: endless numbers, endless space and endless time. There are interesting differences between the three.
Dr Mayank Vahia is a scientist who has worked at the Tata Institute of Fundamental Research since 1979. His main fields of interest are high-energy astrophysics, mainly cosmic rays, X-rays and gamma rays. He is currently working in archaeoastronomy, studying how our ancestors saw the stars and thereby developed intellectually. He has, in particular, been working on the Indus Valley Civilisation and taking a deeper look at its script.
November 30, 2013
To What Extent Do We See With Mathematics?
When I first became fascinated with mathematics’ tightly knit abstract structures, its prominence in physics and engineering reassured me. Mathematics’ indisputable value in science made it clear that my preoccupation with its intangible expressions was not pathological. The captivating creative activity of doing mathematics has real consequences.
During my graduate school years, I began to consider that the appearance of reality actually depended on the kind of mathematics we use to see it. Was it possible that the use of mathematical ideas, like a lens, could bring some aspects of the world into sharp focus while blurring others to the point of invisibility? A new mathematics, whose development is being led by author and theoretical physicist David Deutsch, may actually highlight what mathematics can do to help us “see” our reality, and maybe even tell us something about how the process works. Deutsch is best known for his pioneering work on the quantum theory of computation, where some of the more mysterious quantum phenomena are harnessed to dramatically enhance computation. While his new mathematics is related to the quantum theory of computation, it is also distinct from it. He calls the new mathematics constructor theory: a theory designed to tell us, in the most general sense, what is and is not possible in the physical world.
Deutsch has discussed constructor theory before. An unexpected early success of the theory, he has said, has provided a new foundation for information theory. Information theory involves the quantification of information. But perhaps most relevant to this discussion is that information theory equates abstract things such as words, coded data and algorithms with physical things such as electric signals, chemical exchanges and molecular coding. Since they are all information, the employment of information theory is transdisciplinary. Just a few of the disciplines included in its range of application are physics, electrical engineering, linguistics and neurobiology. The processing of information, expressed in the formalism of mathematics, captures the action of many kinds of systems.
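The quantification the passage mentions has a standard starting point, Shannon entropy, which measures information in bits regardless of the physical carrier. A minimal sketch (the textbook formula, nothing specific to constructor theory):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_coin = entropy_bits([0.5, 0.5])   # 1 bit: maximally uncertain
biased = entropy_bits([0.9, 0.1])      # less: the outcome is predictable
dna_base = entropy_bits([0.25] * 4)    # 2 bits per equiprobable DNA base
print(fair_coin, biased, dna_base)
```

The same number applies whether the carrier is a voltage, a printed word or a nucleotide, which is exactly the substrate-independence the article goes on to discuss.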
The physical nature of information is one of the keys to constructor theory. “There is no such thing as an abstract computer,” Deutsch explains. Information is “instantiated in radically different physical objects that obey different laws of physics.” In other words, information becomes represented by an instance, or an occurrence, like the attribute of a creature determined by the information in its DNA. In Deutsch’s October 2012 presentation for an online discussion group, he makes the point more personally.
The “information” is the only thing that remains unchanged in each of these transformations, and this is the focus of constructor theory. Constructor theory is meant to get at what Deutsch calls this “substrate independence of information,” which necessarily involves a more fundamental level of physics than particles, waves and space-time. And he suspects that this more fundamental level may be shared by all physical systems.
Information and knowledge are, for Deutsch, part of the stuff of physical life. In his book, The Beginning of Infinity, Deutsch compares and contrasts human brains and DNA molecules. “Among other things,” he says, they are each “general-purpose information-storage media…” This information he calls knowledge, and he aligns biological information with explanatory information when he says that knowledge “is very unlikely to come into existence other than through the error-correcting process of evolution or thought.” The non-explanatory nature of biological knowledge is distinguished only by its limits, affecting largely only the physical attributes of creatures and their immediate experience. The explanatory nature of human knowledge, however, allows us access to things far beyond our immediate experience.
Constructor theory begins with the understanding that every physical system can be understood in terms of transformations, in which one physical system is changed by the action of another. The object causing the transformation is called the constructor. But a key defining characteristic of a constructor, as Deutsch’s colleague Chiara Marletto explained to me, is that in the process the constructor “remains unchanged in its ability to cause the transformation again.” In a philosophical paper on the subject,
Deutsch gives the heat engine in thermodynamics as an example of a constructor. Continually converting thermal energy into mechanical energy, or heat into work, it operates cyclically, returning to its initial state at the end of each cycle, able to repeat the process over and over again. In this configuration, the constructor together with its input and output states jointly describe an isolated system.
But, Marletto also explained, the constructor itself, the thing that causes a transformation, is abstracted away in constructor theory, leaving only the input/output states.
These input/output states are expressed as “ordered pairs of states” and are called construction tasks. The idea is no doubt a distant cousin of the ordered pairs of numbers we learned about in algebra. The composition of tasks, or networks of tasks, is also defined, to account for the interaction of more than one system. One of the challenges of building the theory is defining the relationships among tasks (or the algebra of tasks), so that they make sense with respect to one another and are able to accurately express known physical laws. Constructor theory’s own laws, which are actually “laws about laws,” have been given the name principles. Principles describe the constraints on other laws rather than the behavior of physical objects directly. “We guess principles,” Marletto told me, “and then build a mathematical structure to be consistent with that conjecture.” The algebra must make sense within itself, be able to express known physical laws, and be able to define the laws (or principles) of constructor theory.
The basic principle of the theory is that subsidiary theories, or all physical theories compatible with constructor theory, must be expressible entirely in terms of statements about which tasks are possible, which are not, and why. “If you have this theory of what is possible and what is impossible,” Deutsch says, “it implicitly tells you what all the laws of physics are.” In a constructor-theoretic description of the physical world “what actually happens is seen as an emergent consequence of what could happen, rather than vice-versa.”
Although it resembles other abstract algebras, the algebra of constructor theory is new. But Marletto and I agreed that it rests on very familiar, fundamental notions: ordered pairs, one-to-oneness, inputs and outputs, compositions and the very idea of a mathematical transformation. There is a thread that leads from constructor theory algebra back to the algebra that most of us learn in high school or college. But the hopes for this algebra are far-reaching. Constructor theory principles, together with the laws of subsidiary theories that are compatible with these principles, are expected to produce new laws which have no equivalent in existing theories.
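The “ordered pairs of states” idea is concrete enough to sketch in code. The snippet below is my own toy illustration, not Deutsch and Marletto's formalism: a task is a set of input/output pairs, sequential composition chains tasks whose states match, and the heat-engine cycle described earlier comes back to where it started.

```python
from typing import FrozenSet, Tuple

# Toy model (an illustration only, not the actual constructor-theoretic
# algebra): a construction task is a set of ordered (input, output) pairs.
State = str
Task = FrozenSet[Tuple[State, State]]

def task(*pairs: Tuple[State, State]) -> Task:
    return frozenset(pairs)

def compose(a: Task, b: Task) -> Task:
    """Sequential composition: perform task `a`, then `b` on a's outputs."""
    return frozenset((x, z) for (x, y) in a for (y2, z) in b if y == y2)

# The heat-engine example: thermal energy to mechanical work, then a
# reset back to the initial state, ready to repeat the cycle.
heat_to_work = task(("thermal", "mechanical"))
reset = task(("mechanical", "thermal"))

cycle = compose(heat_to_work, reset)
print(cycle)  # frozenset({('thermal', 'thermal')}): back where it started
```

Note that the constructor itself never appears in the code, only the input/output pairs, which mirrors the abstraction Marletto describes.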
Image provided by Chiara Marletto
I am intrigued by the fact that, after centuries of observation and skillful experiment, and after centuries of analysis and mathematical insight, this algebraic, information-driven mathematical structure may be able to reach the depth necessary to produce new physical theory, and one that is not expressible within the framework of current theories. The algebra is not designed to systematize current theories, but rather to find their foundation and then open a window onto things that we have not yet seen.
So where does the relationship between physical reality and mathematical reality actually lie? Perhaps it lies in mathematics’ relationship to cognition itself. Cognitive processes are themselves transformations—of sensory data into the perceived attributes of the things in our experience, or of associated experiences into meaningful narratives or into symbol. Fundamental cognitive processes work in mathematical ways. Neural cells in the visual brain, for example, are specialized to respond to abstractions like verticality. Distance is discerned by the measurement of discrepancies between the view of each of our two eyes, and probabilities are calculated when the brain brings meaning to new sensory data using past experience. Perhaps mathematics, as a thoughtful activity, is a continuation of what the body is already doing—what it accomplishes outside of our awareness. Then mathematics works to shape our experience of deeper realities, as if the body is working to see more, or to see differently. And this would support the confidence that Deutsch expresses near the end of his presentation, that the work of science can bring us closer to what is really there.
Since the early part of the 19th century, mathematicians and physicists alike have pondered the relationship between mathematics and physics. Over the course of the intervening years questions about their relationship have broadened and given rise to new questions. Mathematician and computer scientist Gregory Chaitin, for example, has recently introduced a mathematical or, more precisely, a software account of biology in his book, Proving Darwin: Making Biology Mathematical. Once we discovered software, Chaitin has said, we were able to see that it is everywhere around us. DNA, he explains, is “a universal programming language found in every cell,” and, consequently, biology is about algorithmic information. Chaitin has begun the construction of a mathematical biology, where genetic history can be described algorithmically. In his working models, there is no difference between mathematical creativity and the biological creativity of evolution.
The proposals from both Deutsch and Chaitin share the view that mathematics is integral to biological processes. Perhaps they each represent an evolution of thought in science, where the distinction between what was once considered mechanical action and thoughtful action becomes less clear and where thoughtful action is understood as part of the life of the universe.
November 30, 2013
A Mathematical Approach to Physical Problems: An Interview with Rupert Frank
What do you work on?
I work in this area called mathematical physics. It involves taking things that we see and observe in nature and trying to explain them mathematically from first principles. In mathematics, people often say that they're doing algebra or geometry or something, where they are talking about the methods they are using. However, for us it's more that we use whatever methods we need in order to understand a concrete problem. It's much more problem-specific.
For example, one thing that we still cannot explain—that we are actually really far from being able to explain—is the emergence of periodic structures; that is, structures that repeat themselves. It's clear in nature that it does happen. We see crystals, for example. But we still have no idea why this happens. It's embarrassing really.
So how do you approach a problem like that?
We like to start, for example, with the rules of quantum mechanics—some axioms, which describe the state and the energy of a system. From there, we would like to see that periodic structures can emerge on a macroscopic scale.
Sometimes we work with smaller dimensions—one-dimensional or two-dimensional models, not three dimensional, as nature is. Or we work with discrete models where you assume that all objects can only sit at discrete sites; they cannot move continuously through space. There is a hope that by working with such models, one can reveal more about the overall system.
What problems are you currently addressing?
An important aspect of my work is symmetry and symmetry breaking. Periodicity is a particular case of symmetry.
A problem that I'm always working on is how to explain superconductivity. Superconductivity is a quantum phenomenon that happens on a macroscopic scale, meaning that I can observe it with my bare eyes. [The phenomenon involves the electrical resistance of certain metals and ceramics dropping to zero when cooled below a particular critical temperature. This means such materials can conduct electricity for longer periods, more efficiently. They also repel magnetic fields.] But I cannot explain it with ordinary classical mechanics; I need quantum mechanics. So again, the point is how do we come up with a theory for superconductivity on a macroscopic scale from a microscopic model using the laws of quantum mechanics? And that has been understood, I would say, on a physical level, and there are models that work numerically very well, but mathematically it has not been clarified.
How would you say the discipline of mathematical physics informs both mathematics and physics?
Well, mathematics and physics have always been interrelated, and a lot of mathematics has been developed while trying to solve physical problems. I think physics, from a mathematics perspective, leads to interesting mathematical problems. You are trying to prove something, and it's typically related to some optimization problem—where you want to minimize energy costs or something. So it gives you a way of thinking.
In terms of the benefit to physics, I think we can sometimes provide a different perspective. Physicists typically speak about what they consider to be typical cases within a model, whereas in mathematics, one usually works on the negative side—trying to exclude the atypical. So from time to time, we come up with problems that really require physical explanation that has not been there before.
How did you originally become interested in mathematics and physics?
Actually, both my mother and my father are mathematicians, and one of my brothers is a mathematician; the other is a computer scientist. So it was around when I was growing up, that's for sure. By my third year of university studies, I knew which field of mathematics I wanted to focus on. It can be called functional analysis, operator theory, or mathematical physics. And I saw that all of this was intrinsically related to quantum mechanics. To a certain extent, this field of mathematics was created to explain quantum mechanics. So it was clear that I had to go into physics.
Why did you decide to come to Caltech?
Well, it's a very nice place, and it's a smaller place. That gives you a lot of opportunities because you're not only one of the many. Everybody expects you to do something, and they help you to do it. That's something that I really appreciate.
Written by Kimm Fesenmaier
November 17, 2013
Friedberg Named Fellow of American Mathematical Society
Sol Friedberg. Photo by Lee Pellegrini
By ED HAYWARD | CHRONICLE STAFF
Published: Nov. 13, 2013
James P. McIntyre Professor of Mathematics Solomon Friedberg has been named a fellow of the American Mathematical Society, one of a select group of mathematicians who make up only the second class of fellows selected by the 125-year-old organization.
Friedberg — who is on sabbatical this year as chair of Mathematics — is one of 50 mathematicians from around the world selected for their “outstanding contributions to the creation, exposition, advancement, communication, and utilization of mathematics,” according to the society.
In particular, the organization praised Friedberg for his scholarly contributions to number theory, representation theory and automorphic forms, an analytical approach to complex mathematical problems. He was also recognized for his work to establish BC’s PhD program in mathematics three years ago.
College and Graduate School of Arts and Sciences Dean David Quigley said Friedberg has not only sought to advance mathematical study on campus, but also worked extensively to improve math education in Massachusetts’ K-12 schools.
“Sol Friedberg has distinguished himself as an international leader in number theory while closer to home he has been a successful chair of the Mathematics Department and he’s committed to improving K-12 math education across the Commonwealth,” Quigley said. “I’m very happy to learn of his appointment as Fellow of the American Mathematical Society.”
Friedberg’s research has been supported by the National Security Agency and he is currently a co-principal investigator on a six-year National Science Foundation grant to support math teachers in high-need schools.
A member of the board of directors of Math for America Boston, which supports K-12 and college-level math teachers, Friedberg has also advised the Massachusetts Department of Elementary and Secondary Education on mathematics and science education. He is one of the organizers of the BC-MIT Number Theory Seminar.
Regarding the new fellows, AMS President David A. Vogan Jr. said, “It is always a pleasure to learn about the work of other mathematicians. Each of us knows many amazing individuals; the Fellows program is a chance to shout from the rooftops.”
To learn about this year’s American Mathematical Society fellows, see www.ams.org/profession/ams-fellows.
November 17, 2013
Mathematical analysis helps untangle bacterial chromosomes
This image is a mathematical representation of a replication link. The replication of circular chromosomes in E. coli bacteria produces these interlinked DNA circles. The arrowheads indicate the recombination sites and the ball represents the enzymes responsible for separating the DNA circles. (Image prepared by Rob Scharein.)
When an E. coli cell divides, it must replicate its circular chromosome and pull the resulting circles apart to take up residence in two new cells. It sounds easy enough -- like a magician's trick with rings -- but actually involves a complicated process of unknotting and unlinking of tangled DNA.
In a new study, published online this week in the journal Proceedings of the National Academy of Sciences, SF State Associate Professor of Mathematics Mariel Vazquez and an international team of scientists offer a mathematical analysis of how these chromosomal rings are unlinked by XerCD recombination enzymes.
Antibiotics like ciprofloxacin, prescribed for E. coli infections, target topoisomerases, another type of enzyme involved in DNA unlinking. When treated with these drugs, bacterial cells may find other modes of unlinking like the one presented in Vazquez' study, thus giving the cells a chance for survival. Understanding this unlinking process in E. coli, Vazquez noted, "could also lead to the design of better antibacterial drugs, with a clear positive effect on human health."
Infections by pathogenic E. coli and other bacteria pose a high risk to human health. According to the Centers for Disease Control and Prevention, each year in the United States at least 2 million people become infected with bacteria that are resistant to antibiotics. At least 23,000 people die each year as a direct result of these infections. In order to understand bacterial infections, it is essential to study how cells such as E. coli divide.
Biological experiments had given Vazquez and her colleagues some clues as to how the interlinked E. coli chromosomes separate prior to cell division. But the experiments could not provide a clear picture of the steps along the way to separation.
To fill in this picture, the researchers proposed a rigorous mathematical analysis that used the tangle method to model the changes that take place during the separation. In this case, the "tangle" represents two specific sites along the chromosome bound together by the recombination enzymes. They confirmed that the separation takes place in a stepwise fashion. Chromosomes interlinked after replication are converted into knots, then links again, then knots, until two free circles remain.
The researchers mention that further biological experiments can help justify the assumptions in the mathematical model, but acknowledge that those experiments would be extremely challenging to carry out. "In their absence, the mathematical analysis makes a clear-cut advance over previous biological studies," Vazquez said.
Vazquez stressed that mathematics, physics, computer science and statistics all have a role to play alongside biology in understanding DNA topology.
"It is important for people to know that DNA is not just a sequence of letters. It is a very long molecule that can adopt a complex three-dimensional structure when packaged inside a cell nucleus," she said. "Every biological process that involves DNA will be affected by its topology, and topological changes can have important biological implications."
In 2011, Vazquez was awarded a National Science Foundation CAREER grant to carry out DNA topology studies. As part of the grant, Vazquez works with local elementary schools in the San Francisco Math Circles program. In 2012, she received the Presidential Early Career Award for Scientists and Engineers (PECASE) for her work.
The study "FtsK-dependent XerCD-dif recombination unlinks replication catenanes in a stepwise manner," was published in the November 11, 2013, issue of the journal Proceedings of the National Academy of Sciences.
SF State is the only master's-level public university serving the counties of San Francisco, San Mateo and Marin. The University enrolls nearly 30,000 students each year and offers nationally acclaimed programs in a range of fields -- from creative writing, cinema and biology to history, broadcast and electronic communication arts, theatre arts and ethnic studies. The University's more than 219,000 graduates have contributed to the economic, cultural and civic fabric of San Francisco and beyond.
November 17, 2013
A vexing math problem finds an elegant solution
A famous math problem that has vexed mathematicians for decades has met an elegant solution by Cornell researchers. Graduate student Yash Lodha, working with Justin Moore, professor of mathematics, has described a geometric solution for the von Neumann-Day problem, first described by mathematician John von Neumann in 1929.
Lodha presented his solution at the London Mathematical Society’s Geometric and Cohomological Group Theory symposium in August, and has submitted the work to a journal. “People were very excited by this,” Lodha said. “[The solution] is natural and compelling enough to study for its own sake.”
Lodha works in the field of geometric group theory. A group is a mathematical construct that describes the notion of the symmetries of an object, whether it’s a physical object or a theoretical space. For example, a polygon has rotational as well as reflectional symmetries, all of which, together with the operation of composition, form what’s called a finite group, because the polygon can be described as a finite sequence of operations that reflect its symmetries.
Formally, a group can be described as words in an alphabet together with a set of rules that are called “relations.” Group theorists, Lodha said, are like biologists who classify species; mathematicians try to categorize groups that have properties A, B or C – but is there one that has A but not C?
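The polygon example makes the definition concrete. The sketch below is my own illustration, unrelated to Lodha's construction: it encodes each symmetry of a square as a permutation of its corner labels and closes two generators, a quarter turn and a reflection, under composition.

```python
from itertools import product

def compose(p, q):
    """Composition of permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

identity = (0, 1, 2, 3)
rotate = (1, 2, 3, 0)   # quarter-turn rotation of the square's corners
reflect = (3, 2, 1, 0)  # a reflection of the square

# Close the generating set under composition.
group = {identity, rotate, reflect}
changed = True
while changed:
    changed = False
    for p, q in list(product(group, repeat=2)):
        r = compose(p, q)
        if r not in group:
            group.add(r)
            changed = True

print(len(group))  # 8: four rotations and four reflections (dihedral group)
```

Here the two generators play the role of the "alphabet"; the relations are the identities the permutations happen to satisfy, such as four quarter turns returning the square to where it began.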
The inspiration for Lodha’s work originated in the early 20th century, when mathematicians first proved that a ball that exists in three-dimensional space can be chopped into a finite number of pieces – “like tearing up a piece of paper without stretching or squeezing,” Lodha explained – and can be reassembled, like a jigsaw puzzle, into two balls, each the size of the original ball. This is known as the Banach-Tarski paradox.
von Neumann, in studying this paradox, was the first to describe the reason behind it: He attributed it not to the geometry of 3-D space, but to the algebraic properties of the symmetries inherent to the sphere. He was the first to isolate this property, which mathematicians today call “non-amenability.”
von Neumann further observed that if a group contains free groups, which are groups that have a finite alphabet and no rules, then it must be non-amenable. He posed the question of whether the opposite is true – are there groups that do not contain free groups and are also non-amenable? The problem, later popularized by M.M. Day, waited another 40 years before mathematician Alexander Olshanskii cracked it, although Olshanskii’s group had an infinite set of rules.
Another two decades went by before Olshanskii and Mark Sapir supplied another solution in response to the von Neumann-Day problem. This time, their example was governed by a finite, but astronomically large set of rules – close to 10^200. It also lacked a natural geometric model. So mathematicians probed further for a group with a finite set of rules that is non-amenable and does not contain free groups.
For the first time, Lodha describes a group that has only nine rules, a natural geometric model, is non-amenable and does not contain free groups.
Advances in mathematics are almost always incremental and build upon previous work, Lodha said. To complete this work, among his most valuable insights was one first described by the late Bill Thurston, Fields medalist and Cornell’s Jacob Gould Schurman Professor of Mathematics, which involved a way of expressing the group in a different light, as a “continued fractions model.”
Lodha’s work also builds heavily on work by Nicolas Monod, who constructed a geometrically oriented, but not finitely presented, counterexample to the von Neumann-Day problem. Lodha and Moore’s contribution was to isolate a finitely presented subgroup, with only nine relations, of Monod’s example.
Further work on the group, which doesn’t yet have a name, could make the solution to the von Neumann-Day problem even stronger: by isolating stronger finiteness conditions for proving that the group has a finite number of rules.
The research was supported by the National Science Foundation.
November 17, 2013
Mathematical modelling provides insights into the origins and evolution of folk tales
This image shows a maximum clade credibility tree returned by the Bayesian phylogenetic analysis of the tales. Major groupings are labelled by region and/or ATU international type and indicated by the coloured nodes.
Numbers beside the edges represent the percentage of trees in the Bayesian posterior distribution of trees in which a given node occurred. The scale bar indicates the average number of changes per character along a given edge.
Credit: Tehrani JJ (2013) The Phylogeny of Little Red Riding Hood. PLoS ONE 8(11): e78871. doi:10.1371/journal.pone.0078871
New insights into the origins and development of folk tales such as Little Red Riding Hood are being provided by the application of scientific analysis more commonly used by biologists to produce an evolutionary tree of life diagram.
In the scientific journal, PLOS ONE, published today, Dr Jamie Tehrani, an anthropologist at Durham University, England, resolves a long-running debate by demonstrating that Little Red Riding Hood shares a common but ancient root with another popular international folk tale The Wolf and the Kids, although the two are now distinct stories.
"This is rather like a biologist showing that humans and other apes share a common ancestor but have evolved into distinct species," explained Dr Tehrani, who found that The Wolf and the Kids probably originated in the 1st century AD, with Little Red Riding Hood branching off 1,000 years later.
The Wolf and the Kids, popular in Europe and the Middle East, is a story about a wolf who impersonates a nanny goat and devours her kids, whereas Little Red Riding Hood is about a wolf who devours a young girl after impersonating her grandmother. Variants of the story are common in Africa and Asia, for example, The Tiger Grandmother in Japan, China and Korea.
Little Red Riding Hood was told by the Brothers Grimm 200 years ago but that version was based on an earlier, 17th century, story written by the Frenchman Charles Perrault, which itself derived from an older, oral tradition of storytelling in France, Austria and northern Italy.
Dr Tehrani subjected 58 variants of the folk tales to phylogenetic analysis, a method more commonly used by biologists for grouping together closely-related organisms to form a tree of life diagram, mapping out the various branches of evolution from the earliest life forms.
The analysis focused on 72 plot variables, such as the character of the protagonist (for example male or female, single child or group of siblings); the character of the villain (wolf, ogre, tiger or other creature), the tricks used by the villain to deceive the victim and whether the victim is eaten, escapes or is rescued.
Phylogenetics involves a mathematical modelling process that compares similarities between the plot variables and scores them according to the probability that they have the same origin. This enables a tree to be constructed showing the most likely paths, or branches, of the evolution of the story.
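In miniature, the scoring step works something like the sketch below. The trait vectors here are invented for illustration, and real phylogenetic inference (such as the Bayesian analysis Tehrani used) does far more than compute raw distances, but the underlying comparison is of this kind:

```python
# Hypothetical binary plot variables (invented, not Tehrani's data):
# e.g. "victim is a child", "villain is a wolf", "victim is eaten", ...
tales = {
    "RedRidingHood_EU": (1, 0, 1, 1, 0),
    "WolfAndKids_ME":   (0, 1, 1, 0, 0),
    "TigerGrandma_EA":  (1, 0, 1, 0, 1),
}

def distance(a, b):
    """Hamming distance: how many plot variables two tales differ on."""
    return sum(x != y for x, y in zip(a, b))

# Pairwise distances; smaller numbers suggest a closer relationship.
names = sorted(tales)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        print(n1, n2, distance(tales[n1], tales[n2]))
```

A tree-building method then groups together the tales whose traits are most likely to share a common origin.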
November 17, 2013
An intersection of math and biology: Clams and snails inspire robotic diggers and crawlers
November 11, 2013
Philadelphia, PA—Engineering has always taken cues from biology. Natural organisms and systems have done well at evolving to perform tasks and achieve objectives within the limits set by nature and physics.
That is one of the reasons Anette Hosoi, professor of mechanical engineering at the Massachusetts Institute of Technology, studies snails. Snails can move in any direction—horizontally, vertically, and upside down—on various surfaces, be it sand, shells, tree bark or slick walls and smooth glass. One of the reasons for this is the sticky substance on their underbellies, which acts as a powerful lubricant and reduces friction during movement.
By studying and adapting the biological properties of the snail to robotic devices, Hosoi’s group has been able to create a “RoboSnail,” which can climb walls and stick to overhead surfaces much like its living counterpart. Such a device can have potential uses in invasive surgery and oil well drilling, among other applications.
Another organism of interest to Hosoi is the razor clam, which has an amazing ability to dig and wedge itself; it can burrow up to 30 inches in the sand. Hosoi’s “RoboClam” has been developed with the intention of understanding the organism’s behavior and mechanics as well as to explore the possibility of automated digging devices that use less energy than current technology and equipment.
The researchers found that while digging, the clam's up-and-down movement accompanied by opening and closing of its shell turns sand into the consistency of liquid quicksand. This in turn allows the clam to move quickly through the sand. Like its living counterpart, the RoboClam vibrates, changing the solid seabed into fluid and allowing a worm-like foot to push down.
Clam-inspired robotic diggers could find use as automatic tethers and lightweight low-cost anchoring devices for small robotic submarines and even large ships and oil platforms. Devices that burrow into the seabed could also potentially be used as detonators for underwater mines.
Hosoi is not alone in looking to biology to instruct robotics development. Engineers around the world are turning to natural organisms like insects, fish and turtles to inspire the design of robots capable of performing specific tasks that automated devices have traditionally been unable to achieve. Mimicking natural organisms can also aid in improving the efficiency of many applications that are energetically expensive, since biological entities perform the same tasks with much higher efficiency.
It is important to not only copy the animals, but also to understand the biology of their mechanisms in order to take away the key features that allow them to do what they do. These types of biomechanical studies have led to a mutually beneficial partnership between mathematicians and biologists. As biology becomes more and more quantified, biologists can offer mathematical scientists a goldmine of data. Mathematicians, in turn, can employ the tools of engineering and computation to analyze this data and offer new insights into the way animals move.
November 17, 2013
Flipping pancakes with mathematics
Jacob Goodman, who likes maths with his pancakes
Tomorrow marks the 80th birthday of Jacob E Goodman, a mathematician at the City College of New York. If Professor Goodman's name does not ring any bells, then maybe you know him better by his pseudonym, Harry Dweighter. No? Confused? Wondering why a mathematician needs to disguise his identity? Allow me to explain.
In around 1975, Goodman was at home folding towels for his wife. The final pile was somewhat messy, so he decided to restack the folded towels in order of size, smallest folded towel at the top, biggest at the bottom. The problem was that there was no room for a second pile, so he was forced to flip over the top few towels, reassess the situation, then flip over a few more from the top, and so on.
He recalls how a curious problem crossed his mind: "How many flips would I need in the worst case? I thought it was interesting enough to send to the American Mathematical Monthly, but a more 'natural' setting seemed to be pancakes."
Thus the so-called pancake sorting problem was born. How many flips are required to turn a disordered stack of pancakes into an ordered stack?
Goodman was still building his reputation as a mathematician and did not want others to think that he was only interested in trivial pancake puzzles, so he adopted a false identity: "It was easy enough to come up with the jocular pseudonym Harry Dweighter ("harried waiter"), but what if the American Mathematical Monthly wanted to contact me in connection with publishing the problem? I told the secretaries at the department that any calls for Harry Dweighter should be diverted to me. I decided that if I ever had to call anyone back at the Monthly, I would pronounce Dweighter as Dwight-er, so as to deflect suspicion."
This is how the problem was posed: The chef in our place is sloppy, and when he prepares a stack of pancakes they come out all different sizes. Therefore, when I deliver them to a customer, on the way to the table I rearrange them (so that the smallest winds up on top, and so on, down to the largest at the bottom) by grabbing several from the top and flipping them over, repeating this (varying the number I flip) as many times as necessary. If there are n pancakes, what is the maximum number of flips (as a function of n) that I will ever have to use to rearrange them?
To understand the problem, we can try to work out the number of flips required for three pancakes. The three pancakes can arrive in one of the following six arrangements, and alongside each stack is the number of flips required to put them in the right order.
The first stack is trivial, because it arrives in the correct order, hence zero flips. The second stack requires just one flip, with a spatula placed above the lowest pancake. The sixth stack requires three flips – try to work out the flipping protocol – and this is the worst-case scenario. Hence, the pancake number for three pancakes is said to be three, i.e., P3 = 3.
As the pile of pancakes grows, the problem becomes increasingly difficult as there are more and more possible starting arrangements, and an increasing number of possible flipping procedures; P17 = 19, P18 = 20, P19 = 22 and – as yet – nobody knows the number of flips required to reorder 20 pancakes in the worst case scenario.
The problem is simple to pose, yet tough to solve. Indeed, nobody is close to developing a general formula that predicts the number of flips required for any given number of pancakes.
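For small stacks, the pancake number can be checked by brute force. The sketch below is a standard breadth-first search over all orderings, not any published algorithm; it relies on the fact that a flip is its own inverse, so distances from the sorted stack equal distances to it:

```python
from collections import deque

def flip(stack, k):
    """Slide a spatula under the k-th pancake and flip the top k over."""
    return stack[:k][::-1] + stack[k:]

def pancake_number(n):
    """Worst-case number of flips needed to sort any stack of n pancakes."""
    goal = tuple(range(n))
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        s = queue.popleft()
        for k in range(2, n + 1):  # flipping a single pancake does nothing
            t = flip(s, k)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return max(dist.values())  # the hardest starting arrangement

print(pancake_number(3))  # 3, matching the six-stack analysis above
print(pancake_number(4))  # 4
```

The search visits all n! orderings, which is why exhaustive methods of this kind settle only small cases and the answer for 20 pancakes remains out of reach.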
Mathematicians always like a deceptively difficult problem. Moreover, such flipping puzzles also have some interesting real world applications. For example, biologists are interested in flipping in the context of gene sequences. Some organisms share the same genes, but in a different order, so scientists are keen to know how many flips would have been necessary to transform one sequence into the other.
Similarly, computer scientists are intrigued by the problem, because it has parallels with reordering disorganised data. Indeed, Bill Gates's only academic paper is on the topic of pancake flipping. His co-authored paper proved an upper limit to the number of flips required to reorder n pancakes, namely (5n + 5) / 3 flips. Therefore, if you receive 100 pancakes, you will always be able to order them correctly in 168 or fewer flips.
My own interest in the problem was sparked when I was interviewing the writers of The Simpsons for my book The Simpsons and Their Mathematical Secrets. Many of the writers of The Simpsons have mathematical backgrounds and they frequently smuggle mathematics into the series. When talking to David X Cohen (a Simpsons writer and co-creator of Futurama), he casually mentioned that he had co-authored a paper titled On the Problem of Sorting Burnt Pancakes.
In this version of the problem, the pancakes not only have to be in the correct size order, but they also need to be in the right orientation, with the burnt side facing down. Cohen's paper set the lower and upper bounds for burnt pancake flipping between 3n / 2 and 2n – 2. Hence, for 100 burnt pancakes, we know that the number of flips required to orient and order them in the worst case scenario is between 150 and 198.
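The burnt version can be brute-forced the same way for tiny stacks. In this sketch (again my own illustration, not Cohen's method), each pancake carries a burnt-side-up bit; a flip reverses the order and toggles every bit it moves, and flipping just the top pancake (k = 1) now becomes a meaningful move:

```python
from collections import deque

def burnt_flip(stack, k):
    """Flip the top k pancakes: order reverses and burnt sides toggle."""
    return tuple((p, not b) for p, b in reversed(stack[:k])) + stack[k:]

def burnt_pancake_number(n):
    """Worst-case flips to sort n burnt pancakes, burnt sides down."""
    goal = tuple((i, False) for i in range(n))  # False = burnt side down
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        s = queue.popleft()
        for k in range(1, n + 1):
            t = burnt_flip(s, k)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return max(dist.values())

print(burnt_pancake_number(2))  # 4: even two burnt pancakes need four flips
```

With orientations included there are n! × 2^n arrangements, so the burnt search space grows even faster than the plain one.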
It is worth noting that this is not Professor Goodman's only mathematical problem based on a household chore: "I was peeling potatoes with a swivel-bladed vegetable peeler. If you do this rapidly, you wind up with a convex body. So it's natural to ask what proportion of the original volume you can capture with an inscribed convex body." This inspired his 1981 paper titled, On the Largest Convex Polygon Contained in a Non-convex n-gon, Or How to Peel a Potato.
So, in the wake of his 80th birthday celebrations, perhaps we can expect new mathematical problems concerning the geometry of torn wrapping paper, the variability of birthday cake crumb size and the maximally efficient arrangement of cards on a finite horizontal surface.
• Simon Singh is the author of The Simpsons and Their Mathematical Secrets
November 17, 2013
Math + juggling = better problem-solving tools for ISU students
AMES, Iowa – Steve Butler casually tosses a ball from his left hand to his right to demonstrate his point that anyone can juggle. With just one ball it’s easy, until he changes it up and adds a second and a third ball to the mix. Still, the assistant professor of mathematics at Iowa State University says the secret to juggling is simple – it’s all about patterns.
“Anyone can juggle,” Butler said. “There are certain juggling patterns that everyone has mastered, they just don’t realize it. They are so simple that people overlook them as juggling, but they are the basic building blocks to form more interesting patterns.”
Those patterns provide the foundation for Butler’s class this fall about the mathematics of juggling. The two topics have a lot in common because mathematics is the science of studying patterns and juggling is the art of controlling patterns, Butler said. The purpose of the class is to help students understand the different patterns involved in juggling using math.
“There are different kinds of throws and different throws have numbers attached to them. The idea is you want to keep track of how far the object has travelled or how long it’s in the air before you have to catch it again. So a ‘one’ throw is very quick and a ‘two’ throw is essentially twice as long,” Butler said.
Applying math to the sport allows jugglers to explore the limits of what is possible, calculate the number of different patterns and find connections between patterns. Butler says students can use a formula to figure out, for example, all the juggling patterns using four balls that repeat after five throws. In case you’re wondering – there are 2,101 such patterns.
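The figure Butler quotes matches a classical theorem of Buhler, Eisenbud, Graham and Wright: counting cyclic rotations as distinct, the number of siteswap patterns that use exactly b balls and repeat after n throws is (b + 1)^n − b^n. A one-line check in Python:

```python
def count_patterns(balls, period):
    """Siteswap patterns using exactly `balls` balls that repeat
    after `period` throws, rotations counted as distinct:
    (b + 1)**n counts patterns with at most b balls, so subtracting
    b**n (patterns with at most b - 1 balls) leaves exactly b."""
    return (balls + 1) ** period - balls ** period

print(count_patterns(4, 5))  # 2101, the number quoted in the article
```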
Most juggling tricks involve some variation of the three-ball cascade, Butler explained. It is a fairly basic pattern, in which the juggler throws three balls to the same height on a consistent beat. Other patterns or sequences, such as the 4-4-1, are not necessarily more complex but require different coordination, he said. The 4-4-1 is a pattern that Butler only recently learned, one that didn’t exist until mathematicians entered the equation.
“That’s the kind of pattern that people weren’t sure was possible, but mathematically we said, ‘Yes, you can do that.’ That’s just one of an infinite variety of patterns,” Butler said. “Mathematicians came in and decided to study this and came up with a systematic approach of assigning numbers to juggling to understand what’s possible and what’s not possible.”
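The "systematic approach" Butler describes is siteswap notation, and its possibility test is short enough to write down: a sequence of throw heights is jugglable precisely when no two throws land on the same beat, and the average throw height of a valid pattern equals the number of balls. A sketch:

```python
def is_jugglable(throws):
    """A throw sequence is a valid siteswap iff the landing beats
    (i + throws[i]) mod n are all distinct, i.e. no two balls
    arrive in the same hand at the same time."""
    n = len(throws)
    return len({(i + t) % n for i, t in enumerate(throws)}) == n

def ball_count(throws):
    """For a valid pattern, the average throw height is the number
    of balls (it always works out to a whole number)."""
    return sum(throws) // len(throws)

print(is_jugglable([4, 4, 1]))   # True: the 4-4-1 is a real pattern
print(is_jugglable([3, 2, 1]))   # False: two balls would collide
print(ball_count([4, 4, 1]))     # 3 balls
```

This is exactly how mathematicians could certify that the 4-4-1 was possible before anyone had thrown it.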
The rules and limits of juggling
Butler started juggling when he was 10 years old. He credits Ron Graham, a well-respected mathematician at the University of California, San Diego, and former president of the International Jugglers’ Association, for helping develop his skill and teaching him how to juggle four balls. Regardless of skill or level, Butler says there is a basic rule that all jugglers must follow.
“You only want to have to deal with one ball at a time. If you have two balls coming to your hand at one time you’re in trouble; that’s dangerous,” Butler said.
Juggling has continued to evolve because of mathematics and online programs that demonstrate the endless sequences. But despite the countless possibilities, jugglers are limited by the fact they can only keep so many objects in the air at one time. A fact that, Butler says, makes juggling impossible to completely master because jugglers are always trying to add one more ball or one more ring.
Most students in Butler’s class have some experience juggling. His hope is that they will be able to juggle three balls by the end of the semester, but ultimately it’s about the math. By exploring the properties of juggling sequences, students are learning skills they can apply to other areas of mathematics.
“Being able to juggle is nice, but what’s really important about this class are the mathematical tools,” Butler said. “The point in attacking the problem and trying to understand the problem is that we get better tools, and those tools in turn we can take and apply to other problems. What’s really important in mathematics is developing a good set of tools.”
November 17, 2013
Data dump: what new mathematical methods mean for social sciences
November 15, 2013
Thanks to the work of popular social science authors like Steven Levitt of Freakonomics and Nate Silver of FiveThirtyEight, data analysis is a hot new trend in social science. Unfortunately, not everyone can be a Silver or a Levitt.
Objective, data-driven research can help to clarify much in the social sciences, but scientists who jump onto these new methods with little statistical training or rigor do their disciplines a disservice. Objective study is very important to social science, but so is traditional, subjective observation, and we must remember that many of the social sciences were founded before the development of regression analysis.
Academics, and we who encounter their work, must be careful to receive statistical information with a healthy skepticism.
“Correlation does not imply causation!” Anyone who has taken a statistics class or any other data-driven course has heard this phrase (often from a professor who is constantly peeved by people conflating the two concepts). Depending on the professor’s exasperation level, it is possible that she just finished reading a social science journal.
Granted, it’s relatively rare that an article will openly claim causation where there’s none to be found, but implicit claims often lurk. And even in instances where causation is not claimed, some researchers dive no further into a topic after determining correlation. Correlation can tell us quite a lot, but we cannot pretend to understand an issue without determining the causes behind it.
Correlative relationships are powerful rhetorical tools, and everyone from self-styled Facebook pundits to PhDs uses them to try to prove points. One familiar example is the oft-repeated claim that areas with high rates of gun ownership have comparatively lower crime rates than those with lower gun ownership rates. This is true. But is it a causal relationship? Doubtful.
Areas with high gun ownership tend to be rural areas that would see low crime regardless of the size of their weapons caches—there are, after all, few multinational drug cartels in central Kansas. Just because the causal relationship is dubious doesn’t mean that it’s not great rhetoric. Correlative relationships provide fantastic material for argument, but many of these arguments demonstrate just why simple correlation shouldn’t be trusted as proof in academic research.
Causal relationships are shown by revealing the relationship between correlated phenomena. Laboratory experiments are of limited value in the social sciences, so such relationships are explored by examining the effect of one thing on another in real world context.
In many cases, researchers will accept statistical correlation as causation if there is a theoretical or cultural rationale for it—though they may sometimes do so to their own detriment. For instance, I recently read an academic paper detailing the relationship between a Paraguayan’s native language and her educational and economic achievement. The paper claimed that speaking Guaraní, the country’s most widely spoken language, has a measurable effect on—not just correlation with—achievement.
This phenomenon is culturally plausible—the Guaraní language has a stigmatized reputation as backwards and less valuable than Spanish, the dominant language in Paraguay’s economy. However, the researchers failed to control for their subjects’ socioeconomic backgrounds. Socioeconomic background and language are no doubt strongly correlated, but both also show strong correlation with achievement. Without controlling for that variable, among others, it is impossible to know whether Paraguayans’ mother tongues truly influence their economic or educational success.
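The flaw in that study design can be illustrated with a toy simulation (the variable names are hypothetical and the data synthetic, not the paper's): when a confounder drives both variables, they correlate strongly even though neither causes the other, and the correlation vanishes once the confounder is removed.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# ses stands in for socioeconomic background: it influences BOTH
# "language" and "achievement"; neither influences the other.
ses = [random.gauss(0, 1) for _ in range(5000)]
language = [s + random.gauss(0, 1) for s in ses]
achievement = [s + random.gauss(0, 1) for s in ses]

print(round(pearson(language, achievement), 2))  # near 0.5: strong correlation

# "Controlling" for ses here means subtracting its contribution out;
# what remains is independent noise, so the correlation disappears.
resid_lang = [l - s for l, s in zip(language, ses)]
resid_ach = [a - s for a, s in zip(achievement, ses)]
print(round(pearson(resid_lang, resid_ach), 2))  # near 0: no causal link
```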
Cases like this bolster the argument that nuanced social and economic issues are perhaps better examined through more qualitative analyses than with complex mathematical models.
Western social and political thought is to this day heavily influenced by the discoveries of the ancient Greeks and Romans; the great minds of these societies used little more than description and allegory to illustrate psychological and philosophical insights that remain relevant to this day. Political and social thinkers still cite Plato’s allegory of the cave, for example, as an impressive illustration of the way that distorted or incomplete information can create a gap between perception and reality.
Émile Durkheim, who founded modern sociology and shaped the structure of many modern social sciences, made his contributions to human understanding before mathematical analysis of huge data troves was de rigueur. Today’s social scientists would do well to remember that some of the best work done in their disciplines was completed without the use of sophisticated mathematical models.
Tim Groseclose, a professor at UCLA, observed that the social scientists who were most effectively using quantitative methods often had a background in economics. I agree with Groseclose, but would expand this category to include all scientists with rigorous statistical or mathematical training.
However, the researchers best at analyzing troves of data will not necessarily be those who produce the best results in social science. The social sciences need academics to interpret our world through logical analysis and thoughtful case studies, and number-crunchers to filter through huge swaths of data and conduct rigorous analysis. What the fields do not need is flawed statistical study that contributes little to humanity’s understanding of itself.
November 17, 2013
Prof compares Steve Jobs and mathematical pioneer
Stanford professor Keith Devlin speaks in OU’s Memorial Auditorium Tuesday
Photo by Nick Bolin.
By David DeWitt
What does Steve Jobs have in common with a 13th century Italian from Pisa named Leonardo?
Turns out, quite a lot, as was explored by Stanford professor Keith Devlin in the first and only Kennedy Lecture Series speech of the fall semester Tuesday night at Ohio University's Memorial Auditorium.
While researching the work of Leonardo of Pisa, also known as Leonardo Fibonacci, Devlin said that he began to discover remarkable similarities between Leonardo and Jobs.
"These are two people who deserve credit for changing the world through new technology, or, more precisely, a new representation," Devlin told the audience.
In fact, the similarities were so many, he said, that in addition to writing the book he was originally researching, "The Man of Numbers," on Leonardo, he wrote a second, "Leonardo & Steve."
Devlin began his lecture by presenting a brief history on Apple and Jobs. He explained how computers went from the province of "nerds" writing code in the early 1980s to something much more with the advent of the Macintosh.
"All of the sudden, computers were for everyone else, who wanted to draw, who wanted to paint, who wanted to do creative work," he said. "All of that was made available. The whole spectrum of human creativity suddenly was about to be captured in this device."
While Jobs did not invent the personal computer, Devlin said, he recognized how it could be used to transform the world and had the marketing savvy to take advantage of that opportunity.
Leonardo did something quite similar with Arabic numerals, Devlin said.
Leonardo of Pisa was the son of an international businessman. At that time, in Western culture, Roman numerals were still used. This made arithmetic laborious, and it made multiplication and division nearly impossible. In fact, aside from the use of the abacus for simple calculations, advanced calculations often had to be contracted out.
That is, until Leonardo observed Muslim traders using a unique calculation method, the same arithmetic we use today. The system of Arabic numerals, Devlin explained, had been developed in India during the first seven centuries AD and spread via traders who added to it and developed algebra.
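The contrast Devlin describes is structural: Arabic numerals are positional, so digit-by-digit algorithms with carrying exist, while Roman numerals are additive and have no place value to carry into. A small converter (an illustration, not anything from the lecture) makes the point concrete:

```python
def to_roman(n):
    """Convert a positive integer to Roman numerals. The output has
    no place value: 'MCCII' offers no columns of digits to multiply
    and carry, which is why long multiplication had no natural
    Roman-numeral algorithm."""
    vals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for v, s in vals:
        while n >= v:       # greedily take the largest value that fits
            out.append(s)
            n -= v
    return "".join(out)

print(to_roman(1202))  # MCCII, the year the Liber Abaci appeared
```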
Recognizing the utility and efficiency of this system, Leonardo wrote his magnum opus the "Liber Abaci," a 600-page expository instruction manual on this new method of calculation composed in 1202.
"What Leonardo did was write a book that changed the world," Devlin said. "And Jobs eventually was able to create the MacBook. But both of them produced something as a single artifact that changed the world in a very dramatic way, not because of what it contained but because everybody was able to use it and access it."
Devlin drew another parallel between the early, cumbersome computers and Leonardo's early, cumbersome book. The book gave way to a later, simpler version, a "Leonardo for Dummies," as Devlin put it, that was accessible and marketed to the masses. From that text, whose only surviving copy Devlin himself helped to eventually discover, arithmetic manuals began to spread throughout Western civilization, spurring the financial revolution.
"When this hit Italy it changed the world and we now live in that world," he said.
This made Leonardo of Pisa famous, and he eventually was contracted to produce works for the emperor of the time. He went on to become quite prolific, penning further books on practical geometry and more.
In fact, Devlin said, while the Gutenberg Bible is well known as the first book to be printed via the press, the second was a book on commercial arithmetic.
Steve Jobs used a very similar process, Devlin said, with Apple first producing a machine that was big and expensive but established credentials by proving its technological bona fides.
"But he then built a cheaper consumer version called a Macintosh," he said. "He went through the same process as Leonardo: Establish it, prove it and then bring out a popular version. The top is for the pros, the bottom is for the people. And that was the trick."
Devlin said that it's this rare combination of being able to do the clever part as well as the marketing that binds Leonardo and Steve.
"Leonardo did it, and that's why his book is the one that did it," he said. "And Steve Jobs did it, and he was a showman."
November 17, 2013
Week in Death: The Woman Who Cracked Hitler’s Codes
Mavis Batey, who has died aged 92, was one of the leading female codebreakers at Bletchley Park, cracking the Enigma ciphers that led to the British Royal Navy’s victory at Matapan in 1941.
She was the last of the great Bletchley “break-in” experts, those codebreakers who found their way into new codes and ciphers that had never been broken before.
Mavis Batey also played a leading role in the cracking of the extraordinarily complex German secret service, or Abwehr, Enigma code machine. Without that break, the Double Cross deception plan which ensured the success of the D-Day landings could never have gone ahead.
Mavis Lilian Lever was born in Dulwich, south London, on May 5, 1921, the daughter of a postal worker and a seamstress. The family always went on holiday to Bournemouth, but after passing her German O Level, Mavis persuaded her parents to take her to the Rhineland.
It was this that encouraged her interest in the German Romantic poets. She was reading German at University College, London, when war broke out, and decided to break off her studies and become a nurse; but she was told that the country could make more use of her as a German linguist.
“So I thought, great,” she recalled. “This is going to be an interesting job, Mata Hari, seducing Prussian officers. But I don’t think either my legs or my German were good enough because they sent me to the Government Code & Cipher School.”
She initially worked in London, checking commercial codes and perusing the personal columns of The Times for coded spy messages. After showing promise, she was plucked out and sent to Bletchley to work in the research unit run by Dilly Knox.
Knox had led the way for the British on the breaking of the Enigma ciphers, but was now working in a cottage next to the mansion on new codes and ciphers that had not been broken by Hut 6, where the German Army and Air Force ciphers were cracked.
“It was a strange little outfit in the cottage,” Mavis said. Knox was a true eccentric, often so wrapped up in the puzzle he was working on that he would absentmindedly stuff a lunchtime sandwich into his pipe rather than his tobacco: “Organization is not a word you would associate with Dilly Knox. When I arrived, he said: ‘Oh, hello, we’re breaking machines, have you got a pencil?’ That was it. I was never really told what to do. I think, looking back on it, that was a great precedent in my life, because he taught me to think that you could do things yourself without always checking up to see what the book said.”
“That was the way the cottage worked. We were looking at new traffic all the time or where the wheels or the wiring had been changed, or at other new techniques. So you had to work it all out yourself from scratch.”
Although only 19, Mavis began working on the updated Italian Naval Enigma machine and, in late March 1941, broke into the system, reading a message which said simply: “Today’s the day minus three.” “Why they had to say that I can’t imagine,” she recalled. “It seems rather daft, but they did. So we worked for three days. It was all the nail biting stuff of keeping up all night working. One kept thinking: ‘Well, would one be better at it if one had a little sleep or shall we just go on?’— and it did take nearly all of three days. Then a very, very large message came in.”
The Italians were planning to attack a Royal Navy convoy carrying supplies from Cairo to Greece, and the messages carried full details of the plans for attack: “How many cruisers there were, and how many submarines were to be there and where they were to be at such and such a time, absolutely incredible that they should spell it all out.”
The intelligence was phoned through to the Admiralty and rushed out to Admiral Andrew Cunningham, commander of the Royal Navy’s Mediterranean Fleet. “The marvellous thing about him was that he played it extremely cool,” Mavis said. “He knew that they were going to go out and confront the Italian fleet at Matapan but he did a real Drake on them.”
The Japanese consul in Alexandria was sending the Germans reports on the movement of the Mediterranean Fleet. The consul was a keen golfer, so Cunningham ostentatiously visited the clubhouse with his clubs and an overnight bag. “He pretended he was just going to have the weekend off and made sure the Japanese spy would pass it all back,” Mavis recalled. “Then, under cover of the night, they went out and confronted the Italians.”
In a series of running battles over March 27-28 1941, Cunningham’s ships attacked the Italian vessels, sinking three heavy cruisers and two destroyers. Without radar, the Italians were caught completely by surprise, and 3,000 of their sailors were lost.
“It was very exciting stuff,” Mavis recalled. “There was a great deal of jubilation in the cottage and then Cunningham himself came to visit us to congratulate us in person.” She and another of the young women working in the cottage rushed out to the local pub to buy some wine to celebrate the victory with the admiral: “The cottage wall had just been whitewashed. Now this just shows how silly and young and giggly we were. We thought it would be jolly funny if we could talk to Admiral Cunningham and get him to lean against the wet whitewash and go away with a white stern.”
The battle ensured that the Italians never sailed close to the Royal Navy again until Cunningham took their surrender in 1943. It remains the last fleet action to have been fought by the Royal Navy.
The unusual training techniques adopted by Knox – he would ask new arrivals which way the hands of a clock went round and, when they said clockwise, reply: “Not if you’re inside the clock” – ensured that Mavis Batey and the other leading woman codebreaker working for Knox, Margaret Rock, had the ability to think laterally.
Mavis Batey recalled how she reconstructed the wiring of one of the wheels from the updated Italian Enigma system from a mistake by an Italian operator who was sending a dummy test message.
The main flaw of the Enigma machine, seen by the inventors as a security-enhancing measure, was that it would never encipher a letter as itself: “I picked up this message and thought: ‘There is not a single L in this message.’ My chap had been told to send out a dummy message and he had just had a fag and pressed the last key of the middle row of his keyboard, the L. So that was the only letter that didn’t come out.”
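That flaw is exactly what made a "crib", a guessed stretch of plaintext, so powerful: since no letter can encipher to itself, any alignment where the crib matches the ciphertext letter-for-letter at some position can be ruled out immediately. A sketch of the idea:

```python
def possible_crib_positions(ciphertext, crib):
    """Return the offsets where the crib could sit. Enigma never
    enciphers a letter as itself, so any offset at which some crib
    letter equals its ciphertext counterpart is impossible."""
    return [
        i
        for i in range(len(ciphertext) - len(crib) + 1)
        if all(c != p for c, p in zip(ciphertext[i:], crib))
    ]

# Offset 1 is ruled out because the W would encipher to itself:
print(possible_crib_positions("QWERT", "WE"))  # [0, 2, 3]
```

Mavis Batey's all-L message is the same logic in reverse: a ciphertext containing not a single L is consistent with a plaintext of nothing but L.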
Arguably her most important role, however, was in the collaboration with Knox and Margaret Rock on the breaking of the Enigma cipher used by the German secret service, the Abwehr.
MI5 and MI6 had captured most of the German spies sent to Britain, and those in the neutral capitals of Lisbon and Madrid, and turned them back against the Germans, feeding them false information designed to deceive them in an operation known as the Double Cross system.
But they had no idea whether or not the Germans believed this intelligence, as the Abwehr Enigma was so complex that Hut 6 had been unable to break it. It had four rotors instead of the standard three, and unlike other machines they rotated randomly with no predictable pattern.
Knox took over the task of breaking it, using Mavis Batey and Margaret Rock as his assistants, to test out every possibility. On December 8, 1941 Mavis Batey broke a message on the link between Belgrade and Berlin, allowing the reconstruction of one of the rotors.
Within days Knox and his team had broken into the Abwehr Enigma, and shortly afterwards Mavis broke a second Abwehr machine, the GGG, adding to the British ability to read the high-level Abwehr messages and confirm that the Germans did believe the phoney Double Cross intelligence they were being fed by the double agents.
This allowed the XX Committee, which was running the double agents, to send a stream of small pieces of false intelligence that would build up a complete picture of a fictitious First US Army Group, which was forming up in East Anglia and Kent to lead the main Allied invasion force.
The false intelligence led the Germans to believe that the main force would land on the Pas de Calais rather than in Normandy. As a result Hitler insisted that two key armored divisions were held back in the Calais area.
Brigadier Bill Williams, Montgomery’s chief intelligence officer, said that without the break into the Abwehr Enigma the deception operation could not have been mounted. The forces in Calais would have moved to Normandy and could well have thrown the Allies back into the sea.
Mavis fell in love with her future husband, Keith Batey, himself one of the Bletchley “break-in” experts, after he helped her with a particularly difficult codebreaking problem: “I was alone on the evening shift in the cottage and I sought the help of what Dilly called ‘one of the clever Cambridge mathematicians in Hut 6.’ We put our heads together and in the calmer light of logic, and much ersatz coffee, solved the problem. Dilly made no objections to my having sought such help and when I told him I was going to marry the ‘clever mathematician from hut 6’ he gave us a lovely wedding present.”
After the war Mavis Batey brought her indefatigability to the protection of Britain’s historical gardens. Her interest began in the late 1960s, when her husband was appointed the “Secretary of the Chest,” the chief financial officer of Oxford University. They lived in a university-owned house on the park at Nuneham Courtenay and she set about ensuring that the overgrown gardens were restored to their original landscaped state.
She became the driving force behind moves by the Campaign for the Protection of Rural England, English Heritage and the Garden History Society to protect historical gardens. Working with the Historic Buildings Council, she instigated the formal recording of historic gardens which led to the publication of English Heritage’s Register of Parks and Gardens of Special Historic Interest in England in 1984. She had taken a leading role in the Garden History Society since 1971 when she became its Secretary, and was its president from 1985 until her death.
In 1977 Mavis Batey lobbied successfully for the National Land Fund, which became the National Heritage Memorial Fund, to grant-aid historic landscapes. She also led the Garden History Society’s campaign on the plight of urban parks.
She was awarded the Veitch Memorial Medal of the Royal Horticultural Society in 1985, and in 1987 was appointed MBE for services to the preservation and conservation of historic gardens. Her books included Jane Austen and the English Landscape (1996); Alexander Pope: Poetry and Landscape (1999); and an affectionate biography of Knox, Dilly: The Man Who Broke Enigmas (2011).
Keith Batey, with whom she had two daughters and a son, died in 2010.
November 17, 2013
William Wright Kuhn, 75, teacher, mathematician
BY JOHN F. MORRISON, Daily News Staff Writer firstname.lastname@example.org, 215-854-5573
POSTED: November 06, 2013
IF YOU'RE one of those nervous people who watch the skies for the first sign of a meteor or comet that will hit the Earth and wipe out life as we know it, you might be interested in the work of William Wright Kuhn.
In 1999, Bill Kuhn became a consultant to Blue Origin, the company started by Jeff Bezos, founder of Amazon.com and recent purchaser of the Washington Post, to explore space privately.
(Someday, if you have the cash, Jeff will load you into one of his spacecraft and shoot you off to a nice vacation among the stars.)
Meanwhile, his company is also working on the problem of Earth being bombarded by astronomical drifters bent on destruction. That's where Bill Kuhn came in.
Bill, who was a math professor at St. Joseph's and Temple universities in the '60s and '70s, was a consultant to Blue Origin from 1999 to 2006.
His primary work was to help develop a sunlight-powered spacecraft whose purpose was to prevent asteroids or comets from hitting Earth.
If this, and private space travel, ever come about, a share of the credit has to go to Philly-born Bill Kuhn, a mathematician, teacher and man whose expertise as a consultant was called on by federal agencies and private corporations engaged in a wide variety of activities.
He died Oct. 24 of cancer. He was 75 and had lived in Columbia, S.C., since 2008.
Bill was a professor of mathematics at St. Joseph's University from 1961 to 1979, and was an adjunct faculty member of Temple University.
From 1979 to 1987, he was manager of the executive staff for price systems analysis for RCA/GE, and principal member of engineering with GE/Price Systems in Moorestown, N.J.
The majority of Bill's consulting work was through MITRE Corp. from 1988 to 2006, during which he was involved in many diverse projects, ranging from the Ballistic Missile Defense Organization to the Federal Aviation Administration's work toward privatization of major airports in Egypt and Panama.
Much of his consultation involved cost analysis.
Bill was born in Philadelphia to John G. and Helen Wright Kuhn. He graduated from West Catholic High School in 1956, and went on to earn a bachelor's degree from what is now Drexel University in 1961. He received a master's degree in mathematics in 1963, and a Ph.D. in applied mathematics in 1971 from the University of Pennsylvania.
In 1961, he married the former Joan Corley.
"He was a very caring man, always looking to help people," said his son Eric M. Kuhn. "He was always available."
After moving to South Carolina, Bill became involved in a number of community activities, and was a math tutor at W.A. Perry Elementary School.
Besides his wife and son, he is survived by a daughter, Julia K. Mikell; another son, Joel M. Kuhn; two brothers, John G. Kuhn III and Francis X. Kuhn; two sisters, Nancy K. Kirkpatrick and Mary L. Kuhn; and four grandchildren.