March 31, 2009
Russian Mathematician Wins Abel Prize
Abel winner. Mikhail Gromov has made a long and distinguished career out of the triangle inequality.
CREDIT: GÉRARD UFERAS
By Barry Cipra
ScienceNOW Daily News
26 March 2009
A Russian-born mathematician whose work has influenced fields from physics to biology has won this year's Abel Prize, the math field's counterpart to the Nobel. The $950,000 prize, first awarded in 2003 by the Norwegian Academy of Science and Letters, goes to Mikhail Gromov of the Institut des Hautes Études Scientifiques (IHES) in Bures-sur-Yvette, France.
Gromov, 65, won the award "for his revolutionary contributions to geometry," says Abel Committee Chair Kristian Seip. The mathematician, who also holds a position at the Courant Institute of Mathematical Sciences in New York City, is credited with making advances in the fields of symplectic and Riemannian geometry, which are closely tied to areas of mathematical physics such as general relativity and string theory. He is also credited with founding the modern study of "geometric group theory," which injects notions of distance and curvature into the study of finitely generated algebraic structures known as groups. Gromov's work "has had a tremendous impact on geometry and has reached from there into major applications in analysis and algebra," says George Andrews, president of the American Mathematical Society in Providence, Rhode Island. "One cannot imagine a more worthy recipient."
In the 1980s, Gromov showed how to treat the collection of all Riemannian geometries as a geometric space in its own right: Each point in the meta-space is a Riemannian metric, with the distance between two metrics determined by how similar or different the corresponding Riemannian spaces are. The entire subject of metric spaces grows out of a simple observation in classical geometry called the triangle inequality, which states that the length of any one side of a triangle can never exceed the sum of the lengths of the other two sides (in other words, the shortest distance between two points lies along a straight line). In group theory, Gromov's work on metrics led to the study of "hyperbolic" groups, which have properties closely connected with non-Euclidean hyperbolic geometry. An admiring Dennis Sullivan at the City University of New York once enthused, "It is incredible what Mikhail Gromov can do just with the triangle inequality."
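The triangle inequality itself is easy to check computationally. The sketch below is purely an illustration of the definition, not anything from Gromov's work: it verifies the inequality for the ordinary Euclidean distance over every triple drawn from a batch of random points in the plane.

```python
import math
import random

def euclid(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

random.seed(1)
points = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(40)]

# Check the triangle inequality d(a, c) <= d(a, b) + d(b, c)
# for every triple of sample points (small tolerance for rounding).
violations = sum(
    1
    for a in points for b in points for c in points
    if euclid(a, c) > euclid(a, b) + euclid(b, c) + 1e-9
)
print(violations)  # 0 -- Euclidean distance is a genuine metric
```

Any function satisfying this property (plus symmetry and zero self-distance) qualifies as a metric, which is what lets the same inequality organize spaces far stranger than the plane.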
IHES Bures-sur-Yvette Director Jean-Pierre Bourguignon says Gromov's "driving force" has also been instrumental in making unconventional connections at the institute between mathematics and biology. In a 2001 paper co-authored with Alessandra Carbone, now at the Université Pierre et Marie Curie in Paris, for example, Gromov proved a theorem on the virtual certainty of unicellular life in pond water: Within a mathematical framework that stipulates what it takes for a random arrangement of chemicals to be considered living, they found, the "live" states vastly outnumber the "dead" ones. "Misha is often radical in his judgments, but it certainly takes nonconventional minds to make significant steps in this very challenging field," Bourguignon says.
March 31, 2009
John Wrench, 97; mathematician set record on calculating pi to 100,000 digits
By Matt Schudel
Washington Post / March 27, 2009
WASHINGTON - John W. Wrench Jr., 97, a retired mathematician with the US Navy's Applied Mathematics Laboratory who once set a record by calculating the value of pi to more than 100,000 digits, died Feb. 27 of pneumonia at Northampton Manor Health Care Center in Frederick, Md.
Before the use of computers became widespread, Dr. Wrench had an international reputation for computation. He later became a pioneer in the use of computers for mathematical calculations.
He came to Washington in 1939 to teach at George Washington University, but spent most of his career doing theoretical research and developing high-speed computational methods for the Navy Department.
During World War II, he did classified research for the National Defense Research Council under contract at GWU and Catholic University. From 1945 to 1953, while working for the Navy, he worked on secret projects concerning the movement of underwater sound waves and the response of structures to underwater explosions.
In 1953, Dr. Wrench became deputy head of the applied mathematics laboratory at the Navy's David W. Taylor Model Basin. He developed high-speed numerical methods for use in structural design, hydrodynamics, aerodynamics, and data analysis. He became head of the laboratory about two years before he retired in 1974.
Wrench had long been fascinated by the mathematical concept of pi. In 1948, he and fellow mathematician Levi Smith used a primitive gear-driven calculating device to compute the value of pi to more than 1,000 digits. In 1961, Dr. Wrench and Daniel Shanks used an IBM 7090 computer to calculate pi to 100,265 places. They presented their printout to the Smithsonian Institution, and their feat was recognized in the Guinness Book of Records. With more advanced computers, the value of pi has been calculated to more than 1 trillion digits.
John William Wrench Jr. was born in Westfield, N.Y., and was a summa cum laude graduate of what is now the State University of New York at Buffalo. He received master's and doctoral degrees in mathematics from Yale University in 1935 and 1938, respectively.
© Copyright 2009 Globe Newspaper Company.
March 31, 2009
Mathematician receives award to fund collaborative research
Maria Gordina, associate professor of mathematics.
Photo by Daniel Buttrey
by Cindy Weiss - March 30, 2009
Maria Gordina, associate professor of mathematics in the College of Liberal Arts and Sciences, will spend a semester in residence at Cornell University as the winner of the Ruth I. Michler Memorial Prize awarded by Cornell and the Association for Women in Mathematics.
The award will fund her residency to study infinite dimensional spaces, a research project that involves collaborations with a physicist and two other mathematicians, one of whom was her Ph.D. adviser at Cornell.
Among the questions she will study are the geometry of infinite dimensional curved spaces and random processes in these spaces, such as Brownian motion, a mathematical model used to describe the random movement of particles suspended in a liquid or gas.
A typical example of an infinite dimensional curved space is a "loop group," which can be visualized as many loops drawn around a sphere or any other curved object.
The Michler prize is named in memory of a young mathematician at the University of North Texas who died in 2000 in a tragic accident when she was a visiting scholar at Northeastern University.
Ruth Michler's parents established the prize at Cornell because Michler was born in Ithaca when her father, a German mathematician, was visiting the university.
The $45,000 prize honors mid-career faculty women in mathematics and provides an opportunity for them to focus on research, with no teaching obligations.
Currently, Gordina is teaching a graduate class in CLAS on Lie algebras and Lie groups, one of the topics that she studied for her Ph.D. at Cornell under Professor Leonard Gross.
She came to UConn in 2003 after postdoctoral fellowships at McMaster University in Canada and the University of California at San Diego.
Before coming to the U.S. from Russia in 1992, Gordina, who is also known as Masha, was an assistant professor at the Leningrad Electro-Technical Institute.
While at UConn, she has been a Humboldt Research Fellow for three summers. Alexander von Humboldt Foundation fellows are chosen to conduct research at a university in Germany. Gordina worked with Professor Michael Röckner at Bielefeld University.
Gordina's husband, Alexander Teplyaev, who is also an associate professor of mathematics at UConn, is currently a Humboldt Fellow.
Gordina's research has been funded by the National Science Foundation.
March 31, 2009
Wanted: Better mathematicians to help solve financial crisis
By Kevin Horrigan | Commentary | Story updated at 7:02 pm on 3/26/2009
Last week we read a lot about those guys at AIG who were paid $165 million in bonuses largely because (we were told), having screwed up the company, they were the only guys who could unscrew it.
Many of these people were bond traders. Others were "quants," an occupation of whose existence most of us were blissfully unaware until last year, when the financial markets began to come unraveled. Stop any 20 people on any street in America not named Wall, and odds are 19 of them couldn't tell you what a quant does.
In that last sentence, you'll note a question of probability - a 1-in-20 probability of knowing what a quant does, a figure I made up off the top of my head.
What a quant would do is create a formula testing that probability, using complex mathematics, and then tell bond traders how to make money off of it.
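For what it's worth, the arithmetic behind the columnist's invented 1-in-20 figure takes only a line or two. This toy calculation simply asks: if that made-up probability held, what would be the chance that at least one person in a group of 20 knows what a quant does?

```python
# The columnist's invented figure: each passerby independently has a
# 1-in-20 chance of knowing what a quant does. The chance that at
# least one person in a group of 20 knows is then:
p_know = 1 / 20
p_at_least_one = 1 - (1 - p_know) ** 20
print(round(p_at_least_one, 3))  # 0.642
```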
Quant is short for quantitative analyst. In the '80s and '90s they began to be all the rage on Wall Street. "A quant designs and implements mathematical models for the pricing of derivatives, assessments of risk or predicting market movements," said Mark Joshi, a former quant at the Royal Bank of Scotland who has written the book (actually several of them) on quants and how to become one.
At the risk of oversimplifying things, quants are the math nerds who you used to beat up in high school. The world economic crisis is their revenge.
In my quest to understand quants, I am comforted by the fact that I was an A student in math in college - I made an A in freshman calculus some 40 years ago, having guessed right on the first test of the semester, whereupon the school burned down and no further tests were administered. I quit while I was ahead; I never took math again.
Still, I was able to fight my way through a fascinating piece by Felix Salmon in the Feb. 23 edition of Wired magazine in which he lays the blame for the financial collapse at the feet of a mathematician named David X. Li. In 2000, while working at JPMorgan Chase, Li published a paper titled "On Default Correlation: A Copula Function Approach," in a financial journal.
I would explain his formula in all of its complexities except I know that many readers weren't A students in college math. So here's Felix Salmon's explanation:
"For five years, Li's formula, known as a Gaussian copula function (in math, "copula" describes relationships between variables; Gauss was a 19th-century mathematician), looked like an unambiguously positive breakthrough, a piece of financial technology that allowed hugely complex risks to be modeled with more ease and accuracy than ever before. With his brilliant spark of mathematical legerdemain, Li made it possible for traders to sell vast quantities of new securities, expanding financial markets to unimaginable levels.
"His method was adopted by everybody from bond investors and Wall Street banks to ratings agencies and regulators. And it became so deeply entrenched - and was making people so much money - that warnings about its limitations were largely ignored."
The beauty of Li's formula was that it appeared to take the risk out of risk. Traders no longer had to look at historic default patterns. Instead, they just had to consider how prices of financial bets correlated over time. As prices boomed in the early 2000s, default correlations stayed low. All bonds could be rated Triple A. The number of credit default swaps - in effect, bets on whether bonds would perform or not - went from $920 billion in 2001 to $62 trillion in 2007.
Nobody stopped to consider what would happen if the mortgages on which all these derivatives were based suddenly went south at the same time. Wall Street was like a horse player betting on a 10-race card filled with 20-to-1 favorites. Who could imagine all 10 favorites breaking down on the same day? This is why quants say they shouldn't be blamed for the financial collapse: They just did the math. They didn't say it was foolproof. Besides, if just a few traders had used Li's formula, we wouldn't be in this mess. The trouble started when it was adopted almost universally. Bankers and brokers were making so much money that no one wanted to stop.
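Li's actual pricing formula is considerably more involved, but the core copula idea (tying separate default risks to a single shared random factor) can be sketched in a few lines. The parameters below, a 5% default probability and a 0.9 correlation, are illustrative numbers, not figures from Li's paper.

```python
import math
import random
from statistics import NormalDist

def joint_default_prob(rho, p=0.05, n=200_000, seed=7):
    """Monte Carlo estimate of the probability that two borrowers
    default together, when each defaults with probability p and their
    latent 'health' variables are linked by a Gaussian copula with
    correlation rho."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(p)  # default when the latent variable falls below this
    both = 0
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        if x < threshold and y < threshold:
            both += 1
    return both / n

p_indep = joint_default_prob(rho=0.0)  # independent borrowers: roughly p * p
p_corr = joint_default_prob(rho=0.9)   # highly correlated: "favorites break down together"
print(p_indep, p_corr)
```

With zero correlation the joint default probability sits near p squared; crank the correlation up and simultaneous defaults become an order of magnitude more likely, which is the scenario the horse-player analogy describes.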
What we need, then, are not fewer quants but better quants, people who can write a formula unraveling this mess without creating a bigger one. They're the guys who ought to get the bonuses.
• Kevin Horrigan is a columnist for the St. Louis Post-Dispatch.
March 31, 2009
The right way to use mathematical models
During my time as a semiconductor engineer, part of my job was to examine past reliability data from chip equipment and fit that data to statistical models that would predict when the next equipment failures were likely to occur.
The assumption was that the future would behave like the past; if the machines acted a certain way before, they would probably act the same way again. Technical people call this kind of data "stochastic," meaning that while the individual values are random, we believe they follow certain mathematical and statistical distributions that allow us to make some predictions from them.
In my case, I would take the occurrence of equipment failures in machine parts like bearings, motors, and circuit boards, and feed those failures into mathematical models with esoteric names like the Weibull distribution, judging the results with measures such as R-squared: what mathematicians and engineers call "fitting the data to a model." The idea was to try to predict the future based on past history. Not that I am trivializing this; in fact it is a useful exercise, but smart engineers and scientists (at least during my time) always knew that it was only one tool, and we were expected to keep our eyes open for anything the model could not predict. In other words, we always took what the computer spewed out with at least a small grain of salt.
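For readers curious what "fitting failure data to a Weibull distribution" looks like in practice, here is a minimal sketch using median-rank regression, a classic reliability-engineering technique. The failure times are synthetic and the method is generic, not the author's specific procedure.

```python
import math
import random

def fit_weibull(failure_times):
    """Median-rank regression (a classic 'Weibull plot'): regress
    ln(-ln(1 - F)) on ln(t) to estimate the shape and scale
    parameters, and report R-squared as the goodness of fit."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)          # Bernard's median-rank approximation
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    shape = sxy / sxx                       # slope = Weibull shape (beta)
    intercept = my - shape * mx
    scale = math.exp(-intercept / shape)    # scale (eta) recovered from the intercept
    syy = sum((y - my) ** 2 for y in ys)
    r_squared = sxy * sxy / (sxx * syy)
    return shape, scale, r_squared

# Synthetic bearing-failure times drawn from a Weibull(shape=2.0, scale=1000 hours),
# sampled by inverting the Weibull CDF: t = scale * (-ln U)^(1/shape).
rng = random.Random(42)
times = [1000 * (-math.log(rng.random())) ** (1 / 2.0) for _ in range(200)]
shape, scale, r2 = fit_weibull(times)
print(shape, scale, r2)  # shape near 2.0, scale near 1000, R-squared close to 1
```

A high R-squared says the straight line fits the transformed data; as the column argues, it says nothing about hammer-wielding intruders the model never saw.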
In the back of my mind, I always thought that there were some events that could not be predicted. There was always something I could not observe, or perhaps my data had some error, and this gave me what smart scientists have: a healthy skepticism. I knew there could always be freak events, events brought about by human nature. If some insane guy wielding a hammer, for example, smashed my machine, then no mathematical model could have predicted that. And there are examples all around us. All engines failing on an aircraft because of a multiple bird strike? That was considered highly unlikely until Capt. Sully Sullenberger's US Airways Flight 1549 ditched into the Hudson. Subprime mortgage borrowers defaulting en masse? Highly unlikely until the debacle of 2008. The risk of insuring risky investments with credit default swaps? Highly unlikely, until AIG came along.
The situation is somewhat akin to visiting a restaurant and sampling their food, and having been pleased with the meal, recommending it to your friends and colleagues. And so at that point, you begin to form a mental model of the place. But on subsequent trips to the restaurant, you accidentally pass by the open door of the kitchen and are shocked to find out what creatures are crawling about the area. At that point, you will suddenly have a drastic change in your mental model of the place, because you have discovered that your previous paradigm was not quite correct. It is similar to what we are experiencing now with this financial crisis worldwide.
All over the world, universities are arming their students with the capability to do mathematical modeling. Computers have become so cheap and powerful, and the knowledge is so widespread, that using computers and mathematical models has now become the rule, rather than the exception. We are now so comfortable with computers that sometimes, we forget that we as humans need to perform an oversight function on what these computer models are spewing out. And I am afraid that the business community, in particular, needs this reminder.
There is one thing that needs to accompany the advanced mathematical training our universities give, particularly to business majors. And that is the nagging feeling at the back of one's mind that although the mathematical model might seem to fit, there might be something the tool is not telling us, or it might not fit reality quite as well as we think it does, and we have to watch out for it. Nassim Nicholas Taleb, author of the New York Times best-seller The Black Swan, argues for this mindset with a passion.
That nagging feeling seems to have disappeared in the finance community, particularly on Wall Street. Newly minted quants from Ivy League schools, thinking that their mathematical models could calculate the risk, called the shots until they were brought down by this latest crisis. And this is not the first time this has happened. Taleb notes in his book that in 1998, Long-Term Capital Management, a hedge fund run partly by Nobel prize-winning economists, went bust because of events that its risk models had not predicted. Now we are repeating the lesson, with the likes of AIG, who tried to insure every risky investment, thinking they had figured out the risks in credit default swaps.
Hopefully, future generations of business students and quants will remember not to dismiss the old-timers and their wisdom and experience outright in favor of relying solely on computers and mathematical models. These models work, but they work best if one understands their limitations and uses them to complement human intelligence, not to replace it.
Dennis Posadas' new book, Jump Start: A Technopreneurship Fable, will be published by Pearson Education Asia in summer 2009. For questions and comments, please send to firstname.lastname@example.org
March 31, 2009
Ants really are random wanderers
By Jeremy Hsu
updated 3:04 p.m. ET March 30, 2009
Ants don't march in predictable patterns to search for crumbs, as you might have thought by watching them. Instead, new research suggests they roam randomly.
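The signature of a random wander, as opposed to a purposeful march, is that distance from the starting point grows only with the square root of the number of steps. A minimal simulation (an illustration of random walks in general, not the researchers' model) shows the effect:

```python
import math
import random

def mean_displacement(steps, trials=2000, seed=0):
    """Average straight-line distance from the start after a 2-D
    random walk of unit steps in uniformly random directions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(steps):
            a = rng.uniform(0, 2 * math.pi)
            x += math.cos(a)
            y += math.sin(a)
        total += math.hypot(x, y)
    return total / trials

# Quadrupling the walk length only doubles how far the walker gets:
d100 = mean_displacement(100)
d400 = mean_displacement(400)
print(d100, d400, d400 / d100)  # ratio close to 2, not 4
```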
© 2009 LiveScience.com. All rights reserved.
March 31, 2009
Computer simulations explain the limitations of working memory
[PRESS RELEASE, 31 March 2009] Researchers at Karolinska Institutet (KI) have constructed a mathematical activity model of the brain's frontal and parietal parts, to increase the understanding of the capacity of the working memory and of how the billions of neurons in the brain interact. One of the findings they have made with this "model brain" is a mechanism in the brain's neuronal network that restricts the number of items we can normally store in our working memories at any one time to around two to seven.
Working memory, which is our ability to retain and process information over short periods of time, is essential to most cognitive processes, such as thinking, language and planning. It has long been known that working memory is subject to limitations, as we can only manage to "juggle" a certain number of mnemonic items at any one time. Functional magnetic resonance imaging (fMRI) has revealed that the frontal and parietal lobes are activated when a sequence of two pictures is to be retained briefly in visual working memory. However, just how the nerve cells work together to handle this task has remained a mystery.
The study, which is published in the journal PNAS, is based on a multidisciplinary project co-run by two research teams at KI led by professors Torkel Klingberg and Jesper Tegnér. Most of the work was conducted by doctors Fredrik Edin and Albert Compte, the latter of whom is currently principal investigator of the theoretical neurobiology group at IDIBAPS in Barcelona.
For their project, the researchers used techniques from different scientific fields, applying them to previously known data on how nerve cells and their synapses function biochemically and electrophysiologically. They then developed, using mathematical tools, a form of virtual or computer simulated model brain. The computations carried out with this "model brain" were tested using fMRI experiments, which allowed the researchers to confirm that the computations genuinely gave answers to the questions they asked.
"It's like a computer programme for aircraft designers," says Fredrik Edin, PhD in computational neuroscience. "Before testing the design for real, you feed in data on materials and aerodynamics and so on to get an idea of how the plane's going to fly."
With their model brain, the team was able to discover why working memory is only capable of retaining between two and seven different pictures simultaneously. As working memory load rises, the active neurons in the parietal lobe increasingly inhibit the activity of surrounding cells. The inhibition of the inter-neuronal impulses eventually becomes so strong that it prevents the storage of additional visual input, although it can be partly offset through the greater stimulation of the frontal lobes. This leads the researchers to suggest in their article that the frontal lobes might be able to regulate the memory capacity of the parietal lobes.
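The mechanism described above, items competing through mutual inhibition with extra top-down drive raising the ceiling, can be caricatured in a deliberately crude rate model. Everything below (the weights, the threshold, the tanh response) is invented purely for illustration and bears no relation to the actual model in the PNAS paper:

```python
import math

W_EXC = 1.5    # recurrent self-excitation of each stored item (invented)
W_INH = 0.2    # inhibition each active item exerts on every item (invented)
STORED = 0.35  # arbitrary readout threshold for "still in memory"

def items_retained(k, boost=0.0, steps=300):
    """Load k items at a common rate, remove the stimulus, and count
    how many stay active under mutual inhibition. `boost` is tonic
    top-down drive, a stand-in for extra frontal activation."""
    rates = [0.5] * k
    for _ in range(steps):
        total = sum(rates)
        rates = [math.tanh(W_EXC * r - W_INH * total + boost) for r in rates]
    return sum(r > STORED for r in rates)

def capacity(boost=0.0):
    best = 0
    for k in range(1, 10):
        if items_retained(k, boost) == k:
            best = k
    return best

print(capacity())           # 2: inhibition from a third item kills persistence
print(capacity(boost=0.3))  # 6: extra drive raises the ceiling
```

Crude as it is, the toy reproduces the qualitative story: inhibition grows with load until storage fails, and adding tonic drive (the "frontal" term) pushes the capacity higher within the two-to-seven range.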
"The model predicts, for instance, that increased activation of the frontal lobes will improve working memory," continues Dr Edin. "This finding was also replicable in follow-up fMRI experiments on humans. Working memory is a bottleneck for the human brain's capacity to process information. These results give us fresh insight into what the bottleneck consists of."
March 31, 2009
Statistical road safety
18th century math, 21st century road safety
What possible connection could there be between an eighteenth-century British Presbyterian minister and preventing road traffic accidents in Hartford, Connecticut? Everything, according to a report in the International Journal of Intelligent Systems Technologies and Applications.
Thomas Bayes (1702-1761) was not only a church minister but also the mathematician who formulated a probability theorem that can be used to solve problems that stymie conventional statistics. The crux of his theorem can be stated as follows:
"The probability of any event is the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening."
This inverse probability, which hinges not only on the concept of "heads or tails" but on expectations, has been used previously to develop powerful email spam filters, to improve medical diagnostics, and most recently to improve road safety in Hartford.
Clara Fang at the Department of Civil Engineering, University of Hartford, and colleagues Joseph Rimiller and Najib Habesch of Urban Engineers, Inc, also in Hartford, explain how Hartford was the first US city to develop and implement a comprehensive citywide traffic calming masterplan. Included in the plan were the building of mini-roundabouts, curb extensions, speed tables, parking chicanes, bicycle lanes and the implementation of "road diets" in which excess lanes were eliminated from particular streets.
The team has now analyzed the before and after crash data for the traffic calming devices to determine which were most effective in reducing accidents. Rather than using conventional data analysis to look at the before and after statistics, the team used an empirical Bayesian method. This allowed them to predict the expected crash rate at specific sites around the city assuming that no safety and traffic-calming measures had been put in place. Study site crash rates with and without treatment for the same time period were compared. They then also compared this with a conventional analysis to weed out any analytical biases.
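In outline, an empirical Bayes safety estimate blends a site's observed crash count with the count predicted for similar untreated sites, weighting by how much such sites vary. The sketch below is the generic textbook (Hauer-style) version, not the authors' exact implementation, and the numbers are hypothetical rather than Hartford's data:

```python
def empirical_bayes_estimate(observed, predicted, years, dispersion):
    """Blend a site's observed crash count with the count predicted by
    a safety-performance function for comparable sites. `dispersion`
    is the overdispersion parameter of that function: the noisier the
    reference sites, the more weight the observation gets."""
    predicted_total = predicted * years
    weight = 1.0 / (1.0 + dispersion * predicted_total)
    return weight * predicted_total + (1 - weight) * observed

# A hypothetical site: 12 crashes observed over 3 years, while the
# model for comparable untreated sites predicts 2.5 crashes per year.
eb = empirical_bayes_estimate(observed=12, predicted=2.5, years=3, dispersion=0.2)
print(round(eb, 2))  # 10.2 -- between the prediction (7.5) and the observation (12)
```

The point of the blend is to correct for regression to the mean: a site with an unusually bad run of crashes is pulled back toward what similar sites experience, so a treatment isn't credited for improvement that would have happened anyway.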
Their analysis revealed that all implemented safety features led to fewer crashes, but also that the higher the "before" crash rate and the greater the traffic demands, the greater the reduction. In other words, sites that had few crashes before also had few crashes after. However, there were significant reductions in crash rates on three streets, Franklin, Main and Wethersfield, which the team attributes largely to the reduction in speed achieved by putting these roads on a road diet.
"Speeds at the study sites were reduced by up to six miles per hour, with an average reduction of three to four miles per hour," the team explains. "The road diets also prevented aggressive drivers from passing more prudent drivers, thus eliminating one potential source of conflict."
It would seem that an eighteenth-century clergyman has plenty to say about solving a twenty-first century problem.
"A simplified Empirical Bayesian method to safety evaluation of traffic calming treatment for urban road systems"
March 31, 2009
Simple Method Devised To Predict Rises In Ebro River Level
ScienceDaily (Mar. 31, 2009) — A team of researchers from the University of Castilla-La Mancha (UCLM) has developed a new mathematical method to easily predict rises in the level of the Ebro River in Zaragoza based on water flow recorded in Castejón (Navarre). The system has a 97.5% success rate for 20-hour predictions.
"The objective of this work is to prevent flooding in Zaragoza, giving sufficient advance warning to minimise damage and improve the management of the river," Diego J. Pedregal, a co-author of the study and a professor at the UCLM's Higher Technical School of Industrial Engineering, tells SINC.
The study, to be published in April in the journal Environmental Modelling & Software, looks at the problem of how to model and predict the flow and levels of the Ebro in the regional capital of Aragon, based on analysis of a 110km section that starts in the town of Castejón, in Navarre.
Pedregal tells SINC that there are other predictive models, "but these are much more complex, determinist, and include a wide range of variables", while the new model developed, while being less technically conventional, "is more flexible, and is so easy to use that it can be done on any PC".
The researchers developed the predictive system in two stages. First of all they used a mathematical "transfer" function to determine the scale of the model and get an approximate idea of the time delay before the water flowing through Castejón has an impact on water levels in Zaragoza.
"But the river's dynamics don't behave in a linear way," points out Pedregal. The professor explains, for example, that the water flows more quickly when it is at an average or low level, that storms do not have the same effect on a dry river bed (which absorbs more water) as a wet one, and that rapid thawing of snow introduces slushy water into the river, which behaves in a different way than liquid water.
All these factors, along with the physics of the fluid itself, mean that river dynamics are not linear, so the researchers had to use mathematical techniques that reflect them, producing a specific algorithm to incorporate the impact of all these variables.
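The linear first stage, finding the delay at which upstream flow best explains the downstream level, can be sketched on synthetic data. The 18-hour delay below is an invented figure for the demonstration, not a number from the study:

```python
import math
import random

def best_lag(upstream, downstream, max_lag=30):
    """Pick the delay (in hours) at which upstream flow correlates
    best with downstream level -- the linear first stage of a
    transfer-function model, before any nonlinear corrections."""
    n = len(downstream)
    def corr(lag):
        xs = upstream[:n - lag]
        ys = downstream[lag:]
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / math.sqrt(sxx * syy)
    return max(range(1, max_lag + 1), key=corr)

# Synthetic hourly series: the downstream level echoes upstream flow
# 18 hours later, plus measurement noise.
rng = random.Random(3)
flow = [10 + 5 * math.sin(t / 12) + rng.gauss(0, 2) for t in range(500)]
level = [0.2 * flow[max(t - 18, 0)] + rng.gauss(0, 0.1) for t in range(500)]
lag = best_lag(flow, level)
print(lag)  # 18 -- the planted delay is recovered
```

A real river model would then layer the nonlinear corrections (dry versus wet bed, snowmelt, stage-dependent speed) on top of this recovered delay.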
The predictive ability of the model was validated against information from the historical database of the Hydrographic Confederation of the Ebro River, yielding an average error rate of 2.5% over a 20-hour prediction timescale.
March 22, 2009
A Mathematician's Journey
Richard Karp, the 2009 Kyoto Prize Laureate in Advanced Technology,
shared how he became interested in algorithms and the problems they solve
during this year's Kyoto Prize Symposium.
By Lorena Nava Ruggero
More than 600 people listened as the 2009 Kyoto Prize Laureate in Advanced Technology, Richard Karp, shared the story of how he first became interested in mathematics and computer science.
Growing up as the son of a math teacher in a Jewish enclave of Dorchester, Mass., Karp remembered his initial inspiration – his father drawing freehand a perfect circle on a chalkboard. From this moment on, Karp would find his intellectual curiosity continuously drawn towards mathematics and the problems that could be solved with it.
Throughout his presentation, Karp shared many memories with the audience, including his experiences teaching at University of California, Berkeley in the 1960s.
During that turbulent time, Karp recounted holding classes in his home because student protests forced the campus to close. He also recalled bailing out another faculty member who had been arrested for participating in the protests.
'The Mysteries of Algorithms'
But his presentation, titled "The Mysteries of Algorithms," wasn't just a simple recollection of stories; he and other speakers also shared the important role played by mathematics and algorithms.
"Algorithms underlie all the technology that enriches our 21st century lives," said Tom Scott, SDSU vice president of research and graduate dean, in his introduction of Karp.
While Karp's theory of NP-completeness has influenced applications ranging from the computer on a Lexus automobile to the Human Genome Project, algorithms fuel the technology used every day - from the Internet to cell phones, according to Andrew Viterbi, co-founder of QUALCOMM and the invited scholar at today's event.
The math of everyday living is hardly considered as one makes a phone call or Googles a phrase, but after today's presentation, there's no denying its utility.
About the Kyoto Prize Symposium
San Diego's Kyoto Prize Symposium is a three-day celebration of the lives and works of those receiving the Kyoto Prize, a lifetime achievement award presented annually to individuals and groups worldwide. Two other laureates, honored for their work in the basic sciences and in the arts and philosophy, speak at University of California San Diego and University of San Diego, respectively.
This event is part of SDSU Month, an annual celebration of SDSU and its "Minds that Move the World." For more SDSU Month events, fun giveaways and other information, please visit SDSUMonth.com.
March 22, 2009
The human brain is on the edge of chaos
Cambridge-based researchers provide new evidence that the human brain lives "on the edge of chaos", at a critical transition point between randomness and order. The study, published March 20 in the open-access journal PLoS Computational Biology, provides experimental data on an idea previously fraught with theoretical speculation.
Self-organized criticality (where systems spontaneously organize themselves to operate at a critical point between order and randomness), can emerge from complex interactions in many different physical systems, including avalanches, forest fires, earthquakes, and heartbeat rhythms. According to this study, conducted by a team from the University of Cambridge, the Medical Research Council Cognition & Brain Sciences Unit, and the GlaxoSmithKline Clinical Unit Cambridge, the dynamics of human brain networks have something important in common with some superficially very different systems in nature. Computational networks showing these characteristics have also been shown to have optimal memory (data storage) and information-processing capacity. In particular, critical systems are able to respond very rapidly and extensively to minor changes in their inputs.
"Due to these characteristics, self-organized criticality is intuitively attractive as a model for brain functions such as perception and action, because it would allow us to switch quickly between mental states in order to respond to changing environmental conditions," says co-author Manfred Kitzbichler.
The researchers used state-of-the-art brain imaging techniques to measure dynamic changes in the synchronization of activity between different regions of the functional network in the human brain. Their results suggest that the brain operates in a self-organized critical state. To support this conclusion, they also investigated the synchronization of activity in computational models, and demonstrated that the dynamic profile they had found in the brain was exactly reflected in the models. Collectively, these results amount to strong evidence in favour of the idea that human brain dynamics exist at a critical point on the edge of chaos.
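A standard toy model of criticality (not the authors' brain model) is a branching process: each "firing" unit triggers, on average, a certain number of successors. Tuned to exactly one successor per unit, the system sits at the critical point and produces cascades of wildly varying size, the hallmark behavior the article describes:

```python
import random

def avalanche_size(branching, rng, cap=10_000):
    """Total number of activations triggered by one seed event when
    each active unit independently activates `branching` others on
    average (binomial offspring, capped to keep the loop finite)."""
    active, total = 1, 1
    while active and total < cap:
        # Each active unit excites 0, 1, or 2 successors with mean `branching`.
        children = sum(rng.random() < branching / 2 for _ in range(2 * active))
        active = children
        total += children
    return total

rng = random.Random(11)
sub = [avalanche_size(0.7, rng) for _ in range(5000)]   # subcritical: cascades die quickly
crit = [avalanche_size(1.0, rng) for _ in range(5000)]  # critical: heavy-tailed cascade sizes
print(max(sub), max(crit))  # critical avalanches reach vastly larger sizes
```

Below the critical point, responses to a small input stay small; exactly at it, the same small input can occasionally sweep through a large fraction of the system, which is why critical dynamics are linked to rapid, extensive responses to minor changes.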
According to Kitzbichler, this new evidence is only a starting point. "A natural next question we plan to address in future research will be: How do measures of critical dynamics relate to cognitive performance or neuropsychiatric disorders and their treatments?"
The published article is available at http://dx.plos.org/10.1371/journal.pcbi.1000314.
March 22, 2009
High-powered mathematicians take on free will
Mondays, March 23 through April 27, 2009, 8 p.m. · A02
Conway, a major figure, to give lectures on ideas formed with Kochen
Ten years ago, Princeton mathematician John Conway wowed standing-room-only crowds with a series of public math lectures. Among many things, he spoke about ancient Greek geometers and his modern discovery of surreal numbers. He threw in some math tricks, too. Audiences flocked to hear the joys of math recounted by one of its masters and left enthralled by Conway's intellectual wizardry.
On Monday, March 23, Conway -- who has fought his way back to health from a 2006 stroke -- will launch another lecture series that will once again place his mind and legendary personality squarely in the spotlight.
This intellectual journey promises to be different.
This time, the presentations will have one focus. Working with his longtime colleague, Princeton mathematician Simon Kochen, Conway is set on explaining to the University community and the public over six weeks the tenets of their "Free Will Theorem."
The gist of it is this: They say they have proved that if humans have free will, then elementary particles -- like electrons -- possess free will as well.
"You want to know how the world works -- there's this sense that the present state is somehow derived from the moment before it," said Conway, an extrovert's extrovert who, in addition to studying fundamental mathematical subjects like symmetry, excels in card tricks and memory games. "We were trying to understand how it happens, and we suddenly realized there was no way of explaining successive states because the previous state could give rise to two different positions."
By saying "two different positions," Conway means that a particle is as free to zip one way or another as he is to decide whether or not to drop a cup he is clasping.
The lectures on consecutive Mondays at 8 p.m. will start on March 23 and end on April 27 in A02 McDonnell Hall.
"Given that John and Simon have been squeezing in work devising their Free Will Theorem between games of Go and tennis (respectively) for more than a decade, I should think they are very happy and relieved to now get it out into the world and generating debate," said Siobhan Roberts, a Toronto-based author who is writing a book about Conway. She views the theorem as an important addition to Conway's work, but noted that it is unlike anything Conway has done before.
These high-powered mathematicians may occasionally act like grownup children when they are tackling math mysteries or tangling with undergraduates in the University's mathematics lounge (Conway's unofficial office). They are, however, deadly serious about this topic. They have packaged their arguments in an airtight mathematical theorem that rests on what they say are three unassailable axioms which happen to rhyme -- spin, fin and twin.
What's more, they believe their case can be made to an intelligent public. Unlike Conway's previous set of lectures, which touched on random subjects under the aegis of math, each of the forthcoming lectures will build on the previous ones in the series, much the way a theorem demonstrates truth through a logical chain of reasoning.
"It's not about theories anymore -- it's about what the universe does," said Kochen, a professor of mathematics and the associate chair of the Department of Mathematics. "And we've found that, from moment to moment, nature doesn't know what it's going to do. A particle has a choice."
The public should find their work interesting because the question of whether the behavior of particles is predetermined is linked to the centuries-old debate over the existence of free will, they said.
Conway was to have presented the theorem to the Princeton community in 2006 but was felled by a stroke a week before the presentations were scheduled to start. He and Kochen came up with the full theorem in 2004 while they were working on a geometry problem on a chalkboard in Kochen's fifth-floor office in Fine Hall. They have been collaborating for decades and have long enjoyed the give and take.
"As he and Simon were getting more and more into their free will discussions, sometimes they resembled two kids," said Conway's wife, Diana Conway, who observed first hand the evolution of the idea. "There were phone calls all day and evening, they'd rush to each other's offices and talk excitedly about whatever today's thought was."
Her husband, she said, is looking forward to the lecture series and the inevitable interaction with the audience. "When an idea of his is well received, it is like a shot of adrenaline for him," she said. "I really think he was born to be an educator."
The mathematicians are working on a forthcoming book describing the proof, to be published by Princeton University Press. Kochen convinced Conway to do the lectures this year.
The work grows out of an idea first conceived by Kochen and Ernst Specker in 1965.
The smallest particles inside an atom have a property known as "spin" (the "spin" axiom). Scientists have found that the spins of certain particles are correlated with the spins of others, a phenomenon known as "entanglement" (the "twin" axiom). However, an experimenter's choice of spin direction to measure cannot be communicated faster than the speed of light (the "fin" axiom).
Looking at this sequence of facts, the mathematicians have been able to extract the conclusion that, if a human experimenter can make decisions independently of past events, then the particle can also make a free choice.
Conway, the John von Neumann Professor in Applied and Computational Mathematics, has been a Princeton faculty member since 1987. His fascination with games and puzzles has led him to important discoveries in areas of mathematics from number theory to geometry. He also is well known as the inventor of the Game of Life, in which checker-like objects replicate themselves and form patterns on a grid.
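The Game of Life mentioned above runs on two simple rules: a live cell survives if it has two or three live neighbours, and a dead cell comes alive if it has exactly three. A minimal sketch of one generation step:

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a set of (x, y) cells."""
    # Count, for every cell near a live one, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" -- three cells in a row -- oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

Patterns like this one are the "checker-like objects" that replicate and form patterns on the grid.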
Kochen, whose special interests are mathematical logic and applying techniques of logic to other fields, joined the Princeton faculty in 1967. In particular, he has explored the connections between logic and algebra and number theory. He has also published papers in quantum mechanics.
March 22, 2009
Retired Winthrop professor solves Sudoku
By Mary Jo Talbot Balasco - email@example.com
Melanie Brandon enjoys doing Sudoku puzzles. "The way I did it the first time was way harder than it needs to be," said Brandon, a data entry clerk and receptionist at Winthrop University.
Retired Winthrop University computer professor J.F. Crook has developed an algorithmic method for solving any Sudoku puzzle. The method is discussed in his paper "A Pencil-and-Paper Algorithm for Solving Sudoku Puzzles," which appears in this month's Notices of the American Mathematical Society.
"The puzzle Sudoku has become the passion of many people the world over in the past few years," Crook, who declined to be interviewed about his finding Monday, wrote in the paper.
Brandon reviewed Crook's method for solving the puzzle on Monday. "This is how I do the puzzle," Brandon said. "I just didn't know it was considered an algorithm."
Sudoku is a numbers puzzle based on logic. The puzzle, whose grid is divided into nine sub-squares of three columns and three rows, is commonly found in newspapers, books and online.
The challenge is to use the digits 1 through 9 so that no number is repeated in any row, column or sub-square.
In the puzzle, some numbers are given and the rest must be filled in.
What Crook has done in his paper is to codify what people do unconsciously when they do the puzzle, said M. Ram Murty, professor of mathematics and Queen's Research Chair at Queen's University in Ontario, Canada. He has analyzed the steps, Murty said.
"It's an important mathematical contribution," he said.
To complete the puzzle, a person must use a process of elimination and deductive reasoning, Murty said.
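That elimination-and-deduction process can be sketched in code. The following is a simplified illustration, not Crook's pencil-and-paper algorithm: keep a set of legal candidates for each empty cell, always work on the most constrained cell, and backtrack when a guess leads to a dead end.

```python
def solve(grid):
    """grid: 9x9 list of lists, 0 marks an empty cell; solves in place."""
    def candidates(r, c):
        # Eliminate digits already used in this row, column and sub-square.
        used = set(grid[r]) | {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
        return [d for d in range(1, 10) if d not in used]

    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True                      # every cell filled: solved
    # Deduce: attack the most constrained cell first, as a human solver does.
    r, c = min(empties, key=lambda rc: len(candidates(*rc)))
    for d in candidates(r, c):           # try only the legal digits
        grid[r][c] = d
        if solve(grid):
            return True
        grid[r][c] = 0                   # dead end: erase and try the next
    return False

puzzle = [[5, 3, 0, 0, 7, 0, 0, 0, 0],
          [6, 0, 0, 1, 9, 5, 0, 0, 0],
          [0, 9, 8, 0, 0, 0, 0, 6, 0],
          [8, 0, 0, 0, 6, 0, 0, 0, 3],
          [4, 0, 0, 8, 0, 3, 0, 0, 1],
          [7, 0, 0, 0, 2, 0, 0, 0, 6],
          [0, 6, 0, 0, 0, 0, 2, 8, 0],
          [0, 0, 0, 4, 1, 9, 0, 0, 5],
          [0, 0, 0, 0, 8, 0, 0, 7, 9]]
print(solve(puzzle))  # True
```

On most published puzzles the candidate elimination alone forces nearly every cell, and the backtracking step fires only rarely.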
A Sudoku puzzle uses different skills from those of a crossword puzzle, Murty said. In a crossword puzzle, a person must have cultural, factual and verbal knowledge. To complete a Sudoku puzzle, a person must only know the numbers — no mathematical knowledge is necessary, he said.
The puzzle was invented in 1979 by Howard Garns, an Indianapolis architect. In the early 1980s, a Japanese puzzle magazine company introduced the puzzle to Japan, and it was named Sudoku. In Japanese, "Sudoku" is loosely translated to mean "single number."
The puzzle became an international sensation in 2005, when Wayne Gould, a judge from New Zealand, convinced the Times of London to publish it.
Gould developed a computer program to generate the puzzle.
Murty said the puzzle is popular because it gives the brain a quick mental workout.
"I see a lot of people doing them in airports and train stations," said Murty, who in 2007, along with a colleague, wrote a paper on the mathematical analysis of the Sudoku puzzle.
People solve Sudoku puzzles because it's human nature to like a challenge, said Will Shortz, crossword editor for the New York Times.
"If you are a puzzle person and you see empty squares, you will find yourself compelled to fill them in before you can turn the page," said Shortz, an author of many Sudoku puzzle books.
Shortz also said some people find the puzzle fascinating because they can spend months on it and never master the technique.
"The mystery of figuring it out keeps you solving," Shortz said.
Some of the puzzles are more difficult than others.
A puzzle of average difficulty takes 20 to 30 minutes to complete, while a more difficult one could take an entire day, Murty said.
Doing the puzzle could have psychological benefits, Murty said.
"It energizes the brain," he said. "Any time you give the mind a challenge or a problem and try to work it out, neural pathways in the brain are activated." The puzzle also can be a confidence booster.
Over time, as people work more puzzles, they begin to find patterns and start filling in numbers.
"They make progress in little steps, and that is encouraging," Murty said. "It has a positive psychiatric dimension."
Mary Jo Talbot Balasco | 803-329-4067 | firstname.lastname@example.org
March 22, 2009
Random Network Connectivity Can Be Delayed, But With Explosive Results
March 12, 2009
Barbara Liskov wins Turing award
Professor Liskov is the second woman to be awarded the prize
Professor Liskov was the first US woman to be awarded a PhD in computing, and her innovations can be found in every modern programming language.
She currently heads the Programming Methodology Group at the Massachusetts Institute of Technology.
The award, often referred to as the "Nobel Prize for computing", includes a $250,000 (£180,000) purse.
Professor Liskov's design innovations have, over the decades, made software more reliable and easier to maintain.
She has invented two computer programming languages: CLU, a forerunner of modern object-oriented languages, and Argus, a distributed programming language.
Liskov's groundbreaking research underpins virtually every modern computer application, forming the basis of modern programming languages such as Java, C# and C++.
One of the biggest impacts of her work came from her contributions to the use of data abstraction, a method for organising complex programs.
The prize, named after British mathematician Alan Turing, is awarded annually by the Association for Computing Machinery.
ACM president Professor Dame Wendy Hall said of Liskov: "Her elegant solutions have enriched the research community, but they have also had a practical effect as well.
"They have led to the design and construction of real products that are more reliable than were believed practical not long ago," she added.
Professor Liskov will be presented with the award in June.
March 12, 2009
ACM Turing Award Goes to Creator of Influential Innovations
March 12, 2009
DOE's Mathematical Ineptitude Could Set Carbon Capture Back 10 Years
Jay Yarow| Mar. 11, 2009
Last year the Energy Department withdrew funding from FutureGen, a near-zero-emission coal plant that focused on carbon capture and sequestration. The logic at the time was that the cost of the plant had grown too large, nearly doubling. Too bad that logic was based on bad math.
NY Times: According to the report, in calculating the costs of the project, the Energy Department mistakenly compared two numbers that should not have been used together. One cost estimate was made in so-called constant dollars, reflecting the purchasing power of a dollar in 2005, and the other in dollars as they would have been spent over the following few years, worth less each year because of inflation.
The Bush administration said the projected cost had nearly doubled, to $1.8 billion from $950 million; the auditors said it had gone to $1.3 billion, up 39 percent.
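The arithmetic of the mix-up is simple to reproduce. In the sketch below, the dollar figures are the ones reported above; treating the audited figure as being in the same constant 2005 dollars as the original estimate is the comparison the auditors said should have been made.

```python
# Comparing like with like: both figures in constant 2005 dollars.
original_2005 = 0.95   # original estimate, billions of constant 2005 dollars
audited_real = 1.32    # auditors' figure in the same constant dollars

real_growth = (audited_real - original_2005) / original_2005
print(f"real cost growth: {real_growth:.0%}")  # ~39%, matching the audit

# The mistake: comparing the constant-dollar estimate against the
# nominal, as-spent figure, which is inflated by design.
nominal_future = 1.8   # billions of dollars as they would actually be spent
apparent_growth = (nominal_future - original_2005) / original_2005
print(f"apparent growth from mixing dollar types: {apparent_growth:.0%}")  # ~89%
```

Mixing the two dollar types made a roughly 39 percent real increase look like a near doubling.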
The government was going to pick up 74% of the tab on the project, with private and foreign investors handling the rest. Now that the correct numbers are in, Energy Secretary Steven Chu says he'll consider looking into the project once again.
Canceling the project, which already has $174 million in government funding, could hold carbon capture technology back by 10 years.
March 12, 2009
Math is omnipresent and beautiful
March 12, 2009
Prominent Mathematician to Give Annual Sulski Memorial Lecture at Holy Cross
WORCESTER, Mass. – Rev. Paul A. Schweitzer, S. J., a member of the College of the Holy Cross class of 1958 and a professor of mathematics at the Pontificia Universidade Catolica (PUC), Rio de Janeiro, Brazil, will give the 16th annual Leonard C. Sulski Memorial Lecture in Mathematics on Tuesday, March 24 at 8 p.m. in room 519 of the Hogan Campus Center at the College. The lecture, titled "Surfaces and 3-dimensional Manifolds: How Geometry Comes to the Aid of Topology," is free and open to the public.
After graduating from Holy Cross, Fr. Schweitzer earned his Ph.D. in mathematics from Princeton University and has been a member of the faculty at PUC since 1971. In 1983, Fr. Schweitzer was awarded an honorary degree from Holy Cross for his distinguished research in topology, and his work in developing the mathematics program at PUC.
Fr. Schweitzer will discuss the recent work of the Russian mathematician G. Perelman in proving Thurston's geometrization conjecture. A consequence of Thurston's conjecture is one of the most famous problems in mathematics, the Poincaré conjecture, for which Perelman has received much acclaim. Fr. Schweitzer's lecture will describe this problem and its solution in a manner accessible to a general mathematical audience.
The annual lecture series is a tribute to Professor Sulski, who taught in the mathematics department at Holy Cross from 1965 until his untimely death from leukemia in 1991.
The lecture will be preceded by a dinner co-sponsored by the department of mathematics and computer science and the Mathematical Association of America in the Hogan Campus Center. For more information about the dinner or the event contact Tom Cecil, professor of mathematics at Holy Cross and organizer of the event, at 508-793-2719 or e-mail email@example.com before March 16.
For additional information contact Cristal Perriello at 508.793.2419
March 12, 2009
Rise of the robogeeks
03 March 2009 by Michael Brooks
In December, philosopher and artificial intelligence expert Aaron Sloman announced his intention to create nothing less than a robot mathematician. He reckons he has identified a key component of how humans develop mathematical talent. If he's right, it should be possible to program a machine to be as good as us at mathematics, and possibly better.
This is no mad quest, insists Sloman, of the University of Birmingham in the UK. "Human brains don't work by magic, so whatever it is they do should be doable in suitably designed machines," he says.
Sloman's creature is not meant to be a mathematical genius capable of advancing the frontiers of mathematical knowledge: his primary aim, outlined in the journal Artificial Intelligence (vol 172, p 2015), is to use such a machine to improve our understanding of where our mathematical ability comes from.
Nevertheless, it is possible that such a robot could take us beyond what mathematicians have achieved so far. Forget robot vacuum cleaners and android waitresses; we're talking about a machine that could spawn a race of cyber-nerds capable of creating entirely new forms of mathematics.
The field of artificial intelligence has promised much before, of course. Early researchers thought it might open a fast-track to understanding consciousness, and there were claims that artificially intelligent computers and robots would change the world. The truth has been more prosaic. AI has done some clever things, such as give us great chess players and voice recognition software, but it hasn't delivered a revolution.
But when it comes to mathematics, we can't rule one out yet, says Alison Pease, who researches the philosophy of mathematics at the University of Edinburgh, UK. Pease teaches computers to do mathematics using AI programs, and thinks a computer really could astonish its programmer with a new mathematical insight. "Ours hasn't yet, but there is no reason why one shouldn't in the future," she says.
The first concrete step towards this scenario came with a program written by Simon Colton, now at Imperial College London. The program was named HR, in honour of the mathematicians Godfrey Harold Hardy and Srinivasa Ramanujan. It looked for "interesting" sequences of numbers (New Scientist, 24 February 2001, p 13).
Some of HR's discoveries have even been published - and HR, rather than Colton, got the credit. Though they might not look like cutting-edge advances, they could yet prove important. "I always refer to HR's work in number theory as recreational mathematics, but things that look insignificant can end up being hugely significant and interesting," Colton says.
Pease and her colleagues Alan Smaill and Markus Guhe have recently taken things further. In their Edinburgh computing laboratory they have been running virtual mathematics conferences, populated entirely by digital mathematicians (see "Reinventing the conjecture"). So where might that lead?
All the way to significant new mathematics, Sloman hopes. His idea is that our key mathematical capabilities are formed in childhood. So rather than engineering a fully fledged mathematician's brain, Sloman thinks we should build a robot with a child-like brain and let it grow into its mathematical destiny.
There's just one problem. How do we know which of our childhood capabilities equip us for a life of juggling numbers?
Sloman is busy gathering clues. The answer, he reckons, lies in the spatial awareness skills that children must acquire in order to negotiate their world: skills such as knowing that a toy train pushed into a tunnel will come out the other side. Or that a jigsaw puzzle piece fits its gap only when correctly oriented. Or that the number of toys on the sofa does not depend on the order in which you count them.
From the minds of babes
You might be surprised to learn, for instance, that you grasped the topological concept called "the transitivity of containment" when you were still a toddler. Stacking cups, one inside the other, you learned that the small cup would fit not only in the medium-sized cup, but also inside the big one.
Transitivity of containment, like other geometrical and topological concepts, is learned through experience. "There are hundreds, if not thousands more examples of things a child learns empirically, that are later seen to be theorems in topology, geometry and arithmetic," Sloman says.
At some point, children make that jump for themselves. As toddlers, we soon translate our experiences into general theorems which we use to make predictions.
Take the train-through-a-tunnel example. By repeated experiences like this, toddlers learn the basic properties of rigid rods. That's why a 3-year-old carrying a long broom handle can negotiate a narrow corridor, turn a corner at the end without getting the broom handle caught in the vertical bars of a stair-gate, then make adjustments so that the handle will go through the next doorway. "There is a switch from learning empirically to realising it has 'simply got to be like that'," Sloman says.
And here is the key to the emergence of the mathematical mind. "The mechanisms that make that possible in a child are related to what makes it possible for them to go on to become a mathematician," Sloman says. "A lot of abstract maths has its roots in our ability to think about space and time, processes, and interactions between processes and structures."
Sloman has gone back to basics, to watch how children learn to navigate the world around them. He is building an archive of observations of children performing pseudo-mathematical tasks. These navigational and object manipulation skills - or at least the ability to acquire them quickly - must be encoded in the genome, Sloman reckons. And that means they could be encoded in a machine.
Sloman is still a long way from designing his robot toddler. Once he has catalogued the abilities of children at various stages of development, he still has to work out how to understand the mathematical implications of those abilities, then represent them in some form of computer code. "Information needs to be encoded in some form in order to be usable," he says. The gargantuan scale of the task means his aims are necessarily modest: at this stage he is simply trying to show a link between spatial manipulations and the basics of mathematics. Anything more would be a bonus. But just how big could that bonus be? Could a robot mathematician really do something interesting?
"In principle, yes, absolutely," Pease says. But, she adds, the story-so-far tempers her optimism. "Of all the scientific and mathematical discovery programs I've looked at, nothing has yet made a big discovery." At the very least, she says, that means there is a long way to go.
Colton thinks there is every reason to believe computers could produce something interesting to mathematicians. "Software is already producing theorems of value to maths," he points out. "Not of huge value, I admit - but then the average student or mathematician isn't producing anything of huge value either."
He and his team are convinced that computers can be genuinely creative. "Creativity is a very loaded word: people like to think it's a uniquely human attribute," he says. "The fact is, computers doing maths are more likely to be creative than, say, an undergraduate student, in many ways."
Others are sceptical of this view. Computers are a useful tool, says Rafael Núñez, an expert on mathematical cognition at the University of California, San Diego, but the sense that computers can invent mathematics is an illusion. Though it looks like we can make progress by programming machines to do mathematics, he reckons there can be nothing in these machines that isn't pre-ordained by human mathematical concepts. "For me, it's like computing the decimal places of pi," Núñez says. "Once we have decided what the right rules are, we're just using the computer to crunch numbers."
Sloman thinks Núñez's view is too narrow. He points to "evolutionary algorithms" as a reason for optimism. This innovation allows a computer to evolve its own programs by producing lots of them, testing them against a goal criterion, and then selecting and "interbreeding" the best ones. It has allowed computers to do things that nobody programmed them to do. "In some cases no human even knows how they do what they do," Sloman says. Aerospace and automobile designers have been using evolutionary algorithms since the late 1980s to optimise aircraft parts and streamline their designs. Even city traders are using them to buy and sell shares (New Scientist, 28 July 2007, p 26).
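The produce-test-select-interbreed cycle can be sketched in a few lines. This toy version evolves a bit-string of all 1s; the goal, population size and mutation rate are illustrative choices, not anything from Sloman's work.

```python
import random

GOAL_LEN, POP, GENS = 20, 30, 200

def fitness(bits):
    return sum(bits)                      # goal criterion: count of 1s

def crossover(a, b):
    cut = random.randrange(1, GOAL_LEN)   # "interbreed" two parent programs
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

# Produce lots of random candidates, then repeat: test, select, interbreed.
population = [[random.randint(0, 1) for _ in range(GOAL_LEN)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]       # select the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # typically reaches 20 (all 1s)
```

Nobody writes down how to build the winning bit-string; it emerges from selection pressure alone, which is the sense in which evolved solutions can surprise their programmers.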
Evolution has a few million years head start on us in developing brilliant mathematicians, of course, but at least we're now in the race. "Our big discovery would be how do we do mathematics, rather than how do we write a program that can generate really new mathematics," says Pease. "But hopefully one would lead on from the other."
Reinventing the conjecture
The traditional view of mathematics sees it as a set of some eternally existing rules that describe the universe. Doing maths involves exploring this abstract, ethereal domain.
Though appealing to many, this notion of mathematicians as intrepid explorers is nothing more than a romantic myth, according to Alison Pease of the University of Edinburgh, UK. "Maths is not discovery," she says. "It's a thing that we invent."
It is something that her computers can invent too, she insists. Pease runs an AI program called HRL, which puts together "agents" in a student-teacher relationship.
The students are programmed to take some input information, make inferences from it and try to assess just how "interesting" those inferences are. If sufficiently interesting, the teacher gets involved, calling a group brainstorm designed to develop the ideas further.
One of HRL's early successes was the independent invention of a mathematical proposition called Goldbach's conjecture. One of the students was given the concept of integers and divisors, and instructed to use these to play around with the integers 1 to 10, looking for interesting relationships. A second student had the same concepts and instructions, but played with the integers 11 to 20.
Student two generated two new concepts: "even numbers" and "the sum of two primes". Then it generated a conjecture: that all even numbers can be expressed as the sum of two primes. It thought this was interesting, and sent its work to the teacher to be placed on the agenda for discussion.
The response was positive. "The teacher sent a request for modifications to this conjecture, and student one found the counterexample," Pease says. That counterexample is the number 2: the conjecture was modified to "all even numbers except 2 are the sum of two primes".
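The modified conjecture is easy to check computationally for small numbers, as in this short sketch (the helper names are mine, not HRL's):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return a pair of primes summing to n, or None if there isn't one."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

print(goldbach_pair(2))    # None -- the counterexample student one found
print(goldbach_pair(28))   # (5, 23)
print(all(goldbach_pair(n) for n in range(4, 1000, 2)))  # True
```

Checking every even number up to 1,000 is trivial for a machine; proving the conjecture for all even numbers is the part that has resisted mathematicians since 1742.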
The fact that Christian Goldbach came up with this still unproven conjecture in 1742 makes it a little less impressive, but the point is made. Even if computers are a few centuries behind, it seems that machines really can do what human mathematicians do.
Michael Brooks is a writer based in Lewes, UK, and author of 13 Things That Don't Make Sense (Profile)
March 01, 2009
Algorithms run modern life
It's all done by algorithms — jealously guarded mathematical recipes that increasingly dictate how we lead our lives
By Bobbie Johnson
THE GUARDIAN, LONDON
Sunday, Mar 01, 2009, Page 9
When you last used your supermarket loyalty card, or pulled up just as the traffic lights turned red, you probably weren't thinking much about math. Why would you? Most of us see mathematics as baffling school algebra that has no application in real life. But you probably meet hundreds of bits of complex math every day without realizing it: the discounts on your loyalty card are the result of deep data analysis, and those traffic signals run on an algorithm that determines when the lights will switch to green.
In fact, large parts of modern life are secretly underpinned by complex algorithms — the mathematical equivalent of cookery recipes, which take a piece of information and turn it into an action or decision. Algorithms are like computer programs or flowcharts — a sequence of steps that examines what is happening and comes to a conclusion. Take traffic lights: The computer controlling them asks a series of related questions. What time of day is it? When did the lights last go red? Has a pedestrian pushed the button at the crossing? The algorithm guides the computer, step by step, to change the lights.
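That question-by-question sequence can be written out as code. The rules below are purely illustrative, not a real signal controller's logic:

```python
def next_light(hour, seconds_since_red, pedestrian_waiting):
    """Step through the controller's questions and decide what to do."""
    # What time of day is it? Rush hour gets a shorter red phase.
    rush_hour = 7 <= hour < 10 or 16 <= hour < 19
    min_red = 20 if rush_hour else 40
    # Has a pedestrian pushed the button at the crossing?
    if pedestrian_waiting and seconds_since_red >= 10:
        return "green for pedestrians"
    # When did the lights last go red?
    if seconds_since_red >= min_red:
        return "green for traffic"
    return "stay red"

print(next_light(hour=8, seconds_since_red=25, pedestrian_waiting=False))
# -> green for traffic
```

Each input is examined in turn, and the answer to one question determines which question matters next — exactly the flowchart behaviour described above.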
Algorithms are pervasive, even controlling, in our lives. Book a low-cost flight and an algorithm will determine how much the tickets will cost, depending on supply and demand; arrive at the airport and air-traffic algorithms will determine which place in the queue your plane gets. Watch the weather forecast on TV and the predictions will have been fine-tuned by an algorithm; listen to the radio and the playlist may have been generated by one. Supermarkets, in particular, are a hotbed of algorithms. Almost every aspect of their operation — from deciding the order that products are stacked on the shelves to picking which special offers to run — is determined by a computer.
And since computers are increasingly dominant in our lives, algorithms are increasingly important — and nowhere is this more apparent than on the internet. In the online world, mathematical analysis isn't just important: the algorithm is king. Everywhere you turn online, companies are using algorithms in their quest for success. From Google's search results and Apple's music recommendations to Amazon telling you that "customers who bought this item also bought..." algorithms are at work.
"There is no way, with the size of the internet, that one can do exhaustive searches," says Marcus du Sautoy, professor for the public understanding of science at Oxford University, and one of Britain's leading mathematicians. "So you have to rely on mathematics to give you clever and fast ways to get information."
The British online supermarket Ocado, for instance, analyzes every detail of every activity — from the choices shoppers make to warehouse movements — to make sure there is enough stock in the right places, and to help the company plan future strategies.
"We're using complicated forecasting algorithms to predict demand," says Jon Rudoe, head of retail. "The customer sees nothing. It all looks very simple, but it's actually governed by complex mathematics in the background" — mathematics we can put to use because we now have technology that didn't exist a decade ago.
Since these recipes have helped internet companies cook up billions in profits, the precise details of the mathematical mechanisms are jealously guarded as among the companies' most valuable assets. Forget the recipe for Coke or the Colonel's blend of herbs and spices, these are the trade secrets of the 21st century. And wherever there are secrets, there are people desperate to unlock them. Around the world, countless hours and millions — perhaps billions — of dollars are spent trying to unravel the inner workings of the web's most powerful algorithms.
Among those searching for answers is Russell Davies, an advertising consultant from London. His fascination started when he decided to try to understand how to lift his book — a guide to Britain's greasy-spoon cafes called Eggs, Bacon, Chips and Beans — up Amazon's rankings. On the surface, the Amazon charts seem straightforward. They are largely based on sales, but there is also analysis of buying activity so that a sudden rush of orders can push a book, CD or DVD up the chart. The rewards for chart-topping are enormous: it can drive thousands more sales. But there are other benefits too, including (crucially) a shot at appearing in the algorithmically generated "customers who bought this also bought..." section, which is hugely popular. Davies thought he could try a variety of tricks to shift his book up the bestseller list and — if he succeeded — work out the secret ingredients behind Amazon's recipe.
"I started to think about what makes the number change and realized that it's a big secret, which made it more interesting," he says.
Over the course of several months it became a minor obsession; after all, for writers and publishers, even a small insight into Amazon's algorithm could be like discovering the Rosetta Stone.
"The Amazon ranking is the only feedback the average author gets on how their book is doing, and you're desperate for feedback," Davies says.
He wrote about the book in newspapers and online; he asked friends to buy copies and even bought a few himself, all the time closely monitoring whether his actions were reflected by his place in the rankings. But although he had a few moments when he seemed close to a breakthrough, he hasn't cracked the code.
"I learned that you can't really influence it," he concludes. "Thinking about it, it's not that surprising — you're in such a massive pool of data that a few sales here or there are just invisible."
Amazon is not alone in working hard to keep its methods confidential. EBay's reputation system is largely based on user feedback, but the company constantly works on adaptations aimed at stopping scammers from getting status they don't deserve. Apple, similarly, has mathematical secrets that are increasingly important to its business, among them a recent addition to the iPod, the new "Genius" function that creates playlists of similar songs.
On the surface, Genius looks like hi-tech wizardry. It takes a song you own and works out similar music that you might like to hear: Stick in a shiny happy song and you'll get 25 tracks of sunshine back; give it a mournful dirge and you'll end up with a mixtape to die to. Underneath all the whizziness, though, the Genius function is really about number crunching. Apple analyzes the song choices of millions of other iTunes users around the world and, based on this, is able to take a stab at which songs match your seed track. And it's profitable, too: Genius can encourage users to hear and buy songs from iTunes that they knew nothing about.
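Apple has never disclosed how Genius actually works, but the family of techniques the article describes, recommending items from the overlapping libraries of millions of users, can be sketched with co-occurrence counts and cosine similarity. Everything below (the song names, the library data, the scoring choice) is hypothetical, a minimal illustration rather than Apple's method:

```python
from collections import defaultdict
from math import sqrt

# Hypothetical listening data: user -> the set of songs in their library.
libraries = {
    "ana":   {"Sunshine", "Good Day", "Dirge"},
    "ben":   {"Sunshine", "Good Day"},
    "chloe": {"Sunshine", "Good Day", "Smile"},
    "dev":   {"Dirge", "Requiem"},
}

def cooccurrence(libraries):
    """Count, for every song and every ordered pair of songs,
    how many libraries contain them."""
    counts = defaultdict(int)   # song -> number of libraries containing it
    pairs = defaultdict(int)    # (a, b) -> libraries containing both
    for songs in libraries.values():
        for s in songs:
            counts[s] += 1
        for a in songs:
            for b in songs:
                if a != b:
                    pairs[(a, b)] += 1
    return counts, pairs

def similar(seed, libraries, k=3):
    """Rank other songs by cosine similarity of co-occurrence with the seed:
    songs that keep showing up in the same libraries score highest."""
    counts, pairs = cooccurrence(libraries)
    scores = {}
    for (a, b), n in pairs.items():
        if a == seed:
            scores[b] = n / sqrt(counts[a] * counts[b])
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(similar("Sunshine", libraries))  # "Good Day" co-occurs most strongly
```

With more users, the same counting scales up: the "playlist of similar songs" is just the top of this ranked list for the seed track.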
Another aspect of Apple's business it keeps quiet about is the way its iTunes charts are put together. While the charts appear largely based on sales and the number of times people listen, iTunes is also believed to use a so-called "decay algorithm" to give more weight to very recent activity. The company refuses to confirm how the system works, although little pieces of information have slipped out in the past. Its podcast chart, apparently, is "driven purely by an algorithm that looks at new subscriptions during the past week," for example.
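iTunes will not confirm the details, but one plausible reading of a "decay algorithm" is exponential weighting: each sale contributes a weight that halves after a fixed interval, so very recent activity dominates the chart. The half-life below is an assumption for illustration; Apple's actual constant, if one exists, is secret:

```python
from math import exp, log

HALF_LIFE_HOURS = 24.0              # assumed; Apple's real parameter is unknown
DECAY = log(2) / HALF_LIFE_HOURS    # per-hour decay rate

def chart_score(sale_ages_hours):
    """Each sale contributes exp(-DECAY * age): an hour-old sale counts
    almost fully, a week-old one is nearly invisible."""
    return sum(exp(-DECAY * age) for age in sale_ages_hours)

slow_burn = [i * 1.68 for i in range(100)]   # 100 sales spread over a week
new_hit = [i * 0.2 for i in range(60)]       # 60 sales in the last 12 hours

print(chart_score(new_hit) > chart_score(slow_burn))  # True: recency wins
```

Under this scheme a track with fewer total sales can still outrank a steady seller, which matches the observed behavior of songs rocketing up the chart on a burst of downloads.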
Nobbling the mathematics behind iTunes is almost impossible, but some people have learned how to subvert it the old-fashioned way. Little-known group Hit Masters rode high in the charts earlier this year with a karaoke cover of the recent Kid Rock hit All Summer Long. They managed a modern twist on the age-old tactic of riding somebody else's success, when they noticed that the original version was not available to buy on the US iTunes store. Their speedily produced cover appeared high in the search for the Kid Rock song, making them more likely to get downloaded; more downloads meant a higher placing in the chart, which itself generated more sales. It's a self-fulfilling cycle that eventually pushed Hit Masters to a chart high of 19 in the US Billboard Hot 100.
But the efforts of Amazon, eBay and Apple pale in comparison with those devoted to the most famous algorithm on the web: Google's, the mathematical engine that drives the web's most powerful company.
"It's not that Google is smarter at the maths, it just had a better recipe," says Danny Sullivan, editor of the SearchEngineLand.com Web site. "It had ingredients that other people didn't."
Given Google's dominance of web search, getting a high ranking there is a guarantee of more clicks — and more clicks means more money. With so much at stake, decoding Google's algorithm has become an industry in itself. Experts in search engine optimization know the basics of Google's operations — they are well documented — but the California company now spends vast amounts of time and energy trying to keep its formula secret.
"Large sites such as Amazon and Google are tweaking the internals of their systems almost constantly," says Jon Kleinberg, a professor of computer science at Cornell University in New York state. "The front end of Google looks the same to us, but behind the scenes they can be busily swapping out ideas."
Kleinberg believes the internet is making so much data available about how we relate to each other that we could soon be able to create algorithms for the social interactions we always thought were too complicated for math, such as controversy, disagreement or fame.
"They've always been fleeting, ephemeral, invisible and essentially unknowable," he says. "Now we can try to get in there and understand why, at a microscopic level."
All of which will have potential applications, both corporate and individual. Your PC could use algorithms to recognize exactly what document you are looking for, or to predict which news stories you might be interested in when you log on. Your mobile phone could recognize you are in a bad mood and screen your calls automatically, allowing only people that it has determined are your closest friends to get through.
Learning to sift through the vast amount of information being sent across the internet every second to divine people's feelings or intentions could, Kleinberg believes, be the next great technological leap. Mathematicians rule!
March 01, 2009
Lovely 'snowfakes' mimic nature, advance science
Computer modeled snowflake
by Madeline Fisher
Exquisitely detailed and beautifully symmetrical, the snowflakes that David Griffeath makes are icy jewels of art.
But don't be fooled; there is some serious science behind the UW-Madison mathematician's charming creations. Although they look as if they tumbled straight from the clouds, these "snowfakes" are actually the product of an elaborate computer model designed to replicate the wildly complex growth of snow crystals.
Four years in the making, the model that Griffeath built with University of California, Davis, mathematician Janko Gravner can generate all of nature's snowflake types in rich three-dimensional detail. In the January issue of Physical Review E, the pair published the model's underlying theory and computations, which are so intensive they are "right on the edge of feasibility," says Griffeath.
"Even though we've artfully stripped down the model over several years so that it's as simple and efficient as possible, it still takes us a day to grow one of these things," he says.
In nature, each snowflake begins as a bit of dust, a bacterium or a pollutant in the sky, around which water molecules start glomming together and freezing to form a tiny crystal of ice. Roughly a quintillion (one million million million) molecules make up every flake, with the shape dictated by temperature, humidity and other local conditions.
How such a seemingly random process produces crystals that are at once geometrically simple and incredibly intricate has captivated scientists since the 1600s, but no one has accurately simulated their growth until now. Griffeath and Gravner's model not only gets the basic shapes right, including fern-like stars, long needles and chunky prisms, but also fine elements such as tiny ridges that run along the arms and weird, circular surface markings.
Griffeath considers himself part of a long tradition of scientists, starting with famed mathematician and astronomer Johannes Kepler, who have marveled at snowflakes and simply wanted to understand them. But on the practical side, the model could help researchers better predict how various snowflake types in the clouds affect the amount of water reaching earth. Griffeath is now exploring that possibility with a UW-Madison meteorologist.
In the meantime, the project has given him a newfound appreciation for water, whose one-of-a-kind properties are what make snowflakes possible.
"Water is the most amazing molecule in the universe, pure and simple," he says. "It's just three little atoms, but its physics and chemistry are unbelievable."
March 01, 2009
Acclaimed Austrian mathematician Hlawka dies at 92
The Associated Press
Published: February 27, 2009
VIENNA: Acclaimed Austrian mathematician Edmund Hlawka, a major influence in modern number theory, has died at age 92.
Hlawka's family says the renowned mathematician died on Feb. 19 in Vienna. A cause of death was not given, and it was unclear why the announcement came more than a week after his death.
Scholars have hailed Hlawka as one of the most important mathematicians of the 20th century. He was a leading number theorist who did groundbreaking work on the geometry of numbers and analytic number theory.
Hlawka was born on Nov. 5, 1916. He was a visiting professor at Princeton University and the Sorbonne in Paris, and did most of his work at Vienna's Technical University.
Information on survivors and funeral arrangements was not immediately available Friday.
March 01, 2009
Young IU mathematician receives Sloan fellowship
BLOOMINGTON, Ind. -- A mathematician newly arrived at Indiana University has been named one of 118 young scientists, economists and mathematicians chosen as Alfred P. Sloan Research Fellows.
Ciprian Demeter, 33, will receive a $50,000 research grant from the Sloan Foundation, which since 1955 has made awards to young researchers who have gone on to win a cumulative 38 Nobel Prizes and 14 Fields Medals, the latter considered the highest honor in mathematics. Demeter had letters of support from two previous Fields Medal winners, Jean Bourgain of the Institute for Advanced Study and Terence Tao of UCLA.
"Ciprian is a rising star in analysis and already a leader of his field," said Demeter's nominator, IU Department of Mathematics Chair James Davis. "I expect his impact only to increase as time goes by and I am convinced he will become a leader in the department. He is highly deserving of the Sloan Fellowship."
Established in 1934 by Alfred P. Sloan Jr., then president and chief executive officer of General Motors, the foundation makes grants in support of original research and education in science, technology, engineering, mathematics and economics. The research fellowships specifically support the work of exceptional young researchers early in their academic careers who are "often at pivotal stages in their work," said foundation president Paul Joskow.
Demeter's research is in analysis and ergodic theory and he studies the long-term behavior of random systems. If you pump gas into an enclosed vacuum, Davis explained, ergodic theory studies the distribution of the gas particles in the enclosure over time.
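Davis's gas example is the Birkhoff ergodic theorem in disguise: for an ergodic system, the long-run average along a single trajectory equals the average over the whole space. A minimal sketch, using an irrational rotation of the circle as a stand-in for the gas (the choice of map and interval is purely illustrative):

```python
from math import sqrt

# Ergodic theorem in miniature: iterate x -> x + alpha (mod 1) with
# alpha irrational. The fraction of time the orbit spends in [0, 0.5)
# converges to that set's length, 0.5 -- the time average along one
# trajectory equals the space average, like gas filling its container.
alpha = sqrt(2) - 1   # an irrational rotation step
x, hits, steps = 0.0, 0, 100_000
for _ in range(steps):
    if x < 0.5:
        hits += 1
    x = (x + alpha) % 1.0

print(abs(hits / steps - 0.5) < 0.01)  # True: the orbit equidistributes
```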
"My research lies at the interface between harmonic analysis, ergodic theory and additive combinatorics," Demeter said. "I am mostly interested in using time-frequency methods to investigate multi-linear processes, such as singular integral operators and multi-linear ergodic averages."
Demeter was hired by IU in 2007, after receiving a doctorate in mathematics from the University of Illinois at Urbana-Champaign and while doing postdoctoral work at UCLA. He immediately spent one year on leave at the Institute for Advanced Study in Princeton, N.J., where Albert Einstein was a faculty member for more than 22 years. Demeter began teaching and working in the IU College of Arts and Sciences' Department of Mathematics as an assistant professor last fall.
"In addition to performing groundbreaking research, Ciprian is also very generous to younger researchers getting started in his and related subjects," Davis said. "Here at IU, he has already been spending considerable time with some of our graduate students in analysis and he has already organized an analysis seminar."
This summer Demeter will be among three researchers to lead a weeklong mathematics summer school at Snowbird Resort, Utah, for graduate and postdoctoral students on the subjects of harmonic analysis, Carleson theorems and multilinear analysis. The school is funded by the National Science Foundation and the American Mathematical Society.
March 01, 2009
MATHEMATICIAN ANSWERS SUPREME COURT PLEA
By Julie Rehmeyer
Web edition : Tuesday, February 24th, 2009
In 2003, Republicans in the Texas state legislature proposed a bill that would redistrict the state to increase the likelihood of Republican victories. The Democratic representatives, lacking the votes to defeat the measure, fled the state to deny a quorum. After two standoffs (one lasting 45 days), a Democrat broke down and returned to work, and Republicans pushed the measure through. In the next election, Texas Republicans gained six seats in the U.S. House of Representatives, for a total of 21 seats out of 32.
Democrats sued. The Republicans argued that the new districting was only redressing past wrongs, as Republicans had held fewer than half of the Texas congressional seats, even though they had 57 percent of the vote. In 2006, the case reached the Supreme Court.
"Because there are yet no agreed upon substantive principles of fairness in districting, we have no basis on which to define clear, manageable, and politically neutral standards," Justice Anthony Kennedy had written two years earlier in a similar case in which the judges upheld the redistricting of Pennsylvania. "If workable standards do emerge … courts should be prepared to order relief."
In the intervening two years, no such standards had presented themselves. The Texas redistricting was upheld.
The next time a redistricting case goes before the Supreme Court, a mathematician says he can provide a method that may satisfy the court. The solution, says Zeph Landau of the University of California, Berkeley, lies in cutting cake.
Politicians figured out the power of redrawing district boundaries back in 1812, when Governor Elbridge Gerry lumped most of the Massachusetts Federalists into a single district, allowing his own party to take control of all the other districts in the state. Newspapers mocked the strange, salamander-shaped districts, saying he had "gerrymandered" the state. Oddly shaped congressional districts are now common across the country.
By arranging the boundaries to lose big in a few districts and win the rest by small but safe margins, a party can as much as double its percentage of seats. So if, for example, 40 percent of people in the state voted Democratic, redistricting could in theory make 80 percent of the congressional seats Democratic. If, on the other hand, the Republicans drew the boundaries when they had 60 percent of the vote, they might be able to almost double their percentage and get every last seat, although these theoretical maximums often can't be realized because of geographical constraints.
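The doubling claim is simple arithmetic. A toy calculation, with hypothetical district vote shares chosen to match the article's 40-percent example:

```python
# Packing and cracking in miniature: a party with about 40% of the
# statewide vote draws 10 districts so that it narrowly wins 8 of them.
# The shares below are hypothetical, chosen to make the totals work out.
districts = [0.505] * 8 + [0.0, 0.0]  # barely win 8, concede 2 completely

statewide = sum(districts) / len(districts)                    # ~0.404 of votes
seat_share = sum(s > 0.5 for s in districts) / len(districts)  # 0.8 of seats
print(statewide, seat_share)
```

Roughly 40 percent of the vote yields 80 percent of the seats, exactly the doubling the article describes; real maps fall short of this bound because voters cannot be packed arbitrarily across geography.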
So what's fair?
An entire field of mathematics is devoted to answering just this kind of question. For example, take the classic "I cut, you choose" method of dividing cake: If I cut a cake into two pieces I'd be equally happy with, and you pick which of the two you like better, then neither of us will prefer the other person's piece to the one we have. The division will be fair in that sense even if our priorities are different. For example, I might really want the rose made of frosting, while you might care only about the size of your piece.
Landau and his collaborators, students Ilona Yershov and Oneil Reid of the City College of New York, realized that the mathematics of fair division could be used to solve the redistricting problem. They used a variation on another cake-cutting method: A third party wields the knife, moving left to right across the cake until one of us calls out, "Stop!" when it seems that both sides are equally good. Then the person who called out gets the left piece and the other gets the right one.
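A discrete version of this moving-knife procedure is easy to sketch. The code below is illustrative, not the published Landau-Reid-Yershov procedure itself: precincts lie in a row, each party values each precinct by its own voters there, and whichever party first sees the left piece as worth half its total calls "stop" and takes it:

```python
# Discrete moving-knife division (an illustrative sketch): the knife
# sweeps across a row of precincts; the first party whose left-piece
# value reaches half its total takes the left piece.
def moving_knife(value_a, value_b):
    total_a, total_b = sum(value_a), sum(value_b)
    left_a = left_b = 0.0
    for cut in range(len(value_a)):
        left_a += value_a[cut]
        left_b += value_b[cut]
        if left_a * 2 >= total_a:      # party A calls "stop" first
            return (("A", list(range(cut + 1))),
                    ("B", list(range(cut + 1, len(value_a)))))
        if left_b * 2 >= total_b:      # party B calls "stop" first
            return (("B", list(range(cut + 1))),
                    ("A", list(range(cut + 1, len(value_a)))))

# Hypothetical precinct values: A's voters cluster left, B's right.
a_votes = [9, 8, 7, 1, 1, 1]
b_votes = [1, 1, 2, 8, 8, 9]
print(moving_knife(a_votes, b_votes))
```

Here A stops the knife after two precincts and takes them, leaving B a right-hand piece B values at well over half its total: neither party prefers the other's region, which is the fairness guarantee the method rests on.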
The researchers proposed that a variation of this method be used to divide the state into two regions such that neither political party preferred the other's region. From there, each party would divide up its own region however it liked.
At first blush, this plan doesn't seem to solve the problem at all. After all, if one party has only 40 percent of the vote, why should it get a full half of the control of the process of dividing the state into districts?
But the mathematicians showed that equally shared control will lead to about the right outcome even if the parties get very different proportions of the votes. If Democrats get only 40 percent of the vote, they can divide up their half of the state to get at most 80 percent of the seats in that region. If the Republicans get all the seats in their half, that means the Democrats would get about 40 percent of the total seats, which corresponds to their percentage of the total vote anyway.
"The idea is to set up the rules of the game so that cheating isn't really possible," Landau says.
Landau points out that any restrictions ordinarily applied to the entire state would continue to be applied to the two half-states. So, for example, districts would continue to be required to have approximately equal populations, and the Voting Rights Act would continue to require that for both half-states, the majority of the population in some districts be ethnic minorities.
This fair division method offers the alluring possibility that each party may feel it got the better deal. The reason goes back to the cake: If I care most about the rose made of frosting and you care most about the size of your piece, we each may think our piece superior to the other's. Similarly, Landau points out, one political party might particularly want to be able to win the district with a stadium in it, while the other party cared more about a district with an important donor.
The team presented its findings in January at the Joint Mathematics Meetings in Washington, D.C., and the research will appear in an upcoming issue of Social Choice and Welfare.
Political scientist David Epstein of Columbia University praised the approach as innovative, but said it's unlikely to be politically feasible. "The idea that any subset of people is going to have 100 percent dictatorial control of any portion of any state is totally incompatible with the democratic process," he says. Still, he believes the idea could be useful in other settings, such as perhaps for sharing power within a corporation.
Landau points out that in the current scheme, the ruling party has nearly dictatorial control already, and his scheme assures that that control can't be used unfairly. "The problem is that the underpinnings of its fairness aren't quite transparent," he says. "It requires a paper to explain it."
What is clear in any case is that a solution is urgently needed. In the 2004 Pennsylvania case, Justice David Souter remarked, "The increasing efficiency of partisan redistricting has damaged the democratic process to a degree that our predecessors only began to imagine."
March 01, 2009
Mathematics and spirituality
February 26, 1:17 PM
The infinite pretzel - by oskay under CC license
Most people today see mathematics as the furthest thing from spirituality, but this has not always been the case. In the past, mathematics was intimately linked with spirituality and mysticism; as far back as Plato and Pythagoras, mathematics was considered a spiritual activity. Though the perceived disconnect is wider now than it was then, there are mathematicians and non-mathematicians alike who continue to see these connections.
One of the more interesting examples of a mathematician finding a spiritual connection to his work is Srinivasa Ramanujan (1887-1920). Ramanujan was a mathematician from India who showed a profound ability in mathematics from very early on. By age 13, he was already discovering theorems on his own, and by 17 he was doing original work in very advanced areas of mathematics. He eventually caught the attention of G. H. Hardy, a mathematician at Cambridge, by sending him samples of his work. Ramanujan showed a rare and unusual brilliance, and is now considered one of the great mathematicians of all time. He had profound intuitive capabilities, and could in the blink of an eye reveal remarkable patterns and relationships. All the while, he credited these abilities to his family goddess, Namagiri, and also said that "An equation for me has no meaning, unless it represents a thought of God."
One of the most powerful direct connections between mathematics and the divine is in the matter of infinity. For many, the essence of God is synonymous with the infinite. For a long time, mathematics concerned itself primarily with the essence of the finite. The infinite was thorny and whenever mathematicians did consider it, they had a tendency to encounter strange paradoxes. For this reason, the matter of the infinite was relegated to a matter for spirituality alone. This all changed when a mathematician named Georg Cantor (1845-1918) began to muck around in the depths of the infinite.
Cantor was inspired in his work by two factors. Within mathematics, calculus was gaining rigor, a tricky process without directly confronting the nature of infinity. Meanwhile, Cantor was a deeply religious man and felt that God had put him on Earth to explore the infinite. What Cantor discovered was that there are (assuming a certain way of thinking about the size of a collection) different sizes of infinity. He came to the conviction that if he could find the biggest size of infinity, it would be God. What he found instead is that there are so many sizes of infinity that no single size can describe how many there are, nor is there one size of infinity larger than all the others. What does this mean with respect to Cantor's quest? Did he find what he was looking for even if he couldn't recognize it?
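The endlessness of the hierarchy rests on Cantor's theorem that every set is strictly smaller than its power set; the diagonal argument fits in a few lines:

```latex
\textbf{Cantor's theorem.} For every set $X$, $|X| < |\mathcal{P}(X)|$.
Given any map $f : X \to \mathcal{P}(X)$, the ``diagonal'' set
\[
  D \;=\; \{\, x \in X : x \notin f(x) \,\}
\]
cannot equal $f(d)$ for any $d \in X$, since $d \in D \iff d \notin f(d)$.
Hence no $f$ is onto, so $\mathcal{P}(X)$ is strictly larger than $X$, and
$X,\ \mathcal{P}(X),\ \mathcal{P}(\mathcal{P}(X)),\ \dots$ climbs through
endlessly many sizes of infinity with no largest one.
```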
Perhaps what Cantor's work points to is that the true nature of God transcends conceptual encapsulation. It certainly paints a beautiful portrait of the structure of infinity, but that structure transcends its own language for describing the infinite. That is, one cannot find what Cantor was looking for within his work, but one can by stepping outside of it. By fleshing out a beautiful and crystal-clear mathematical vision of what can be said using logic and the conceptual mind, we gain a clearer sense of what the infinite is in a larger context. Cantor expected that the answer to his question would fit within the mathematics he had constructed. What he found was that it didn't, and this is perhaps a far more profound conclusion.
March 01, 2009
New models question old assumptions about how many molecules