May 16, 2013
First proof that infinitely many prime numbers come in pairs
Mathematician Yitang Zhang has outlined a proof
of a 'weak' version of the twin prime conjecture.
14 May 2013
It’s a result only a mathematician could love. Researchers hoping to get ‘2’ as the answer for a long-sought proof involving pairs of prime numbers are celebrating the fact that a mathematician has wrestled the value down from infinity to 70 million.
“That’s only [a factor of] 35 million away” from the target, quips Dan Goldston, an analytic number theorist at San Jose State University in California who was not involved in the work. “Every step down is a step towards the ultimate answer.”
That goal is the proof to a conjecture concerning prime numbers. Those are the whole numbers that are divisible only by one and themselves. Primes abound among smaller numbers, but they become less and less frequent as one goes towards larger numbers. In fact, the gap between each prime and the next becomes larger and larger — on average. But exceptions exist: the ‘twin primes’, which are pairs of prime numbers that differ in value by 2. Examples of known twin primes are 3 and 5, or 17 and 19, or 2,003,663,613 × 2^195,000 − 1 and 2,003,663,613 × 2^195,000 + 1.
The twin prime conjecture says that there is an infinite number of such twin pairs. Some attribute the conjecture to the Greek mathematician Euclid of Alexandria, which would make it one of the oldest open problems in mathematics.
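The definition is simple enough to check by machine. As a minimal illustration (a sketch written for this article, not anything from Zhang's paper), the following Python snippet lists the twin prime pairs below a small bound:

```python
def is_prime(n):
    # Trial division: slow but fine for small n.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def twin_primes(limit):
    # All pairs (p, p + 2) with both members prime and p + 2 <= limit.
    return [(p, p + 2) for p in range(2, limit - 1)
            if is_prime(p) and is_prime(p + 2)]

print(twin_primes(100))
# Begins (3, 5), (5, 7), (11, 13), (17, 19), ...
```

The conjecture asserts that this list never runs dry, however high the bound is pushed; Zhang's theorem guarantees only that pairs within 70 million of each other never run dry.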
The problem has eluded all attempts to find a solution so far. A major milestone was reached in 2005 when Goldston and two colleagues showed that there is an infinite number of prime pairs that differ by no more than 16 (ref. 1). But there was a catch. “They were assuming a conjecture that no one knows how to prove,” says Dorian Goldfeld, a number theorist at Columbia University in New York.
The new result, from Yitang Zhang of the University of New Hampshire in Durham, finds that there are infinitely many pairs of primes that are less than 70 million units apart without relying on unproven conjectures. Although 70 million seems like a very large number, the existence of any finite bound, no matter how large, means that the gaps between consecutive primes don’t keep growing forever. The jump from 2 to 70 million is nothing compared with the jump from 70 million to infinity. “If this is right, I’m absolutely astounded,” says Goldfeld.
Zhang presented his research on 13 May to an audience of a few dozen at Harvard University in Cambridge, Massachusetts, and the fact that the work seems to use standard mathematical techniques led some to question whether Zhang could really have succeeded where others failed.
But a referee report from the Annals of Mathematics, to which Zhang submitted his paper, suggests he has. “The main results are of the first rank,” states the report, a copy of which Zhang provided to Nature. “The author has succeeded to prove a landmark theorem in the distribution of prime numbers. … We are very happy to strongly recommend acceptance of the paper for publication in the Annals.”
Goldston, who was sent a copy of the paper, says that he and the other researchers who have seen it “are feeling pretty good” about it. “Nothing is obviously wrong,” he says.
For his part, Zhang, who has been working on the paper since a key insight came to him during a visit to a friend’s house last July, says he expects that the paper’s mathematical machinery will allow for the value of 70 million to be pushed downwards. “We may reduce it,” he says.
Goldston does not think the value can be reduced all the way to 2 to prove the twin prime conjecture. But he says the very fact that there is a number at all is a huge breakthrough. “I was doubtful I would ever live to see this result,” he says.
Zhang will resubmit the paper, with a few minor tweaks, this week.
1. Goldston, D. A., Pintz, J. & Yildirim, C. Y. Ann. Math. 170, 819–862 (2009).
May 16, 2013
Quantum Mechanical Words and Mathematical Organisms
By Joselle Kehoe | May 16, 2013 |
Are thoughts more fundamental to our reality than particles? “Well, how can you talk if you haven’t got a brain?” Dorothy asked the scarecrow. And after a moment’s glance toward the sky, he replied honestly, “I don’t know.”
Gottfried Wilhelm Leibniz, an influential mathematician and philosopher, whose work spanned the late seventeenth and early eighteenth centuries, argued methodically that the building blocks of reality are actually dimensionless units that have more the character of a mathematical point than an elementary particle. Leibniz imagined a universe of matter built up from non-material substances. These fundamental substances resemble thoughts more than the atomic building blocks of matter conceived in 19th century physics. While modern quantum conceptions have shattered the character of these tiny material elements, they continue to influence the popular understanding of fundamental particles such as the recently discovered Higgs boson.
Another influential pioneer, this time in the study of learning and perception, Johann Friedrich Herbart, was of the opinion that ideas are not passively pushed around by experience, but rather that they struggle to gain expression in consciousness. He even used the term ‘self-preservation’ to describe their action. Leibniz proposed a similar notion. Very recently, MIT cosmologist Max Tegmark suggested that the reason mathematics is so effective in describing reality may simply be that reality is a mathematical thing. In a recent interview for Science he describes mathematical structure as “abstract entities with relations between them.” “They don’t exist in space and time,” he explains, rather “space and time exist in them.”
What most people learn about mathematics in high school, and even in college, provides a very dull image of the discipline. Mathematics looks like a difficult and tedious set of rules, a painfully learned toolkit, required by scientists to do the work of science. But mathematics may very well be the human activity that best reflects the working relationship between mind and matter, or between thought and material. It has already facilitated a striking reconsideration of the fundamental nature of our physical reality in modern physics. But its completely symbolic worlds, which are only discovered with very careful introspection, may also have much to say about the significance of our thoughts.
Two very recent efforts demonstrate novel extensions of the use of mathematics. They each reveal unexpected parallels – one between physical processes and language processes, and the other between mathematics and organic activity. The first is from Bob Coecke, Professor of Quantum Foundations at Oxford University, who has observed that the way words interact with each other to bring meaning to a sentence, is very similar to the way particles interact in quantum mechanical processes. The other is the work of Gregory Chaitin, mathematician and prolific author, who has invented a mathematical life form that can maintain itself, and that can evolve. This is Chaitin’s first step in what he hopes will be a new approach to biology and the study of evolution.
Let’s look at Coecke’s work first. He has pioneered the use of a graphical mathematics to simplify the calculations of quantum physics. The work is based on category theory, a branch of mathematics that focuses less on mathematical objects themselves and more on the maps that transform them – on the things you do to mathematical objects. In the mathematics familiar to most, category theory concerns actions like summing, multiplying, projecting or translating, rather than objects like numbers, spaces or vectors on which these actions operate. One of the consequences of this shift is that category theory can find relationships even among the different branches of mathematics.
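The shift of emphasis from objects to maps can be made concrete with a toy example (a sketch written for this article, not Coecke's formalism): treat ordinary functions as the morphisms of a category and study only how they compose, never what the objects "contain".

```python
def compose(g, f):
    # The basic categorical operation: g after f ("do f, then g").
    return lambda x: g(f(x))

identity = lambda x: x
double = lambda n: 2 * n   # a morphism from numbers to numbers
succ = lambda n: n + 1     # another morphism

h = compose(succ, double)  # n -> 2n + 1
assert h(5) == 11

# Associativity: (succ . double) . succ == succ . (double . succ)
assert compose(h, succ)(5) == compose(succ, compose(double, succ))(5) == 13

# Identity laws: composing with identity changes nothing
assert compose(identity, double)(7) == compose(double, identity)(7) == 14
```

Everything checked here concerns how the arrows fit together; the numbers flowing through are incidental, which is precisely the attitude that lets category theory relate structures from different branches of mathematics.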
Coecke uses the structure of category theory to give meaning to the graphics he has designed. The figures involved are not complicated. They are lines and boxes. Lines represent systems, or the things one has chosen to look at. These may be quantum mechanical systems or classical physics systems. Boxes represent processes, or operations that can take a system of type A, for example, to a system of type B (Figure 1). The processes represented by the boxes could be, Coecke says, either cosmological or ones produced by experiment. It is the arrangement of boxes and lines (which Coecke calls wires) that tell the story. For example, a cup-shaped line indicates a system with two outputs. When a pair of cups with inverted orientations is combined, a single output system remains.
Once these abstractions are given particular definitions (whether mathematical or physical) their behavior, which is read in a very intuitive fashion, accurately reflects what is known within that setting. While the statements are diagrammatic, by virtue of their relation to category theory, they capture quantitative content, as well as qualitative content. The computing is in the movement of boxes along the lines (or wires) of the diagram. It is a calculus, Coecke says, that can express quantum action more simply than the one line symbolic statements of equations.
Figure 2 (with permission from Bob Coecke)
Figure 2 shows a diagrammatic expression of quantum teleportation – the process by which quantum information can be transmitted, exactly, from one location to another without moving through the intervening space. Alice and Bob are common names given to the bearers of quantum information.
With generalities as broad as the ones in category theory, Coecke’s diagrammatic calculus has application to biological processes as well. But it is its application to linguistics that is perhaps the most unexpected. In a podcast produced in April 2012 by the Foundational Questions Institute, Coecke characterized both quantum mechanical systems and language systems as “things that flow in wires.” The state of a physical system flows in a wire and the meaning of a word flows in a wire. A transitive verb, for example is thought of as a process with three wires, one requiring an object, one a subject, and one producing the sentence.
Coecke makes the argument that his methods are more effective than current linguistic models because, while current models use either the meaning of individual words or the grammatical rules that govern their combination, they cannot combine the effects of word meaning and word order. It should not be overlooked that the success of this approach to language relies on the observation that the words are interacting.
Let’s now look at Chaitin’s mathematical organisms as he describes them in his most recent book, Proving Darwin: Making Biology Mathematical. His work begins with the observation that “After we invented software we could see that we were surrounded by software. DNA is a universal programming language and biology can be thought of as software archeology – looking at very old, very complicated software.”
Chaitin starts with the simplest case – a single software organism that has no body, no population, no environment and no competition – what he calls a toy model of evolution. The organism is assigned the task of naming extremely large positive whole numbers. To do this effectively, this mathematical life form needs to invent addition, multiplication, and exponentiation. If you have the large number N, for example, and you want to find a larger one, it will be necessary to consider N + N, or N times N, or N to the nth power, or N to the N to the nth power, and so on.
By design, successfully finding a number larger than the last one found increases the fitness of the organism/program. A mutation is an algorithmic modification, a computer program. The original organism produces the mutated organism as its output. Because this mathematical problem can never be solved perfectly, the ongoing evolution of these organisms is assured.
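The fitness ordering driving the toy model, in which each arithmetic "invention" names vastly larger numbers than the last, is easy to illustrate (a sketch written for this article, not Chaitin's actual programs):

```python
N = 3

# Each invented operation names a far larger number than its predecessor:
addition = N + N            # 6
multiplication = N * N      # 9
exponentiation = N ** N     # 27
tower = N ** N ** N         # 3 ** 27 = 7,625,597,484,987

# The fitness criterion of the toy model: bigger is fitter.
assert addition < multiplication < exponentiation < tower
print(tower)  # 7625597484987
```

Because there is no largest nameable number, no organism can ever be perfectly fit, which is what guarantees that the evolution of Chaitin's organisms never halts.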
These toy models, as Chaitin calls them, provide a way to measure evolutionary progress and biological creativity. It is not surprising, therefore, that Chaitin claims that biological creativity equals math creativity. And this leads to a potentially important shift in perspective. He refutes social Darwinism with social metabiology according to which, “the purpose of life is creativity, it is not preserving one’s genes. Nothing survives, everything is in flux…everything flows.”
Chaitin admits that it’s too early to tell if the ideas in metabiology will bear the fruit for which he hopes. But this conceptual shift in evolutionary theory will no doubt stimulate questions about biological creativity, evolution, and mathematics. As he discusses what mathematics might yet achieve, Chaitin says this:
As a living language of science, that can also be an effective model of language, mathematics may yet help uncover the way that thought resides in nature.
When the scarecrow finally meets the wizard, the straw in his head remains, but he gets a diploma. The representative of thought that emerges is mathematics. He tries to state the Pythagorean theorem, as if the diploma alone can produce this kind of thought. But he doesn’t get it right.
About the Author: Joselle DiNunzio Kehoe is a writer and Lecturer of Mathematics at the University of Texas at Dallas. She earned a Master of Science degree in mathematics from NYU’s Courant Institute of Mathematical Sciences and has been teaching mathematics at the university level for more than 25 years. She has been published in the journal Isotope at Utah State University and, for the past few years, has been most involved in a book project that considers a biological view of mathematics. It is this pursuit that also guides the choice of subjects found in her blog at Mathematics Rising.
May 16, 2013
Google and NASA buy controversial quantum computer
17:06 16 May 2013 by Jacob Aron
Are quantum computers about to revolutionise our lives? You might think so now that Google, together with NASA, has bought one from the only organisation that sells them, D-Wave of Burnaby, Canada. However, questions remain over just how D-Wave's quantum computers work and whether they would beat ordinary computers in like-on-like tests.
D-Wave's first client was the aerospace giant Lockheed Martin, which bought an early model, the D-Wave One, in 2011. Today's sale of the more recent D-Wave Two to Google and NASA is the first to a company providing computing services to the general public.
"We believe quantum computing may help solve some of the most challenging computer science problems, particularly in machine learning," wrote Google's Hartmut Neven in a blog post following the announcement. Machine learning involves training a computer to recognise patterns in data so it can classify new information. It is core to many of the algorithms that drive various Google offerings.
Google's plans for the machine are broad. "We hope it helps researchers construct more efficient and more accurate models for everything from speech recognition, to web search, to protein folding," writes Neven.
One area where D-Wave's device might be helpful is searching the web for images. In 2009, Google collaborated with D-Wave to develop a machine-learning algorithm that allowed an older version of D-Wave's hardware to recognise pictures of cars. The newer machine could build on this, perhaps improving Google's novel search by image feature, which allows users to search using an image as the starting point – much harder than finding images using keywords.
Though Google will have access to the machine, the newly purchased D-Wave Two will be installed at the NASA Ames Research Center in Moffett Field, California. It will be operated by the Universities Space Research Association (USRA), which will give a fifth of the machine's time to outside researchers.
D-Wave's computers have attracted controversy because they use a non-mainstream technique called adiabatic quantum computing. It is still an open question whether this leads to the speed-up long promised by harnessing quantum bits, or qubits, which can take the values 0 and 1 at the same time and thus theoretically compute much faster.
Before finalising their purchase, Google, NASA and USRA set their machine a series of tests, which D-Wave says it passed. D-Wave also recently commissioned a series of benchmarks to prove the performance of its machine. The D-Wave Two was able to solve a problem 3600 times faster than a conventional algorithm running on a high-end desktop PC – but the company has yet to show that the increase is down to the machine's quantum capabilities.
Scott Aaronson of the Massachusetts Institute of Technology, a long-standing critic of D-Wave, points out that the PC algorithms had not been optimised to solve the problem, unlike the D-Wave device. "It's not the slightest bit surprising the D-Wave machine is going to win, but that's really not informative," he says.
He also points to results posted online from a team led by Daniel Lidar at the University of Southern California, Los Angeles, which operates the D-Wave computer owned by Lockheed Martin.
The team compared a D-Wave One device, which has 108 qubits, with two different high-end conventional processors running simulation software that has been optimised to perform the computations in the same way as D-Wave. In this case, although the D-Wave One showed some evidence of quantum behaviour, it took longer – 15 microseconds – to solve a problem than the conventional processors, which took 4 and 0.8 microseconds.
Lidar is now conducting similar tests with the upgraded D-Wave Two, which has 503 qubits and so might perform better on larger problems that ordinary, or classical, computers struggle with. "The really interesting question is whether a quantum processor exhibits better scaling with problem size than all classical processors it is compared with," says Lidar. "If this is the case, then it validates the promise of quantum computing."
What if it doesn't? "I believe it should, otherwise it would be hard to justify Google's investment," says Lidar.
Aaronson disagrees. "I don't think this is going to be directly used for anything relevant to Google's business," he says. "Google is a very wealthy company and they do have money to throw around on things that just might be fun to play with."
Neither D-Wave nor Google has given the exact price involved – but Lockheed Martin is thought to have paid about $10 million for its machine.
May 16, 2013
The secret lives of bubbles: Mathematicians describe
May 16, 2013
MATHEMATICIANS HELP TO UNLOCK BRAIN FUNCTION
Wednesday 1 May 2013
Mathematicians from Queen Mary, University of London will bring researchers one step closer to understanding how the structure of the brain relates to its function in two recently published studies.
Publishing in Physical Review Letters the researchers from the Complex Networks group at Queen Mary’s School of Mathematical Sciences describe how different areas in the brain can have an association despite a lack of direct interaction.
The team, in collaboration with researchers in Barcelona, Pamplona and Paris, combined two different human brain networks - one that maps all the physical connections among brain areas known as the backbone network, and another that reports the activity of different regions as blood flow changes, known as the functional network. They showed that the presence of symmetrical neurons within the backbone network might be responsible for the synchronised activity of physically distant brain regions.
Lead author Vincenzo Nicosia said: “We don’t fully understand how the human brain works. So far the focus has been more on the analysis of the function of single, localised regions. However, there isn’t a complete model that brings the whole functionality of the brain together. Hopefully, our research will help neuroscientists to develop a more accurate map of the brain and investigate its functioning beyond single areas.”
The research adds to the recent findings published in Proceedings of the National Academy of Sciences in which the QM researchers along with the Department of Psychiatry at University of Cambridge analysed the development of the brain of a small worm called Caenorhabditis elegans. In this paper, the team examined the number of links formed in the brain during the worm’s lifespan, and observed an unexpected abrupt change in the pattern of growth, corresponding with the time of egg hatching.
“The research is important as it’s the first time that a sharp transition in the growth of a neural network has ever been observed,” added Dr Nicosia.
“Although we don’t know which biological factors are responsible for the change in the growth pattern, we were able to reproduce the pattern using a simple economical model of synaptic formation. This result can pave the way to a deeper understanding of how neural networks grow in more complex organisms.”
The Complex Networks group at Queen Mary is headed by Professor Vito Latora. Aside from theoretical research about the structure and function of complex networks, the group is working on the characterisation of multi-layer brain networks, aimed at reconciling and integrating different brain signals to produce a more informative picture of the human brain.
May 16, 2013
UL researcher develops mathematical technique for predicting contagion in networks
A new mathematical technique to predict social phenomena in large populations has been developed by University of Limerick researcher Prof James Gleeson.
Details of the research have been published in Gleeson’s paper Binary-State Dynamics on Complex Networks: Pair Approximation and Beyond, which is published in the open-access journal Physical Review X and is available for free download from the American Physical Society.
“Contagion can happen in many different contexts, from disease spread to viral marketing,” said Gleeson. “Mathematical modelling is important to help understand the mechanisms that drive contagions on networks. The increasing availability of data from social online networks can now give a lot of information about how humans influence each other, but fast and accurate mathematical techniques are crucial to help process the flood of data.”
Understanding human interaction is crucial to formulating social trends, he said. The correct mathematical model can be used to predict a range of scenarios from voting models to infectious disease spread across populations.
Current analytical methods for tackling models for everything from voting trends to disease spread are few and often not accurate. Some models can achieve high accuracy but at the cost of computational complexity. In his paper, Gleeson presents a low-complexity approach, called pair approximation, and demonstrates that for certain classes of local decision rules, this formula can achieve results as accurate as the traditional high-complexity approach.
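To make "binary-state dynamics on a network" concrete, here is a minimal Monte Carlo sketch (written for this article; it illustrates the kind of simulation that analytical methods such as pair approximation aim to replace, and is not Gleeson's method itself). Each node is in state 0 (susceptible) or 1 (infected), and infection spreads along the edges of a random graph:

```python
import random

def random_graph(n, p, rng):
    # Erdos-Renyi random graph, stored as adjacency lists.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def si_contagion(adj, beta, steps, rng):
    # Binary-state dynamics: a susceptible node becomes infected with
    # probability beta per infected neighbour, per time step.
    n = len(adj)
    state = [0] * n
    state[0] = 1  # a single initial seed
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] == 0:
                infected = sum(state[j] for j in adj[i])
                if rng.random() < 1 - (1 - beta) ** infected:
                    new[i] = 1
        state = new
    return sum(state) / n  # final fraction infected

rng = random.Random(42)
adj = random_graph(200, 0.05, rng)
frac = si_contagion(adj, 0.2, 30, rng)
print(frac)  # typically close to 1 for these parameters
```

Direct simulation like this must be repeated many times and scales poorly with network size; the appeal of an accurate low-complexity approximation is that it predicts quantities such as the final infected fraction without running the dynamics at all.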
To facilitate the spread of its use, MACSI has made the computational code freely available to download.
Prof Gleeson is the co-director of the Mathematics Applications Consortium for Science and Industry (MACSI) at the University of Limerick. This research is funded by Science Foundation Ireland.
May 16, 2013
Electrical Brain Stimulation Helps People Learn Math Faster
By Greg Miller, 05.16.13, 12:00 PM
Just about everyone wishes they were better at math. But studying and practicing is so difficult and boring that very few people do it. If only there were an easier way.
Now there may be, suggests a new study in which scientists stimulated volunteers’ brains with mild electric current while they learned new arithmetic operations based on made-up symbols. People who received brain stimulation during training sessions on five consecutive days learned two to five times faster than those who received sham stimulation, and they retained a 30 to 40 percent performance edge six months later.
The study is not the first to show improvement in mathematical cognition with brain stimulation. In 2010, scientists reported that people can learn a new set of numbers based on arbitrary symbols more quickly when a mild current is applied to the right parietal lobe of the brain, a region implicated in previous number-comprehension studies.
The new research goes a step farther by showing that electrical stimulation can also improve the ability to perform calculations, says cognitive neuroscientist Roi Cohen Kadosh of the University of Oxford, who led both studies.
The new study also uses a different type of stimulation. In the 2010 study, Cohen Kadosh and colleagues used transcranial direct current stimulation (tDCS), which delivers a weak but constant current to the brain via electrodes placed on the skull. In the new study, they used transcranial random noise stimulation (TRNS): current that fluctuates randomly within certain bounds. People sometimes feel a slight tingling on the scalp with tDCS, Cohen Kadosh says, but with TRNS they usually feel nothing.
They may also get a different kind of cognitive boost.
The researchers applied TRNS to a different brain region thought to play a role in mathematical cognition, the left dorsolateral prefrontal cortex. When people received TRNS during training sessions spread across five days, they memorized new “facts” more quickly (such as 4 # 12 = 17, an arbitrary equation that had to be learned by rote). Compared to subjects who got sham stimulation, those who received TRNS also learned more quickly to do calculations with novel operators (symbols like + and – that tell you what to do with the numbers on either side, though in this case the new symbols required somewhat more complicated operations).
The researchers also monitored blood flow in the stimulated region of cortex with a non-invasive method called near-infrared spectroscopy. TRNS appeared to make metabolism more efficient, co-author Jackie Thompson wrote in an email to Wired: “That is, metabolic levels in the TRNS group were actually lower whilst doing the same mental calculations (the same amount of “work”) as the sham group.”
That physiological change, as well as the improved calculation performance, persisted 6 months after training, the researchers report today in Current Biology. (The improved memory performance did not).
“If I put my sci-fi hat on, what I can imagine coming down the road is even more sophisticated combinations of stimulation and cognitive training,” said Peter Reiner, a neuroscientist and neuroethicist at the University of British Columbia in Vancouver. Reiner sees the performance improvements reported in the new study as a nice incremental advance over what’s been shown previously, but he predicts that bigger things are on the way. “There’s a huge amount of potential there.”
But before you run off to RadioShack and fire up the soldering iron in an attempt to build a TRNS kit, Cohen Kadosh has a few words of caution. “Do not try this at home,” he said. Although tDCS is fairly simple, TRNS requires more sophisticated equipment and protocols. The electrodes have to be attached in just the right place and the cognitive training has to be done right too, Cohen Kadosh says.
Also, although his team didn’t see any adverse effects in this study, they recently discovered that tDCS can cause cognitive impairments as well as benefits in some cases.
Study and practice is still the surest and safest way to kick your brain into a higher math gear. At least for now.
May 16, 2013
The mathematical menace
By Martin Hutchinson
Far from being tools to increase knowledge and understanding, mathematical models are tools of obfuscation.
The brouhaha about the spreadsheet error in Carmen Reinhart and Kenneth Rogoff's 2010 paper "Growth in a time of debt" brings home an important economic truth. Not that Reinhart and Rogoff were in error; their overall conclusion is clearly true, not to say obvious, and correction of the error in their spreadsheet merely softened the conclusion without invalidating it. The economic truth is rather that the invention of computer modeling has for the last 40 years allowed charlatans to peddle spurious models in the service of their political agendas, and policymakers and the mathematically unsophisticated have too often been taken in. The modeling of complex scientific and sociological interactions is popularly thought to have begun with the computer model of nuclear interaction used in the 1942-45 Manhattan Project, but the techniques and thought processes involved go back well beyond this. Perhaps the most significant pre-computer use of model theorizing came from Rev Thomas Malthus, who postulated that the increase over time in food supply was arithmetical, that in population geometrical, and that population would therefore always outrun the food supply.
The fate of Malthus' theory illustrates both the value and the downside of mathematical modeling. On the one hand, a neat mathematical demonstration can make a theory infinitely plausible to voters and policymakers. (Malthus later became a key advisor to the great Lord Liverpool, helping in the design of the Corn Laws.) On the other hand, outside factors, not contained in the model, can make its conclusions false - in Malthus' case, his otherwise plausible conclusion (which may well turn out prescient in the very long run, if global population is not controlled) was at least for 200 years falsified by the Industrial Revolution, which hugely increased the productivity of agricultural labor and, through crop improvements, agricultural land.
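Malthus's arithmetic-versus-geometric argument is about the simplest model one can state in code (a sketch written for this article, with arbitrary illustrative numbers):

```python
# Food supply grows arithmetically (fixed increment per generation);
# population grows geometrically (fixed ratio per generation).
food, food_step = 100.0, 10.0
population, growth_rate = 100.0, 1.05  # 5% per generation (illustrative)

generation = 0
while population <= food:
    food += food_step
    population *= growth_rate
    generation += 1

print(generation)  # the crossing point: 27 generations with these numbers
```

Whatever the constants chosen, a geometric series eventually overtakes any arithmetic one; what the model cannot anticipate is an outside shock, such as the Industrial Revolution, that changes the constants themselves.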
The first misguided economic forecast to use a computer was the Club of Rome's effort in 1971. ("The Limits to Growth" was published in 1972, but the model was showcased in the autumn of 1971, when I attended a presentation thereof.) The presentation described an econometric model of the world economy, including such factors as environmental problems and the possibility of starvation through overpopulation, which was then projected iteratively 40 years forward, to about today.
The Club of Rome made one huge error compared with their climate change successors; they made apocalypse inevitable. Every simulation, including those that were run with completely unrealistic assumptions like an immediate 80% decrease in pollution or resource usage, ended with the collapse of the global economy and eco-system within 40 years. There was thus no expensive program of redemption that we could undertake; whatever we did, however ecological we became, we were doomed anyway. Unsurprisingly, the Club of Rome had little effect on practical politics, even in the 1970s.
Its model was in any case erroneous. When I saw it at the presentation, I realized that the modelers had made the same mistake I had struggled with in Cambridge's first, embryonic computer modeling course six months earlier: they had extrapolated a set of equations containing exponential terms forward through 40 iterations, without taking care of the rounding errors in the simulation (in those days models were limited to six or seven significant figures, owing to constraints on computer capacity).
Pushed 40 times through a simulation containing exponentials, the error terms exploded in size, forcing the graph catastrophically off the page, in one direction or another. (I tried to explain this in the presentation's question period, but without success - bringing the light of truth to a distinguished professor's model and his prejudices simultaneously was beyond me.)
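The failure mode described here, tiny numerical errors amplified through repeated iteration of equations with exponential sensitivity, is easy to reproduce (a generic illustration written for this article, not the Club of Rome's actual model). A perturbation at the seventh significant figure, roughly the precision of 1970s machines, swamps the signal well before 40 iterations of a simple nonlinear map:

```python
def iterate(x0, steps):
    # Logistic map x -> 4x(1 - x): small errors roughly double each step.
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

exact = iterate(0.3, 40)
perturbed = iterate(0.3 + 1e-7, 40)  # error in the 7th significant figure

diffs = [abs(a - b) for a, b in zip(exact, perturbed)]
print(diffs[0], max(diffs))
# The initial 1e-7 discrepancy grows to order 1 within roughly 25 iterations.
```

Since the error roughly doubles each step, it reaches order one after about log2(10^7), or 23, iterations: 40 iterations at seven significant figures leaves the output dominated by rounding noise rather than by the model.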
Thus the Club of Rome's multiple, inevitable disasters were purely the result of computer errors. Had they fixed the errors, they might have produced a more plausible (though doubtless still erroneous) result in which simulations where pollution decreased by 80% or population growth stopped failed to produce economic collapse, while only those with "naughty" policies resulted in disaster. For the Club of Rome's backers, that would have been a much more useful outcome, giving them license to nag policymakers for the next decade about the evils of the unconstrained free market.
"Value at risk" had the advantage over the Club of Rome's model that it wasn't faulty in its execution, as far as I know. However its underlying premise was flawed, that financial instruments obey strictly the laws of Gaussian random motion, in particular that their returns have the extremely thin "tails" typical of Gaussian distributions.
When Goldman Sachs chief financial officer David Viniar wailed in August 2007 that he was seeing "25-standard deviation events, day after day", it should have caused everyone using value-at-risk models to bin them: under Gaussian assumptions, even a single 25-standard-deviation day is effectively impossible, at odds of a million to one against over the entire life of the universe. Yet, extraordinarily, it was later revealed that JP Morgan was still using value at risk at the time of the London Whale trading fiasco four years later.
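The arithmetic behind that claim is easy to check with nothing but the standard library. The sketch below computes the probability of a single 25-standard-deviation day under the Gaussian premise and sets it against the rough number of days since the Big Bang (figures illustrative):

```python
import math

def upper_tail(sigmas):
    """P(Z > sigmas) for a standard normal variable Z."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

p = upper_tail(25)                    # chance of one 25-sigma day, Gaussian world
universe_age_days = 13.8e9 * 365.25   # rough age of the universe, in days

print(f"P(single 25-sigma day) = {p:.2e}")
print(f"expected 25-sigma days since the Big Bang, "
      f"trading every day: {p * universe_age_days:.2e}")
```

The single-day probability is on the order of 10^-138, so even a market that traded every day since the Big Bang should expect essentially zero such events. Seeing them "day after day" is not bad luck under the model; it is a falsification of the model.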
Value at risk's prevalence reflects another problem with computer models: their results reflect the prejudices and economic interests of the modelers. In the case of value at risk, traders and mid-level managers want the apparent risk of their positions, as reported to top management and especially to regulators, to be minimized, so that they can take the largest positions possible and thereby maximize their profits and bonuses.
Furthermore, they like a system that undervalues the risk of "exotic" products such as credit default swaps and collateralized debt obligations, as well as highly engineered options positions, because those products are generally more profitable than "vanilla" products such as bonds, futures and interest rate and currency swaps. When banks are "too big to fail", top management's risk/reward profile is aligned with those of their traders, since failure means only a taxpayer bailout. Needless to say, with flawed models such as value at risk available, that situation has an exceptionally unfavorable risk profile for taxpayers.
Global warming models suffered from the problems of both the Club of Rome's model and value at risk: they attempted to describe a poorly understood system by forward extrapolation over a long period, and they were designed by scientists with both a philosophical and an economic interest in the outcome (since additional global warming fears brought them increased resources).
Professor Michael Mann's notorious "hockey stick curve", for example, was designed to demonstrate that global warming in the 20th century was more extreme than in the entire previous millennium; it suffered both from faulty data and from a skewed algorithm designed to produce a hockey stick curve out of almost anything.
In all three of the above cases, the most surprising feature was the ability of a discredited model to remain salient in the argument as a whole. As a former mathematician, I would naively have imagined that faulty mathematics would immediately discredit a piece of work, and that a model whose underlying assumptions or methodology had been shown to be wrong would be effectively useless.
In practice this appears not to be the case; constructing a faulty mathematical model of something is a useful activity, since even after its faults have been discovered and demonstrated it remains salient in the argument. The reality of course is that few of us are comfortable discussing the arcana of mathematical models, and so continue to be convinced by them even after they have been proved to be erroneous.
In the world of mathematical models, Reinhart and Rogoff were thus mere innocents. Their mistake was both accidental and elementary, and was easily discovered by another researcher with an axe to grind. Then, because their error was so easy to understand, it discredited their model more thoroughly than much more egregious errors discredited the Club of Rome, value at risk and hockey stick models. After all, even after the Reinhart/Rogoff error was corrected, the model continued to show their conclusion to be generally valid, which was not true in the other cases.
The conclusion to be drawn is thus a depressing one. The output from mathematical models depends crucially on the assumptions used to construct them, so even when no error is involved those assumptions color the models' results to reflect the policy preferences or economic interests of their designers.
To take a simple example, gross domestic product (GDP), as designed by Simon Kuznets in 1934, includes government spending at full cost, even when it produces no economically useful output. Thus John Maynard Keynes's recommendation for curing a recession, using the unemployed to dig holes and fill them in again, is guaranteed to "succeed" by definition: the useless government spending is counted as output, so GDP automatically increases.
Yet, except for any health benefits for the unemployed forced to spend all day digging holes, no increase in welfare has resulted; indeed welfare has decreased because the government has incurred more debt, the unemployed presumably have other things they'd rather do than dig holes, and some of them might have found self-employment that produced genuine economic output.
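The accounting point can be made concrete with a minimal sketch, using entirely hypothetical figures for an imaginary economy (all amounts in $bn):

```python
def gdp(consumption, investment, government, net_exports):
    """Expenditure-method GDP: government spending enters at full cost,
    whether or not it produces anything economically useful."""
    return consumption + investment + government + net_exports

# Hypothetical baseline economy
baseline = gdp(consumption=12_000, investment=3_000,
               government=4_000, net_exports=-500)

# Pay the unemployed $100bn to dig holes and fill them in again
with_holes = gdp(consumption=12_000, investment=3_000,
                 government=4_100, net_exports=-500)

print(f"measured GDP rises by ${with_holes - baseline}bn")
```

Measured GDP rises by exactly the $100bn spent, by construction of the statistic, even though nothing of value was produced; the definition, not the economy, delivers the "stimulus".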
In short, mathematical models, far from being tools to increase knowledge and understanding, are tools of obfuscation. They take propositions that would be rejected by intelligent observers based on qualitative reasoning, and add a dense fog of error, producing spurious results that even an intelligent observer cannot easily deconstruct.
Keynesian economics, expensive environmental boondoggles and economically destructive trading activities all rely on mathematical models for their justification. Until we have invented software that can deconstruct other people's models and find their flaws, we should thus disbelieve any proposition that is bolstered by such spurious artifacts.
Martin Hutchinson is the author of Great Conservatives (Academica Press, 2005) - details can be found on the website www.greatconservatives.com - and co-author with Professor Kevin Dowd of Alchemists of Loss (Wiley, 2010). Both are now available on Amazon.com, Great Conservatives only in a Kindle edition, Alchemists of Loss in both Kindle and print editions.
(Republished with permission from PrudentBear.com.)
May 16, 2013
It always comes down to math
About a year ago, a 20-something rising star product guy at a hot startup turned to me and asked, “So are you a startup guy too?” As someone who has started several businesses but not in the Internet space, I answered, “I’m not really sure what that means to be a startup guy. I’ve been a brick and mortar entrepreneur for years but not really on the internet.” Without any hint of irony, he replied, “Yeah, the internet is just a different skill set than brick and mortar.” And with that dismissal, he lost interest in having any further conversation with me.
When I tell this story to my friends who have been around the block a few times, including the ones who have started successful Internet companies, they always shake their heads in disbelief. Of course there are different technology requirements and some of the techniques used for things such as user acquisition are different, but at the end of the day, regardless of what business you’re in, everything comes down to the same math problem. Revenue minus expenses must equal a positive number. Anyone who tells you business works in some other way or the rules are different on the Internet or any other claim of exceptionalism is an idiot.
Yet so much of the venture-fueled startup environment is based on questionable or even nonexistent math. The problem extends far beyond calling out individual startups for not having viable business models. With so many VCs and angels writing checks to startups without the slightest clue as to how they'll ever make the math work, the entire ecosystem has lost sight of the basic math equation.
I saw a great example of this “no math” startup thinking at a dinner a few months ago. The group got into a discussion about what defines a startup, when a founder whose startup had failed explained, “A startup stops being a startup when they figure out how they’re going to make money.” My friend Ben Nelson exclaimed, “That’s the most ridiculous thing I’ve ever heard! By that definition Snapfish was never a startup, because we always knew how we were going to make money, and yet clearly we were a startup at some point!”
I thought the entire scene was hysterical and it’s probably worth noting that the guy who understood the importance of math sold Snapfish to HP for $300 million.
As bad as it is that both VCs and founders are willing to ignore the realities of math, I wonder how many entrepreneurs even know how to do basic math. The most egregious example of an entrepreneur who didn’t understand math can be found in the tragic story of Ecomom.
As the details of its failure came to light, its former controller Philip Prentiss described a conversation he had with its CEO, Jody Sherman, as the company was collapsing. Prentiss explained, “At the end of December when things were getting truly desperate, he said to me, ‘Phil, just bring me a forecast that shows how much we need to sell to break even.’ He did not understand, after three years of negative margin, that increased sales resulted in increased losses.”
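The arithmetic Sherman missed fits in a few lines. The figures below are hypothetical, not Ecomom's actual numbers: when each unit costs more to deliver than it sells for, the contribution margin is negative, and no sales volume can ever reach break-even.

```python
def profit(units_sold, price, unit_cost, fixed_costs):
    """Operating profit = per-unit contribution margin * volume - overhead."""
    return units_sold * (price - unit_cost) - fixed_costs

# Hypothetical negative-margin business: every $20 sale costs $25 to fulfil.
for units in (1_000, 10_000, 100_000):
    print(f"{units:>7} units -> profit {profit(units, 20, 25, 50_000):>10,}")
```

Each tenfold increase in sales deepens the loss, because the $5 lost per unit scales with volume on top of the fixed overhead. The only break-even forecast such a business can produce is one that fixes the margin first.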
Contrary to popular belief, raising venture capital is not a business model and the rules of math aren’t different on the Internet. You may have a little more running room because of VC funding, and you might get lucky with an exit before the reality of math comes calling, but if you’re trying to build an actual business and not a house of cards, I can assure you the math is always the same.
As Brian Lee, co-founder of ShoeDazzle, The Honest Company, and LegalZoom explained at last week’s PandoMonthly event, “Business is business.” While this statement seems blatantly obvious, it’s a concept that is often lost in the world of venture-funded Internet startups.
May 16, 2013
A Business Perspective: Curiosity Is Crucial In Math And Science
People who understand and enjoy math and science have a great curiosity for why and how things work. That curiosity drives them not only to discover and learn, but also to innovate and create. I fear that many kids are losing a sense of curiosity too early in their development. On a Saturday morning, rather than take apart a toaster or the remote control to see why and how it works, millions of kids play video games for hours at a time. Video games are extremely complex technologies, but how many of those kids have the desire, or the curiosity, to know how the video games work?
We need to capture and sustain the curiosity needed to create the next generation of mathematicians and scientists. Let’s be honest, many math and science concepts can be difficult to understand—even for many adults. But when our kids are fueled by a desire to know why something works, their curiosity will drive them through the more difficult concepts.
One way to grab a child’s interest in math and science early on is with cool concepts that children can relate to—which is where physics comes into play: the application of cool math and science concepts in real world scenarios. For example, how do rollercoasters work? How do accelerometers work? What is g-force and how does it work? The real world application of these theories makes math and science interesting and fun for kids.
This is a solid argument for offering a physics course in ninth grade; in most high schools, physics is typically not introduced until 11th or 12th grade. Rather than potentially frustrating students early in their high school careers with challenging, tedious subjects that may turn them off from math and science, why not start with rollercoasters and g-force, that is, with physics? Show the kids that math and science concepts are not only applicable in the real world, but also fun. (The Loudoun County Academy of Science began introducing physics in ninth grade, and has seen tremendous outcomes.)
As I’ve said many times, I am a firm believer that there is an intrinsic link between the arts and the math and science disciplines. Not only does the incorporation of art into a good science and math curriculum help build the foundation for creative innovation down the road, but arts can also make the ‘boring’ disciplines more fun. We need to engage our children early on with the arts—and use that excitement and curiosity to enhance their STEM coursework throughout their K-12 education.
A great example of how this could be achieved nationally is the Wolf Trap Institute for Early Learning Through the Arts and its Science, Technology, Engineering and Math initiative, which develops, evaluates and disseminates arts-based STEM teaching strategies for pre-K and kindergarten. As the institute notes on its website: "STEM education must start in the earliest years, at the time that children's interests, desires and abilities are formed. This important beginning leads to students' success and choices in elementary through high school math and science."
A lack of curiosity will stifle innovation in our nation. After all, engaging kids in math and science may improve the quality of my future employee pool—but this issue runs much deeper than finding talented software engineers for our company. Cyber is a war-fighting domain. Cyberwarriors are in high demand in our country, and without highly skilled software engineers and security professionals with strong backgrounds in math and science our national security is at risk.
[John B. Wood is Chairman and CEO of Telos Corporation in Ashburn, founder of the CEO Cabinet and a past chairman of the county Economic Development Commission. Follow John on Twitter at twitter.com/john_b_wood.]