MATH NEWS


edited by Umberto Cerruti -- MATH NEWS ARCHIVE and LINKS -- HOME
Please send me any news you like about mathematics, mathematical models and mathematicians!
If this is of general interest, I'll publish it in these "Math News". Please write "mathnews" in the subject line.

June 30, 2014

With Prizes Like This, Who Needs a Nobel?

www.scientificamerican.com

UCLA mathematician Terence Tao, one of five
winners of the inaugural Breakthrough
Prize in mathematics, a $3 million award.
Credit: Terence Tao


Five mathematicians, working in a field spurned by the Nobel academies as a matter of course, will receive $3-million awards of their own

Jun 23, 2014 |By Ben Fogelson

It started with a simple message from Internet billionaire Yuri Milner: Let’s meet up.
 
Before responding to that e-mail in April, Jacob Lurie, a Harvard University mathematics professor, decided to look up Milner and found that the venture capitalist, along with Facebook founder Mark Zuckerberg, had started awarding a $3-million prize in mathematics this year. Lurie figured Milner wanted his advice on whom to pick. “I was surprised,” Lurie says, “when he offered me the prize.”
 
University of California, Los Angeles, mathematician Terence Tao got a similar shock and tried to convince Milner there had been a mistake. “I don’t feel like I’ve done enough yet,” says Tao, a highly regarded mathematician and former child prodigy who is famous for being the youngest full professor ever at UCLA. In all, five mathematicians have been awarded the 2014 Breakthrough Prize in Mathematics. (A similar physics prize was started in 2012 and one in life sciences was first awarded last year.) An awards ceremony and scientific conference are set for November 9–10 in Silicon Valley.
 

Jacob Lurie

The winning researchers all work at the leading edges of some of the most fundamental and abstract areas of mathematics. Jacob Lurie, for instance, explores the intersection of topology and algebraic geometry. He studies “invariants” of shapes, which are properties of shapes that do not change even when the shapes do.
 
Tao works in several areas of math, but currently he is most excited to be studying basic equations that describe water and air flow. These equations can reproduce violent, turbulent flow in ways that are poorly understood. They are of interest to engineers and also to meteorologists, who use them for weather prediction. For instance, mathematicians do not yet know whether the equations sometimes result in a phenomenon called a singularity—a theoretical but physically impossible circumstance in which fluid moves at infinite speed. Tao is investigating a new approach to studying these equations that uses math to construct little self-replicating robots made entirely of water, which he hopes will tell him whether singularities exist. If successful, Tao’s technique could also be used to study equations from Einstein’s general relativity that describe how black holes form.
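For reference, the equations in question are the incompressible Navier–Stokes equations (the article does not name them, but this is the standard system for water and air flow and the one Tao's regularity program targets):

\[
\partial_t u + (u \cdot \nabla)\, u \;=\; -\nabla p + \nu\, \Delta u, \qquad \nabla \cdot u = 0,
\]

where u is the fluid velocity, p the pressure and ν the viscosity. A singularity would mean that a smooth initial flow drives the velocity to infinity in finite time; whether this can happen is precisely the open question described above.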
 
The other winners are Simon Donaldson from the Simons Center for Geometry and Physics, who studies four-dimensional shapes; Maxim Kontsevich from the Institute of Advanced Scientific Studies in France, a mathematical physicist who won the Fundamental Physics Prize in 2012; and Richard Taylor from the Institute for Advanced Study in Princeton, N.J., a number theorist who in 1995 helped prove Fermat’s last theorem, a famous 300-year-old problem.
 

(Left to right) Maxim Kontsevich, Richard Taylor, and Simon Donaldson


The $3-million payouts dwarf any other scientific prize—including the $1.2-million Nobel, from which mathematicians are excluded. (The prizes are awarded in chemistry, economics, literature, physiology or medicine, peace and physics.) Breakthrough Prize ceremonies are black-tie affairs festooned with A-list celebrities. The last two hosts were actors Kevin Spacey and Morgan Freeman. Milner’s hope is that lavish payouts and star treatment will turn the winning researchers into household names and inspire future generations to pursue math and science as prestigious careers.
 
The new prizes have garnered criticism from some scientists, as reported in The Guardian, for lavishing so much money on a few researchers rather than spreading the money more widely, and for downplaying collaboration. Milner’s goal, however, is to increase the popularity of science by celebrating the scientists. “Dividing [money] in small pieces and distributing it widely has been tried before and it works,” Milner says. “I think the idea behind this initiative is to really focus on raising public awareness.”
 
It will take more than buckets of money, however, to match the prestige and fame conferred by Nobels. The Breakthrough Prize will be a success, Tao says, to the extent that he and the other winners make discoveries that stand the test of time. “Einstein and Dirac and all the other [laureates] really did make huge contributions to science,” he says. “I just hope we can live up to the prize.”
 
Nominations for 2015 Breakthrough Prizes in Fundamental Physics and in Life Sciences are open through June 30 and can be submitted here.
 

With Prizes Like This, Who Needs a Nobel?

June 30, 2014

Mathematical patchwork

newsoffice.mit.edu

Alice Guionnet, an authority on random matrix theory, aims to make sense of huge data sets.

Helen Knight | MIT News Office
June 27, 2014

From the increasing information transmitted through telecommunications systems to that analyzed by financial institutions or gathered by search engines and social networks, so-called “big data” is becoming a huge feature of modern life.

But to analyze all of this incoming data, we need to be able to separate the important information from the surrounding noise. This requires the use of increasingly sophisticated techniques.

Alice Guionnet, a professor of mathematics at MIT, investigates methods to make sense of huge data sets, to find the hidden correlations between apparently random pieces of information, their typical behavior, and random fluctuations. “I consider things called matrices, where you have an array of data,” Guionnet says. “So you take some data at random, put it in a big array, and then try to understand how to analyze it, for example to subtract the noise.”

The field of random matrix theory, as it is known, has grown rapidly over the last 10 years, thanks to the huge rise in the amount of data we produce. The theory is now used in statistics, finance, and telecommunications, as well as in biology to model connections between neurons in the brain, and in physics to simulate the radiation frequencies absorbed and emitted by heavy atoms.
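As a rough illustration of the kind of separation Guionnet describes (this sketch is ours, not drawn from her work), one can compare the eigenvalues of a sample covariance matrix against the bulk predicted by random matrix theory for pure noise; eigenvalues that escape the predicted bulk are candidates for genuine structure rather than noise.

import numpy as np

# Illustrative only: compare the eigenvalues of a sample covariance matrix
# with the pure-noise bulk predicted by the Marchenko-Pastur law.
rng = np.random.default_rng(0)
n_samples, n_features = 2000, 400
q = n_features / n_samples                          # aspect ratio of the data array

X = rng.standard_normal((n_samples, n_features))    # pure noise
X[:, 0] += 3.0 * rng.standard_normal(n_samples)     # inject one strong "signal" direction

C = X.T @ X / n_samples                             # sample covariance matrix
eigvals = np.linalg.eigvalsh(C)

bulk_edge = (1 + np.sqrt(q)) ** 2                   # upper edge of the noise bulk
print("Marchenko-Pastur bulk edge:", round(bulk_edge, 2))
print("eigenvalues above the bulk (likely signal):", eigvals[eigvals > bulk_edge])

The injected direction shows up as an eigenvalue far above the noise bulk, which is the simplest version of "subtracting the noise" from a big random array.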

Mathematics as patchwork

A world-leading researcher in probability, Guionnet has made important theoretical contributions to random matrix theory. In particular, she has made recent advances in understanding large deviations — the probability of finding unlikely events or unusual behavior within the array of data — and in connecting the theory with that of topological expansion, in which random matrices are used to help solve combinatorial questions.

“It’s a bit like when you make a patchwork quilt,” Guionnet says. “So you have all of your pieces of patchwork, and then you go to sew them together so that they make a nice pillow with no holes, and you have many possibilities for how to lay them out,” she says.

Random matrices can be used to calculate the number of ways in which this “patchwork” can be sewn together, Guionnet says. She also considers several of these random arrays simultaneously, to help solve problems in the field of operator algebra.

Guionnet was born in Paris. She completed her master’s degree at the École Normale Supérieure in Paris in 1993, and then moved to the Université Paris-Sud to undertake her PhD. The focus of her PhD was the statistical mechanics of disordered systems, a branch of mathematical physics in which the world around us is modeled down to the level of microscopic particles. In this way, researchers attempt to determine how microscopic interactions affect activity at the macroscopic level.

In particular, Guionnet was interested in objects called spin glasses — disordered magnetic materials that are similar to real glass, in that they appear to be stationary, but which are actually moving, albeit at an incredibly slow rate. “If you looked at the windows of your house millions of years from now, they may be shifting downward as a result of gravity,” she says. “I was attempting to analyze the dynamics of these kinds of systems.”

Before she had completed her PhD, Guionnet was offered a position within the French National Center for Scientific Research (CNRS), and moved to École Normale Supérieure (ENS) Lyon, where she continued to focus on the spin glass model, before branching out into random matrices. “I initially wanted to work in applied mathematics,” Guionnet says. “But as I started to consider questions in random matrix theory, I moved into purer and purer mathematics.”

While at ENS Lyon, she was made a director of research for CNRS, and was given the opportunity to build her own team of top researchers in probability theory.

Making connections

She moved to MIT in 2012, where she continues her work in random matrix theory. In the same year, Guionnet was chosen as one of 21 mathematicians, theoretical physicists, and theoretical computer scientists named as Simons Investigators. Awarded by the Simons Foundation, a private organization that aims to advance research in math and the basic sciences, Simons Investigators each receive $100,000 annually to support their work.

“What I like about my work is that it crosses over into different fields — probability theory, operator algebra, and random matrices — and I’m trying to advance these three theories at the same time,” Guionnet says. “These different fields are all merging and connecting with each other, and that is what I try to understand in my work.”

The opportunity to work with people from different mathematical fields, and to learn new ideas from them, is one of the things Guionnet loves most about the subject. “When you work with people from different fields you begin to make new connections, and get a new point of view on the object you are studying, so it’s kind of exciting,” she says.

What’s more, the math itself is always evolving and progressing, she says: “Mathematics is beautiful.”

Mathematical patchwork

June 30, 2014

Minimal surfaces, maximal impact

newsoffice.mit.edu

MIT mathematics professor William Minicozzi in his office at Building E17. Minicozzi studies the theory of surface tension in solutions.
Photo: Dominick Reuter

MIT mathematician William Minicozzi unleashes ‘a wave of new results’ in geometric analysis.

Helen Knight | MIT News correspondent
June 4, 2014

It’s something children do every day when blowing bubbles: Stick a circular wire in a pot of soapy water, pull it out, and behold the film forming across it.

But it’s not only children who are amused by this phenomenon — which has also kept mathematicians occupied since the 18th century, says William Minicozzi, a professor of mathematics at MIT.

That is because the film that forms across the wire pulls itself as tight as possible in order to minimize its surface tension. This results in a surface that has the least possible area for that fixed boundary. Even if you bend the wire, this so-called “minimal surface” will still form.
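In the standard formulation (added here for concreteness; the article keeps things informal), a minimal surface is a critical point of the area functional, which is equivalent to its mean curvature vanishing at every point:

\[
H \;=\; \frac{\kappa_1 + \kappa_2}{2} \;=\; 0,
\]

where κ1 and κ2 are the principal curvatures of the surface; among all surfaces spanning the given wire, the soap film realizes the least area.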

Mathematicians have studied minimal surfaces theoretically since the 1700s. Then in the 1800s, Belgian physicist Joseph Plateau began experimenting physically with these soapy films. He asked whether, for every possible curve into which the wire could be bent, a minimal surface would form with that shape as its boundary. Although intuition suggests that it should, there is no way to physically test the infinite number of possible variations that could be made to the shape of the wire in order to provide mathematical proof, Minicozzi says.

A top geometric analyst

Answering Plateau’s question — and addressing subsequent conjectures on the properties of complex minimal surfaces — has kept mathematicians busy ever since. The most notable of these researchers in recent years have been Minicozzi and his colleague Tobias Colding, the Cecil and Ida Green Distinguished Professor of Mathematics at MIT. Together, Minicozzi and Colding are widely considered to be the world’s leading geometric analysts of their generation.

In 2004 the duo jointly published a series of papers in the Annals of Mathematics that resolved a number of longstanding conjectures in the field; this earned them the prestigious Oswald Veblen Prize in Geometry.

Of particular interest to Minicozzi and Colding was whether it is possible to describe what all minimal surfaces look like. “Of course there are infinitely many possible minimal surfaces, because there are infinitely many possible ways of bending the curve, so you couldn’t list them all,” Minicozzi says. “But could you describe the way that they are all made? Could you give a recipe for building any minimal surface?”

The pair proved that all so-called “embedded minimal surfaces” — those that are not self-intersecting — can be cut into a collection of simple pieces where each is very flat (like a plane) or part of a helicoid. Helicoids are shaped like double-spiral staircases; the structures can also be found in many parking garages, Minicozzi says.

“If you drive into a parking garage and go up a level, that spiral ramp is part of a helicoid,” he says. “And one of the things we were able to show was that every embedded minimal surface could be built out of these things. So the minimal surface either looks like a nice flat thing where the area is bounded, or it looks exactly like one of these double spiral staircases.”
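A standard parametrization of the helicoid (added here for concreteness, not taken from the article) is

\[
(u, v) \;\longmapsto\; (u\cos v,\; u\sin v,\; c\,v), \qquad u, v \in \mathbb{R},
\]

which sweeps a horizontal line around and up a vertical axis, producing the double-spiral-staircase shape; together with the plane, it is one of the two building blocks in the decomposition Minicozzi describes.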

‘A wave of new results’

Awarding Minicozzi and Colding the Veblen Prize in 2010, the American Mathematical Society said the “profound” work had yielded a “remarkable global picture” for bounded minimal surfaces, and had “initiated a wave of new results.”

Minicozzi graduated from Princeton University in 1990, and moved to Stanford University to complete his PhD. He first began working with Colding in 1994, when the researchers were both at the Courant Institute at New York University. Together they solved a conjecture of Shing-Tung Yau that had been open since the 1970s concerning Riemannian manifolds, or curved spaces.

They continued collaborating after Minicozzi joined Johns Hopkins University in 1994, where he became a professor of mathematics in 2000 and a Krieger-Eisenhower Professor in 2007.

In 2012 Minicozzi joined MIT, where he was reunited with Colding; the researchers have recently been investigating how surfaces change over time, a process known as mean curvature flow. In particular, they have been looking at the formation of singularities, or conelike bumps where the smoothness of a surface breaks down.

They have been able to prove, he says, that of the infinite number of singularities that could possibly affect a surface through this curvature flow, only two types are stable enough to survive in reality. “If you were to wiggle your surface ever so slightly, in fact only two are stable of the infinitely many that are possible,” Minicozzi says. “So if you are trying to understand [mean curvature flow] and you have to deal with all of these cases, it’s much better to have to deal with two cases than an infinite number.”
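In formulas (standard in the field, not spelled out in the article), mean curvature flow moves each point of the surface with speed given by the mean curvature, and the self-similar "shrinkers" that model singularities satisfy a fixed equation:

\[
\partial_t x \;=\; -H\,\mathbf{n}, \qquad \text{shrinkers: } H \;=\; \frac{\langle x, \mathbf{n}\rangle}{2},
\]

where H is the mean curvature and n the unit normal. The two stable singularity models referred to here are the round sphere and the cylinder.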

This year Minicozzi and Colding have answered another open question in curvature flow: whether a given singularity — known as a “shrinker” — will appear different when viewed at different levels of magnification. “If you look at it under a more powerful microscope you may see an entirely different shrinker,” Minicozzi says.

Still fascinated by the field, the two researchers have already moved on to attempting to solve another longstanding conjecture in curvature flow.

Minimal surfaces, maximal impact

June 30, 2014

Of Dinosaurs and Mathematics

www.oist.jp

Professor Robert Sinclair

Dinosaurs and mathematics do not seem like an obvious pair, but for Professor Robert Sinclair and his Mathematical Biology Unit, they are a logical match. Sinclair was part of a team that recently published a paper in Alcheringa: An Australasian Journal of Palaeontology that reexamined the classification of a dinosaur bone found in Australia. Using his expertise in mathematics, Sinclair was able to help the paleontologists reclassify a single arm bone as belonging to a dinosaur family previously believed not to have existed in the Southern Hemisphere. Sinclair contributed to an international, interdisciplinary collaboration that may lead to revisions in the current thinking about how continents were connected in the ancient world.

The bone in question, an ulna, or arm bone, was found in southern Australia. The researchers named this new species of dinosaur Serendipaceratops arthurcclarkei, and classified it as belonging to the Neoceratopsia family, which includes the famous dinosaur Triceratops. Not long after that paper was published, another research group published a paper arguing that the bone could not be ceratopsian, partly because that family of dinosaurs was known only from the Northern Hemisphere and the land masses had already split, so a bone from that family could not have turned up in Australia. One caveat to this logic is that the timing of the continental split is itself inferred from fossil data. “It becomes a chicken and egg scenario,” says Sinclair. If the evidence used to establish a theory is contradicted by an unexpected find, the theory should be challenged, which is not an easy thing to do.

This is where Sinclair can use mathematics to provide solid evidence for one theory or another. He is interested in using mathematics to solve difficult problems in fields of research where current methodologies are not sufficient. He was attracted to paleontology for this reason, particularly in his native Australia. After being invited to speak at OIST, Dr. Thomas Rich, one of the Australian paleontologists on the paper, asked Sinclair for help in showing that the bone he had analyzed belonged to Ceratopsia.

Sinclair went about investigating whether the dimensions and characteristics of the bone matched other members of the Ceratopsia family, or whether they matched a different family. Sinclair said the challenge was to “use mathematics in a field where it is not commonly used or well understood and utilize it in a way that is understandable to those in the field.” First he had to find a characteristic that could be measured both on the bone in question and on the same type of bone in other species and families of dinosaurs: in this case, the flatness of the bone. He also had to account mathematically for variability in the bones, since fossils tend to become broken or deformed over time. Some paleontologists remained skeptical of what the mathematics really meant. This is where the hard work began: Sinclair had to find other measurements to make and to apply several different mathematical techniques, showing that they all reached the same conclusion.
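A purely illustrative sketch of this kind of comparison (our own toy example with made-up placeholder numbers, not Sinclair's actual measurements or methods): compute a flatness ratio for the unknown bone and see how many standard deviations it lies from each reference group.

import numpy as np

# Toy illustration only: compare the "flatness" of an unknown bone with two
# reference groups using a simple z-score. All numbers are placeholders.
def flatness(width, thickness):
    """Flatness ratio of a bone cross-section: width divided by thickness."""
    return width / thickness

def z_score(value, reference):
    """How many standard deviations `value` lies from the reference group's mean."""
    reference = np.asarray(reference, dtype=float)
    return (value - reference.mean()) / reference.std(ddof=1)

group_a = [1.8, 2.0, 1.9, 2.1, 1.95]    # hypothetical reference family A (made-up values)
group_b = [1.2, 1.3, 1.25, 1.35, 1.28]  # hypothetical reference family B (made-up values)

unknown = flatness(width=42.0, thickness=21.5)   # made-up measurements
print(f"z vs group A: {z_score(unknown, group_a):+.2f}")
print(f"z vs group B: {z_score(unknown, group_b):+.2f}")
# The smaller |z| suggests which reference group the unknown bone resembles more;
# Sinclair's real analysis combined several such measurements and techniques.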

In the end, he presented three different types of mathematical evidence to convince the skeptical paleontologists that the bone in question belonged to the Ceratopsia family. The driving point for Sinclair is using mathematics to tackle difficult questions when conventional methods in the field are not sufficient. His statistical analysis, combined with other analyses provided by the co-authors, was convincing enough to put the bone back into the Ceratopsia family.

Sinclair says he is excited about people finding more dinosaur bones in Australia to see how they challenge the current thinking about what did and did not exist on the continent. As for his other endeavors, he looks forward to working in new fields and figuring out the “dance of what you would do as a mathematician and what is accepted in that community.” With that goal in mind, it is easy to see how dinosaurs and mathematics formed a logical pair for Sinclair.

You can read the article here:

http://www.tandfonline.com/doi/full/10.1080/03115518.2014.894809#.U3MMH62SwmY

By: Kathleen Estes

Of Dinosaurs and Mathematics

June 30, 2014

Strange physics turns off laser (Nature Communications)

blogs.princeton.edu

An electron microscope image shows two lasers placed just two microns apart from each other.
(Image source: Türeci lab)

By Steve Schultz, School of Engineering Office of Communications

Inspired by anomalies that arise in certain mathematical equations, researchers have demonstrated a laser system that paradoxically turns off when more power is added rather than becoming continuously brighter.

The finding by a team of researchers at Vienna University of Technology and Princeton University could lead to new ways to manipulate the interaction of electronics and light, an important tool in modern communications networks and high-speed information processing.

The researchers published their results June 13 in the journal Nature Communications.

Their system involves two tiny lasers, each one-tenth of a millimeter in diameter, or about the width of a human hair. The two are nearly touching, separated by a distance 50 times smaller than the lasers themselves. One is pumped with electric current until it starts to emit light, as is normal for lasers. Power is then added slowly to the other, but instead of it also turning on and emitting even more light, the whole system shuts off.

“This is not the normal interference that we know,” said Hakan Türeci, assistant professor of electrical engineering at Princeton, referring to the common phenomenon of light waves or sound waves from two sources cancelling each other. Instead, he said, the cancellation arises from the careful distribution of energy loss within an overall system that is being amplified.


Manipulating minute areas of gain and loss within individual lasers (shown as peaks and valleys in the image),
researchers were able to create paradoxical interactions between two nearby lasers.
(Image source: Türeci lab)

“Loss is something you normally are trying to avoid,” Türeci said. “In this case, we take advantage of it and it gives us a different dimension we can use – a new tool – in controlling optical systems.”

The research grows out of Türeci’s longstanding work on mathematical models that describe the behavior of lasers. In 2008, he established a mathematical framework for understanding the unique properties and complex interactions that are possible in extremely small lasers – devices with features measured in micrometers or nanometers. Unlike conventional desktop lasers, these devices fit on a computer chip.

That work opened the door to manipulating gain or loss (the amplification or loss of an energy input) within a laser system. In particular, it allowed researchers to judiciously control the spatial distribution of gain and loss within a single system, with one tiny sub-area amplifying light and an immediately adjacent area absorbing the generated light.

Türeci and his collaborators are now using similar ideas to pursue counterintuitive ideas for using distribution of gain and loss to make micro-lasers more efficient.

The researchers’ ideas for taking advantage of loss derive from their study of mathematical constructs called “non-Hermitian” matrices in which a normally symmetric table of values becomes asymmetric. Türeci said the work is related to certain ideas of quantum physics in which the fundamental symmetries of time and space in nature can break down even though the equations used to describe the system continue to maintain perfect symmetry.
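A minimal illustration of such an exceptional point (a generic textbook example, not the specific model used in the paper): two coupled modes, one with gain and one with loss, are described by a 2×2 non-Hermitian matrix whose eigenvalues coalesce when the gain/loss imbalance equals the coupling.

import numpy as np

# Generic two-mode illustration: gain +g on one mode, loss -g on the other,
# coupling kappa. The eigenvalues of this non-Hermitian matrix coalesce at
# the "exceptional point" g = kappa.
def mode_frequencies(g, kappa=1.0):
    H = np.array([[1j * g, kappa],
                  [kappa, -1j * g]])
    return np.linalg.eigvals(H)

for g in (0.5, 1.0, 1.5):
    print(f"g = {g:.1f}  eigenvalues: {np.round(mode_frequencies(g), 3)}")
# For g < kappa the two eigenvalues are real and distinct (two oscillation
# frequencies); for g > kappa they become purely imaginary (one mode amplified,
# one damped), which is the hallmark of crossing an exceptional point.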

Over the past several years, Türeci and his collaborators at Vienna worked to show how the mathematical anomalies at the heart of this work, called “exceptional points,” could be manifested in an actual system. In 2012 (Ref. 3), the team published a paper in the journal Physical Review Letters demonstrating computer simulations of a laser system that shuts off as energy is being added. In the current Nature Communications paper, the researchers created an experimental realization of their theory using a light source known as a quantum cascade laser.

The researchers report in the article that results could be of particular value in creating “lab-on-a-chip” devices – instruments that pack tiny optical devices onto a single computer chip. Understanding how multiple optical devices interact could provide ways to manipulate their performance electronically in previously unforeseen ways. Taking advantage of the way loss and gain are distributed within tightly coupled laser systems could lead to new types of highly accurate sensors, the researchers said.

“Our approach provides a whole new set of levers to create unforeseen and useful behaviors,” Türeci said.

The work at Vienna, including the creation and demonstration of the actual device, was led by Stefan Rotter, along with Martin Brandstetter, Matthias Liertzer, C. Deutsch, P. Klang, J. Schöberl, G. Strasser and K. Unterrainer. Türeci participated in the development of the mathematical models underlying the phenomena. The work on the 2012 computer simulation of the system also included Li Ge, who was a post-doctoral researcher at Princeton at the time and is now an assistant professor at City University of New York.

The work was funded by the Vienna Science and Technology Fund and the Austrian Science Fund, as well as by the National Science Foundation through a major grant for the Mid-Infrared Technologies for Health and the Environment Center based at Princeton and by the Defense Advanced Research Projects Agency.

Read the abstract.

M. Brandstetter, M. Liertzer, C. Deutsch, P. Klang, J. Schöberl, H. E. Türeci, G. Strasser, K. Unterrainer & S. Rotter. Reversing the pump dependence of a laser at an exceptional point. Nature Communications, 13 June 2014. DOI: 10.1038/ncomms5034

Science, 2 May 2008. DOI: 10.1126/science.1155311

Physical Review Letters, 24 April 2012. DOI: 10.1103/PhysRevLett.108.173901

Strange physics turns off laser (Nature Communications)

June 30, 2014

Collaborative learning — for robots

newsoffice.mit.edu
Algorithm lets independent agents collectively produce a machine-learning model without aggregating data.

Larry Hardesty | MIT News Office
June 25, 2014

Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. It’s also the technique that autonomous robots typically use to build models of their environments.

That type of model-building gets complicated, however, in cases in which clusters of robots work as teams. The robots may have gathered information that, collectively, would produce a good model but which, individually, is almost useless. If constraints on power, communication, or computation mean that the robots can’t pool their data at one location, how can they collectively build a model?

At the Uncertainty in Artificial Intelligence conference in July, researchers from MIT’s Laboratory for Information and Decision Systems will answer that question. They will present an algorithm in which distributed agents — such as robots exploring a building — collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses.

In experiments involving several different data sets, the researchers’ distributed algorithm actually outperformed a standard algorithm that works on data aggregated at a single location.

“A single computer has a very difficult optimization problem to solve in order to learn a model from a single giant batch of data, and it can get stuck at bad solutions,” says Trevor Campbell, a graduate student in aeronautics and astronautics at MIT, who wrote the new paper with his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics. “If smaller chunks of data are first processed by individual robots and then combined, the final model is less likely to get stuck at a bad solution.”

Campbell says that the work was motivated by questions about robot collaboration. But it could also have implications for big data, since it would allow distributed servers to combine the results of their data analyses without aggregating the data at a central location.

“This procedure is completely robust to pretty much any network you can think of,” Campbell says. “It’s very much a flexible learning algorithm for decentralized networks.”

Matching problem

To get a sense of the problem Campbell and How solved, imagine a team of robots exploring an unfamiliar office building. If their learning algorithm is general enough, they won’t have any prior notion of what a chair is, or a table, let alone a conference room or an office. But they could determine, for instance, that some rooms contain a small number of chair-shaped objects together with roughly the same number of table-shaped objects, while other rooms contain a large number of chair-shaped objects together with a single table-shaped object.

Over time, each robot will build up its own catalogue of types of rooms and their contents. But inaccuracies are likely to creep in: One robot, for instance, might happen to encounter a conference room in which some traveler has left a suitcase and conclude that suitcases are regular features of conference rooms. Another might enter a kitchen while the coffeemaker is obscured by the open refrigerator door and leave coffeemakers off its inventory of kitchen items.

Ideally, when two robots encountered each other, they would compare their catalogues, reinforcing mutual observations and correcting omissions or overgeneralizations. The problem is that they don’t know how to match categories. Neither knows the label “kitchen” or “conference room”; they just have labels like “room 1” and “room 3,” each associated with different lists of distinguishing features. But one robot’s room 1 could be another robot’s room 3.

With Campbell and How’s algorithm, the robots try to match categories on the basis of shared list items. This is bound to lead to errors: One robot, for instance, may have inferred that sinks and pedal-operated trashcans are distinguishing features of bathrooms, another that they’re distinguishing features of kitchens. But they do their best, combining the lists that they think correspond.
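As a toy illustration of this matching step (our simplification, not the decentralized variational inference algorithm described in the paper), two robots' room categories can be aligned greedily by how much their feature lists overlap:

# Toy illustration: greedily match one robot's room categories to another's
# by the overlap of their feature lists (Jaccard similarity).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def match_categories(cats_a, cats_b):
    """Return a mapping from robot A's labels to robot B's best-overlapping labels."""
    pairs = sorted(
        ((jaccard(fa, fb), la, lb)
         for la, fa in cats_a.items()
         for lb, fb in cats_b.items()),
        reverse=True,
    )
    mapping, used = {}, set()
    for score, la, lb in pairs:
        if la not in mapping and lb not in used and score > 0:
            mapping[la] = lb
            used.add(lb)
    return mapping

robot_a = {"room 1": ["table", "chairs", "whiteboard", "suitcase"],
           "room 2": ["sink", "coffeemaker", "fridge"]}
robot_b = {"room 3": ["table", "chairs", "whiteboard"],
           "room 4": ["sink", "fridge", "trashcan"]}
print(match_categories(robot_a, robot_b))   # {'room 1': 'room 3', 'room 2': 'room 4'}

The imperfect matches (the suitcase, the missing coffeemaker) mirror the errors described above, which is why the actual algorithm repeatedly rematches the source lists as new robots are encountered.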

When either of those robots meets another robot, it performs the same procedure, matching lists as best it can. But here’s the crucial step: It then pulls out each of the source lists independently and rematches it to the others, repeating this process until no reordering results. It does this again with every new robot it encounters, gradually building more and more accurate models.

Imposing order

This relatively straightforward procedure results from some pretty sophisticated mathematical analysis, which the researchers present in their paper. “The way that computer systems learn these complex models these days is that you postulate a simpler model and then use it to approximate what you would get if you were able to deal with all the crazy nuances and complexities,” Campbell says. “What our algorithm does is sort of artificially reintroduce structure, after you’ve solved that easier problem, and then use that artificial structure to combine the models properly.”

In a real application, the robots probably wouldn’t just be classifying rooms according to the objects they contain: They’d also be classifying the objects themselves, and probably their uses. But Campbell and How’s procedure generalizes to other learning problems just as well.

The example of classifying rooms according to content, moreover, is similar in structure to a classic problem in natural language processing called topic modeling, in which a computer attempts to use the relative frequency of words to classify documents according to topic. It would be wildly impractical to store all the documents on the Web in a single location, so that a traditional machine-learning algorithm could provide a consistent classification scheme for all of them. But Campbell and How’s algorithm means that scattered servers could churn away on the documents in their own corners of the Web and still produce a collective topic model.

“Distributed computing will play a critical role in the deployment of multiple autonomous agents, such as multiple autonomous land and airborne vehicles,” says Lawrence Carin, a professor of electrical and computer engineering and vice provost for research at Duke University. “The distributed variational method proposed in this paper is computationally efficient and practical. One of the keys to it is a technique for handling the breaking of symmetries manifested in Bayesian inference. The solution to this problem is very novel and is likely to be leveraged in the future by other researchers."

Collaborative learning — for robots

June 30, 2014

Equations reveal the rebellious rhythms at the heart of nature

www.lancaster.ac.uk
19 June 2014

Physicists are using equations to reveal the hidden complexities of the human body.

From the beating of our hearts to the proper functioning of our brains, many systems in nature depend on collections of ‘oscillators’: perfectly coordinated, rhythmic systems working together in flux, like the cardiac muscle cells in the heart.

Unless they act together, not much happens. But when they do, powerful changes occur. Cooperation between neurons results in brain waves and cognition, synchronized contractions of cardiac cells cause the whole heart to contract and pump the blood around the body. Lasers would not function without all the atomic oscillators acting in unison. Soldiers even have to break step when they reach a bridge in case oscillations caused by their marching feet cause the bridge to collapse.

But sometimes those oscillations go wrong.

Writing in the journal Nature Communications, scientists at Lancaster University report the possibility of “glassy states” and a “super-relaxation” phenomenon, which might appear in the networks of tiny oscillators within the brain, heart and other oscillating entities.

To uncover these phenomena, they took a new approach to the solution of a set of equations proposed by the Japanese scientist Yoshiki Kuramoto in the 1970s. His theory showed it was possible in principle to predict the properties of a system as a whole from a knowledge of how oscillators interacted with each other on an individual basis.

Therefore, by looking at how the microscopic cardiac muscle cells interact we should be able to deduce whether the heart as a whole organ will contract properly and pump the blood round. Similarly, by looking at how the microscopic neurons in the brain interact, we might be able to understand the origins of whole-brain phenomena like thoughts, or dreams, or amnesia, or epileptic fits.
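In its simplest form (standard in the literature, though not written out in the press release), the Kuramoto model couples N phase oscillators with natural frequencies ω_i through a mean-field sine interaction:

\[
\dot{\theta}_i \;=\; \omega_i \;+\; \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j - \theta_i), \qquad i = 1, \dots, N,
\]

where K is the coupling strength; the degree of synchronization is measured by the order parameter \( r e^{i\psi} = \frac{1}{N}\sum_{j} e^{i\theta_j} \). The Lancaster work concerns far more general versions of these equations.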

Physicists Dmytro Iatsenko, Professor Peter McClintock, and Professor Aneta Stefanovska have reported a far more general solution of the Kuramoto equations than anyone has achieved previously, with some quite unexpected results.

One surprise is that the oscillators can form “glassy” states, where they adjust the tempos of their rhythms but otherwise remain uncoordinated with each other, thus giving birth to some kind of “synchronous disorder” rather like the disordered molecular structure of window glass. Furthermore, and even more astonishingly, under certain circumstances the oscillators can behave in a totally independent manner despite being tightly coupled together, a phenomenon the authors call “super-relaxation”.

These results raise intriguing questions. For example, what does it mean if the neurons of your brain get into a glassy state?

Dmytro Iatsenko, the PhD student who solved the equations, admitted the results posed more questions than they answered.

“It is not fully clear yet what it might mean if, for example, this happened in the human body, but if the neurons in the brain could get into a ‘glassy state’ there might be some strong connection with states of the mind, or possibly with disease.”

Lead scientist Professor Aneta Stefanovska said: “With populations of oscillators, the exact moment when something happens is far more important than the strength of the individual event. This new work reveals exotic changes that can happen to large-scale oscillations as a result of alterations in the relationships between the microscopic oscillators. Because oscillations occur in myriads of systems in nature and engineering, these results have broad applicability.”

Professor Peter McClintock said: “The outcome of the work opens doors to many new investigations, and will bring enhanced understanding to several seemingly quite different areas of science.”

Equations reveal the rebellious rhythms at the heart of nature

June 30, 2014

New Math Technique Improves Atomic Property Predictions to Historic Accuracy

www.nist.gov
From NIST Tech Beat: June 25, 2014

By combining advanced mathematics with high-performance computing, scientists at the National Institute of Standards and Technology (NIST) and Indiana University (IU) have developed a tool that allowed them to calculate a fundamental property of most atoms on the periodic table to historic accuracy—reducing error by a factor of a thousand in many cases. The technique also could be used to determine a host of other atomic properties important in fields like nuclear medicine and astrophysics.*

NIST's James Sims and IU's Stanley Hagstrom have calculated the base energy levels for the four electrons in the element beryllium, an accomplishment that has required nearly an entire career's effort on Sims' part. Precise determination of the base energy—crucial for determining the amount necessary to raise an atom from its base energy level to any level higher—has great intrinsic value for fundamental atomic research, but the team's technique has implications far broader than for a single element.

Sims says the technique allowed the calculation of excitation states with eight-decimal accuracy, resulting in a remarkably smooth curve that they expected theoretically but were not sure they would attain in practice. For the vast majority of the elements in the periodic table, the calculated results are a thousand times more accurate than previous values. The results, according to Sims, suggest their method could enable computation of other atomic properties—electron affinity and ionization potential, for example—that are important for nuclear medicine, astrophysics and other fields of atomic research. A form of cancer treatment now under development called boron neutron capture therapy may benefit from these calculations, he says.

Their method is the culmination of decades of effort aimed at using quantum mechanics to predict base energy levels from first principles. Sims first proposed in the late 1960s that such a quantum approach could be possible, but the complex calculations involved were beyond the reach of the world's best computers. Only in 2006, after the advent of parallel computing—linking many computers together as a unified cluster—were he and Hagstrom able to create workable algorithms for calculating the energies for a two-electron hydrogen molecule more accurately than could be done experimentally. Then, in 2010, they improved the algorithms to bring lithium's three electrons within reach.**

Beryllium's four electrons proved a new hurdle, but perhaps the last significant one. Much of the difficulty stems from the fact that mutual repulsion among the electrons, combined with their attraction for the nucleus, creates a complex set of interacting forces that are at least time-consuming, if not practically impossible, to calculate. The complexity grows with the addition of each new electron, but the team found a mathematical approach that can reduce an atom's electron cloud to a group of problems, none of which are more complex than solving a four-electron system.

Calling their approach a shortcut would be in some ways a misnomer. Where the calculation for lithium required a cluster of 32 parallel processors, beryllium required 256, and even then, the cluster needed to operate at extremely high efficiency for days. But the payoff was that they could calculate the energies for all four-electron ground states—meaning not only all of the elements in beryllium's column on the periodic table, each of which has four electrons in its outer shell, but also for all other elements in ionized states that have four electrons, such as boron with one electron missing, carbon missing two, and so forth, on up to the as-yet-unnamed element 113.

*J.A. Sims and S.A. Hagstrom. Hylleraas-configuration-interaction nonrelativistic energies for the 1S ground states of the beryllium isoelectronic sequence. Journal of Chemical Physics, DOI 10.1063/1.4881639, June 11, 2014.

**See the 2010 Tech Beat article, "Theorists Close In on Improved Atomic Property Predictions" at www.nist.gov/public_affairs/tech-beat/tb20100112.cfm#atomic.

New Math Technique Improves Atomic Property Predictions to Historic Accuracy

June 30, 2014

The great salmon run algorithm: How to swim against the stream and avoid the grizzlies

www.sciencedaily.com

June 24, 2014

Solving complex problems is rarely a straightforward process: there are often many variables and countless plausible solutions, each of which has its pros and cons. Mechanical engineers at the Babol University of Technology in Mazandaran, Iran, have turned to nature to devise an algorithm based on the survival trials faced by salmon swimming upstream to the spawning grounds to help them fish out the optimal solution to a given problem. They provide details in the International Journal of Computer Applications in Technology.

Bio-inspiration has been widely used in problem solving, with so-called genetic algorithms plucking the best, or fittest, solutions from possible answers to a complex problem, modifying the answers slightly at random and then testing them again. Repeating the process allows problem solvers to find an optimal answer through a process akin to survival of the fittest in nature. Other problems, such as the travelling salesperson problem, have been approached by mimicking how ants forage for food and create the shortest paths between nest and food source. Similarly, the flight of the honeybee from hive to floral patch and back has been emulated in computer algorithms. There are also so-called particle swarm optimization algorithms that mimic the social behavior of flocking birds or schooling fish. These approaches have their benefits, but they are also unsuitable for certain engineering problems.
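For readers unfamiliar with the genetic-algorithm loop just described, here is a deliberately minimal sketch (generic and illustrative; it is not the TGSR algorithm itself): keep the fittest candidates, perturb them slightly at random, and repeat.

import random

# Minimal generic genetic-algorithm loop: keep the fittest candidate
# solutions, mutate them slightly, and repeat.
def evolve(fitness, dim=5, pop_size=30, generations=200, sigma=0.3, elite=10):
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)                 # lower fitness = better
        parents = population[:elite]                 # "survival of the fittest"
        population = parents + [
            [gene + random.gauss(0, sigma) for gene in random.choice(parents)]
            for _ in range(pop_size - elite)         # mutated offspring
        ]
    return min(population, key=fitness)

# Example: minimize the sphere function, whose optimum is the origin.
best = evolve(lambda x: sum(v * v for v in x))
print([round(v, 3) for v in best])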

The selective pressure on the upstream struggle of salmon offered alternative natural inspiration for Alireza Fathi and Ahmad Mozaffari. The pair realized that genetic algorithms would not cope well with certain engineering problems where many constraints on plausible solutions must be applied.

The team explains that the annual salmon run sees millions of fish migrating through mountain streams to the spawning area where the fish themselves originated. The upstream swim is fraught with hazards, not least high waterfalls, people hoping to hook the fish, and hungry grizzly bears. Grizzlies themselves follow cooperative swarm intelligence rules for seeking out and fishing for the plumpest salmon navigating the waters of their territory. Humans too will scout a large area and then congregate where the fishing is best. There are also salmon sharks to avoid.

On the part of the salmon, they too split into groups as they head inland along the waterways and, through instinct and observation, find a navigable route upstream. Some will pass through more lakes and ponds; others will opt for forested waterways and waterfalls. Grizzlies and sharks hunt the forested streams, while humans tend to fish the open water, each with varying degrees of success. The team has codified the various hazards facing the salmon, and the salmon themselves thus become individual solutions to a problem fed into the computer algorithm. The great salmon run (TGSR) algorithm therefore becomes a simulation of the actual salmon run, but allows the team to home in on specific solutions to a problem that are optimal in the sense of reaching the spawning grounds unfettered by grizzly, shark or human.

The team has now successfully applied their algorithm to 25 standard benchmarking problems in engineering and to finding the best setup for a new type of laser. They compared the results with those obtainable using conventional, better-known genetic algorithms, particle swarm optimization techniques and artificial bee colony approaches. "In most cases, the TGSR algorithm worked better than the other methods," the team says. "Moreover, for some problems it was quicker at converging on an optimal solution."

Journal Reference:

Fathi, A. and Mozaffari, A. TGSR: the great salmon run optimisation algorithm. Int. J. Computer Applications in Technology, 2014

The great salmon run algorithm: How to swim against the stream and avoid the grizzlies

June 30, 2014

Turing test success marks milestone in computing history

www.reading.ac.uk

Release Date 08 June 2014

An historic milestone in artificial intelligence set by Alan Turing - the father of modern computer science - has been achieved at an event organised by the University of Reading.

The iconic, 65-year-old Turing Test was passed for the very first time by the computer programme Eugene Goostman during Turing Test 2014, held at the renowned Royal Society in London on Saturday.

'Eugene' simulates a 13-year-old boy and was developed in Saint Petersburg, Russia. The development team includes Eugene's creator Vladimir Veselov, who was born in Russia and now lives in the United States, and Ukrainian-born Eugene Demchenko, who now lives in Russia.

The Turing Test is based on 20th-century mathematician and code-breaker Alan Turing's famous 1950 question-and-answer game, ‘Can Machines Think?’. The experiment investigates whether people can detect if they are talking to machines or humans. The event is particularly poignant as it took place on the 60th anniversary of Turing's death, nearly six months after he was given a posthumous royal pardon.

If a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations, it passes the test. No computer had ever achieved this, until now. Eugene managed to convince 33% of the human judges (30 judges took part - see more details below) that it was human.

This historic event was organised by the University's School of Systems Engineering in partnership with RoboLaw, an EU-funded organisation examining the regulation of emerging robotic technologies.

Professor Kevin Warwick, a Visiting Professor at the University of Reading and Deputy Vice-Chancellor for Research at Coventry University, said: "In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human. It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British Science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.

"Some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved the most simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing's Test was passed for the first time on Saturday.

"Of course the Test has implications for society today. Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime. The Turing Test is a vital tool for combatting that threat. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true...when in fact it is not."

Eugene was one of five computer programmes battling it out for the Turing Test 2014 Prize. On winning the competition and achieving this historic milestone, Vladimir Veselov said:

"I want to congratulate everyone who worked on Eugene Goostman. Our whole team is very excited with this result. It's a remarkable achievement for us and we hope it boosts interest in artificial intelligence and chatbots. Special thanks to Professor Kevin Warwick and Dr Huma Shah for their effort in organising the event.

"Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality. This year we improved the 'dialog controller' which makes the conversation far more human-like when compared to programs that just answer questions. Going forward we plan to make Eugene smarter and continue working on improving what we refer to as 'conversation logic'."

Among the judges tasked with separating the human and computer participants were the actor Robert Llewellyn, who played robot Kryten in the sci-fi comedy TV series Red Dwarf, and Lord Sharkey, who led the successful campaign for Alan Turing's posthumous pardon last year.

Professor Warwick concluded: "Not long before he died on 7 June 1954 Alan Turing, himself a Fellow of the Royal Society, predicted that in time this test would be passed. It is difficult to conceive that he could possibly have imagined what computers of today, and the networking that links them, would be like."

Further details about the Turing Test 2014:

Simultaneous tests as specified by Alan Turing
Each judge was involved in five parallel tests - so 10 conversations
30 judges took part
In total 300 conversations
In each five minutes a judge was communicating with both a human and a machine
Each of the five machines took part in 30 tests
To ensure accuracy of results, Test was independently adjudicated by Professor John Barnden, University of Birmingham, formerly head of British AI Society
This story has been updated to clarify the description of 'Eugene' as a computer programme rather than a 'supercomputer'

Turing test success marks milestone in computing history

June 30, 2014

Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World,
by Amir Alexander

www.timeshighereducation.co.uk
19 JUNE 2014

Robyn Arianrhod on a compelling tale of Jesuits, geometry and heresy in the turbulent 17th century

One of the joys of modern academic life is the multicultural camaraderie of free and robust intellectual debate. In the mathematical sciences in particular, ideally we can put aside the world’s religious and political turmoil, along with our own personal and cultural differences, in order to discuss universal scientific ideas in the peace and civility of university tea rooms. This isn’t just a superficial modern gloss: it is an ideal with a long and fascinating history, as Amir Alexander shows in Infinitesimal.

In the early 1640s, amid the terror and chaos of the English Civil War, a group of mathematicians and “natural philosophers” (physicists) began to meet regularly at each other’s homes in London. Eventually, their “invisible college” would become chartered as the Royal Society of London – now one of the world’s oldest and most respected scientific bodies – but in those early years, the purpose of these gatherings “was no more than breathing freer air, and of conversing in quiet with one another, without being engaged in the passions and madness of that dismal age”. These poignant words are those of the Royal Society’s first historian, Thomas Sprat, and Alexander effortlessly elucidates the religious, political and class “passions” that polarised and divided England at that time. But the unique aspect of his telling of this history is his contention that such political and religious polarisation was reflected in a mathematical debate.

‘Infinitesimals’ were uncertainties that troubled both the Jesuits and Thomas Hobbes for mathematical as well as political reasons

Consequently, Infinitesimal’s historical reach includes not only England but also Italy, because the end result of this debate was different in each country. Alexander begins his story in Rome, a decade before those first clandestine meetings of the Royal Society, but the backdrop is similar: a cacophony of dogmatic voices, unleashed in this case by the Reformation and Counter-Reformation, which sparked the politico-religious dramas of the Thirty Years War. It is 1632, a year that witnesses the events leading up to the trial of Galileo Galilei, charged with heresy for claiming as a proven fact Copernicus’ theory that the Earth moves around the Sun. But Alexander focuses instead on a lesser-known, more subtle “heresy”: that the continuum, or real line, is composed of tiny “indivisibles” or “atoms”. Five Jesuit “Revisors” – who decide what can and cannot be taught in Jesuit schools – meet on 10 August 1632 to discuss this proposition, and pronounce it both “improbable” and contrary to the teachings of Aristotle, the Catholics’ favoured authority on science and philosophy. In other words, it is “condemned and prohibited”.

It’s an astonishing way to begin a book, but this is no trite tale of anti-scientific Catholicism: Alexander then backtracks to show the intellectual and spiritual depth behind such a prohibition, through a thrilling story of the rise of the Jesuits and their desperate efforts to lead war-torn Europe back to the certainties and order of Catholic absolutism. Later, in part two, he juxtaposes this story with that of the English political philosopher and geometer Thomas Hobbes. Instead of seeking religious absolutism – something he despised – Hobbes’ solution to the anarchy of the Interregnum was a model in which the people cede power to an absolute sovereign (a king, an aristocracy or an elected body) who will protect them from the war and chaos that would otherwise overwhelm them. As with Catholic absolutism, Hobbes’ sovereign decrees what opinions are allowed, the goal being to preserve the peace. Both Hobbes and the Jesuits backed up their political ideals by appealing to Euclidean geometry, with its orderly deductive proofs yielding (apparently) absolute truths.

Rigorous geometry is then contrasted with early modern attempts to grapple with the paradoxes and uncertainties in the mathematics of “infinitesimals” – uncertainties that troubled the Jesuits and Hobbes for mathematical as well as political reasons. These paradoxes would not be resolved until the 19th century, when the “limit” concept was made rigorous. The focus here, however, is the 17th century, when innovative mathematicians showed that infinitesimal methods were useful even if they were not yet rigorous. Extremely useful, in fact: they underlie calculus, the mathematics of change that has made possible modern theoretical and applied physics.

Part one counterpoises the Jesuit hierarchy’s insistence on teaching only time-honoured Euclidean geometry with the innovative use of infinitesimals by men such as Galileo, who used them in deducing his law of falling bodies, and Bonaventura Cavalieri, who in 1635 published a landmark book on infinitesimal methods, which he used to lay some of the foundations of calculus. The story includes a number of other contributors to this new mathematics, some of whom were Jesuits who struggled to reconcile their ideas with official doctrine. In part two, the key innovator is John Wallis, a founding member of those early Royal Society meetings. Alexander suggests that the society’s scholarly ideal of rational pluralism provided an alternative vision of a stable state to Hobbes’ all-powerful sovereign (or “Leviathan”).

To help the reader engage with the debate, accessible explanations of some of the mathematics are included (about 30 pages in all) – a creditable and useful addition, although I felt that occasionally 17th-century terminology was not defined clearly enough for a modern readership. There is also a rather loose use of the terms “infinitesimal” and “indivisible”. But this is a popular book, not a scholarly one, and Alexander pulls off the impressive feat of putting a subtle mathematical concept centre stage in a ripping historical narrative that I found even more satisfying in part two.

Perhaps it is because here there are just two key players, Hobbes and his nemesis Wallis. Wallis was England’s leading mathematician in the years before Isaac Newton astonished the world with his formalisation of the algorithms of calculus and his revolutionary theory of gravity. The tale of Hobbes and Wallis, the geometer and the infinitesimalist, is told in the context of their well-documented (and decidedly uncivil) intellectual and political “war” over infinitesimals.

I was initially uneasy about the adversarial approach Alexander takes in part one, pitting the Jesuit hierarchy against the infinitesimalists. For the purposes of a gripping narrative, transitional figures such as Nicole Oresme are left out, and the fact that most mathematicians, not only Jesuits, favoured geometry is glossed over. In part two, too, the narrative requires winners and losers, and here geometry is the loser. But Alexander does not mention, for example, that Newton himself was so concerned by the novelty and lack of rigour in his new calculus that he chose to present his theory of gravity mostly in terms of Euclidean geometry.

Nevertheless, this is a complex story told with skill and verve, and overall Alexander does an excellent job of presenting both sides of the debate. I particularly liked his treatment of the way innovators such as Wallis highlighted the difference between proving known facts rigorously and developing new knowledge with more heuristic methods. He thereby shows that mathematics is a far more flexible and radical tool than many lay readers realise.

There is much in this fascinating book, and it makes an interesting case for its intriguing conclusion: Renaissance Italy had led the world in art, science and mathematics, but it was England where “the face of modernity” emerged, with unprecedented political and religious freedoms reflected in the freedom and flexibility offered by this new kind of mathematics.

The Author

As a child, Amir Alexander recalls, “I loved reading, but I was not a particularly dedicated student. In eighth grade, I was almost expelled from school for shooting an orange peel at a teacher. I certainly deserved it, and I think the only thing that saved me was the fact that my grandfather was on the school’s board. I think it’s entirely appropriate that Bonnie, my wife of 23 years, is a high school teacher.”

Alexander, adjunct associate professor in the department of history at the University of California, Los Angeles, was born in the small Israeli town of Rehovot and grew up in Jerusalem. “In Israel you are not just surrounded by history, but you are dimly conscious of actually living history, adding to a saga that has been ongoing for thousands of years. I’ve no doubt that my habit of viewing everything historically - even mathematics! - comes from my Israeli roots.”

He observes: “Jerusalem, where I grew up, and Los Angeles, where I live today, are polar opposites. Los Angeles has a dynamism that I love, a feeling that nothing is here to stay, and therefore anything is possible. Even most of the buildings here have a transitory feel to them, like they’re just one step above a movie set, here today and gone tomorrow. The rest of the buildings I think actually are movie sets. But in Jerusalem no one ever moves on. Abraham passed through there 4,000 years ago and everyone still remembers it like it happened yesterday. Not only that, but everyone has an opinion about it, is still fighting about it, and no one will give an inch. That makes for a very interesting place, but not an easy one. What I miss most about Jerusalem is its depth, the layer upon layer of contested memories and meanings. But it is a relief to live in superficial Los Angeles. Jerusalem could certainly use a dose of LA forgetfulness.”

Asked to sum up the Alexander household, he replies: “Until last year we were four humans (Bonnie, myself, our two children Jordan and Ella), two dogs, two cats. But Jordan is now in college, and Ella will follow suit in a couple of months, so our numbers are dwindling. We may have to get more animals.”

His previous book Duel at Dawn: Heroes, Martyrs, and the Rise of Modern Mathematics, also a Book of the Week in our pages, considered the figure of the mathematician as a tragic Romantic hero. Asked if he has ever felt a twinge of this self-perception, Alexander says: “My training is in history as well as mathematics, so I am not really an exemplar of a pure mathematician. But of course I identified with Evariste Galois when I wrote about him. What writer or academic doesn’t feel sometimes like their manifest brilliance goes unacknowledged by a hard-hearted world?

“Fortunately most of us don’t go to Galois’ extremes, but simply move on. Strangely, perhaps, I also find Thomas Hobbes to be a captivating figure. His political prescriptions were, to be sure, repulsive, but unlike the flexible and adaptable Wallis he remained true to his views to the end, despite being shunned and pilloried. He had the courage of his convictions, and I find I can relate to that.”

Asked about his mathematics peers’ fascination with the past, he says: “Some mathematicians are entirely uninterested in history. Mathematical truths, after all, are eternal, and live in their own timeless universe, so they think history is irrelevant. Other mathematicians are interested in history to the extent that it records the achievements of great mathematicians of the past and preserves the tradition of the field. But increasingly I find that mathematicians are also interested in history because it connects mathematics to broader culture, to politics, art, literature, religion. That is the kind of history that I write, and I have been extremely gratified by the reactions I have received from working mathematicians.”

Are we right to find the controversy over “infinitesimals” incomprehensible from our 21st-century perspective?

Alexander observes: “Infinitesimals have lost their ideological meaning since the 17th century, but other scientific issues have taken their place. Whether the Earth is warming, for example, is a scientific question, but in the US it has become the focus of a broad ideological fight between liberals and conservatives. To some extent that is also true of the fight over genetically modified foods, which is likewise a scientific question that carries a political charge. In other words, the fields of study may be different and the political issues have certainly changed, but then as now science serves as a focal point for some of our most fundamental cultural debates.”

Of Infinitesimal’s protagonists, he says: “I deeply relate to Hobbes’ bullheaded intellectual stubbornness. But I wish I had at least a whiff of Wallis’ practical flexibility.” Karen Shook


Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World
By Amir Alexander
Oneworld, 368pp, £20.00
ISBN 9781780745329
Published 3 July 2014
