What the Buddha Knew About Mathematics But Never Told You

By Joe Pagano

«Form is emptiness, emptiness is form.» ---Heart Sutra

At the heart of Buddhism is the concept that stillness reigns supreme, that by virtue of stilling the mind and emptying it of its contents, one can become enlightened. What is mind-startling about this premise is the truth that rings loudly from its core. What we shall see here through a mathematical example is how the Buddha was indeed right: form does arise from emptiness and emptiness is actually form. It is all in the perception.

The famous mathematician John von Neumann created a method, known as the von Neumann hierarchy, of generating the set of Natural numbers {1, 2, 3,...} from...essentially nothing. From this example, we see how form arises out of emptiness, reinforcing the precept of the Heart Sutra quoted above. To carry out this example, all we need is some basic knowledge of set theory.

A set is a group of objects. Thus the set A = {3, 5, 7, 9} comprises the odd numbers from 3 through 9 inclusive. We can also talk about the set {}, which is called the empty set. The empty set is the set which contains no elements; this set is sheer «emptiness.» Now sets can contain sets as elements. For example, take the set A above and now add the element {12} which is the set which contains the element 12. Form the set B = {3, 5, 7, 9, {12}}. The set B contains five elements, one of which is itself a set.

When we talk about the union of two sets, we simply merge the elements of each set into one, not repeating any common elements. Thus if C = {1, 2, 3} and D = {3, 4, 5} then C union D, or C U D, in which the symbol «U» means union, is the set E = {1, 2, 3, 4, 5}. Now the example that follows is going to show how we create something from nothing, or form from emptiness. We have the following steps:
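These basic set operations can be tried directly in Python, whose built-in set type behaves just as described (the variable names below simply mirror the example):

```python
# The sets C and D from the example above, and their union E.
C = {1, 2, 3}
D = {3, 4, 5}

E = C | D  # union: merge the elements, common elements appear only once
print(E)   # {1, 2, 3, 4, 5}

empty = set()      # the empty set {}
print(len(empty))  # 0 -- it contains no elements
```

Note that the common element 3 appears only once in the union, exactly as the text describes.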

Step 0: {} Empty Set

Step 1: { {} } Set containing the empty set

Step 2: { {}, { {} } } Set containing the previous two sets

Step 3: { {}, { {} }, { {}, { {} } } } Set containing the previous three sets

Iterating in this way, we create a sequence in which each new set is formed by taking the previous set and adding that previous set itself as a new element: S becomes S U {S}, the union of S with the set containing S. Now at Step 1, we have a set with one element; at Step 2, we have a set with two elements, and so on. In this way, we can generate the set of Natural numbers {1, 2, 3,...}.
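The steps above can be sketched in a few lines of Python. Python's mutable sets cannot contain other sets, so this sketch uses frozenset to allow nesting:

```python
# Sketch of the von Neumann construction described above.
empty = frozenset()  # Step 0: {}

def successor(s):
    """Each new set contains everything in the previous set,
    plus the previous set itself: S -> S U {S}."""
    return s | frozenset({s})

steps = [empty]
for _ in range(4):
    steps.append(successor(steps[-1]))

# At Step n the set has exactly n elements -- that size is the
# natural number the set represents.
for n, s in enumerate(steps):
    print(n, len(s))
assert all(len(s) == n for n, s in enumerate(steps))
```

Each iteration adds exactly one new element (the previous set itself), so counting elements recovers 0, 1, 2, 3,... from pure emptiness.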

Thus from emptiness, namely the empty set, which contains nothing, we have created form, that is, the Natural numbers, which contain an infinity of numbers. The only difference is how we look at the two sets: our perspective. In mathematics, there is a very special word for what we have just done between the sequence generated by the empty set and the Natural numbers. We say the two sets are isomorphic, which basically means that there is a perfect one-to-one correspondence between their elements, so they differ only in the way we perceive them. This is much the same as the isomorphism that would exist between the numbers 1, 2, 3... in English and the same numbers called by another name in a foreign language such as French or German, or even Macedonian, if you will.

Thus emptiness is form, depending on our perspective of course; and form, the Natural numbers, is emptiness, again depending on our perspective. Indeed the Buddha did know something about mathematics that he never told us; but now even we know some of his ancient secrets. Now maybe if we apply them to everyday living, everybody will be a lot better off.

Music and Mathematics — There Are Many Connections

By Joe Pagano

If you thought music was not a mathematical language, then think again. In fact, music and mathematics are very much intertwined, so much so that I guess you could say one could not live without the other. Here we examine a relationship that clearly demonstrates the strength of this tie. Let the music begin.

For those with a rudimentary knowledge of music, the diatonic scale is something quite familiar. To understand why certain pairs of notes sound good together and others do not, you need to look into the sinusoidal wave patterns and the physics of frequencies. The sine wave is one of the most basic wave patterns in mathematics and is depicted by smoothly alternating crest-trough regularity. Many physical and real-world phenomena can be explained by this basic wave pattern, including many of the fundamental tonic properties of music. Certain musical notes sound well together (musically this is called harmony or consonance) because their sinusoidal wave patterns reinforce each other at select intervals.
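The reinforcement described above can be checked numerically. In this sketch, two sine waves with frequencies in the ratio 3:2 (a perfect fifth) are combined; the particular frequencies 220 Hz and 330 Hz are chosen purely for illustration. Their sum repeats exactly once per common fundamental period (1/110 of a second), which is the mathematical signature of consonance:

```python
import math

# Two tones a perfect fifth apart: frequency ratio 3:2.
f1, f2 = 220.0, 330.0        # illustrative frequencies, in Hertz
period = 1.0 / 110.0         # common period: 110 Hz divides both 220 and 330

def signal(t):
    """The combined waveform of the two sine waves at time t (seconds)."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

# The combined waveform repeats after one common period, at any sample time.
for k in range(5):
    t = 0.00123 * (k + 1)  # arbitrary sample times
    assert abs(signal(t) - signal(t + period)) < 1e-9
print("waveform repeats every 1/110 s")
```

A dissonant pair, whose frequency ratio is not a simple fraction, has no such short common period, so its combined waveform never settles into a quickly repeating pattern.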

If you play the piano, then how each of the different notes sounds to you is dependent on how your instrument is tuned. There are different ways to tune instruments and these methods depend on mathematical principles. These tunings are based on multiples of frequencies applied to a given note, and as such, these multiples determine whether groups of notes sound well together, in which case we say such notes are in harmony, or poorly together, in which case we say such notes are out of harmony or dissonant.

Where these multiples come from depends on criteria set by the instrument maker, and today there are certain standards that these fabricators follow. Yet criteria notwithstanding, the multiples are inherently mathematical. For example, in more advanced mathematics, students study series of numbers. A series is simply a pattern of numbers determined by some rule. One famous series is the harmonic series. This comprises the reciprocals of the whole numbers, that is, 1/1, 1/2, 1/3, 1/4... The harmonic series serves as one set of criteria for certain tunings, one notably called Pythagorean intonation.

In Pythagorean intonation, notes are tuned according to the «rule of the perfect fifth.» A perfect fifth comprises the «musical distance» between two notes, such as C and G. Again without trying to turn this article into a treatise on musical theory, the notes from C up to G are C#, D, D#, E, F, F#, and G. The «distance» between each of these notes is called a half-step. Thus a perfect fifth comprises 7 half-steps: C-C#, C#-D, D-D#, D#-E, E-F, F-F#, and F#-G. When we number the notes in a musical harmonic series, the numbers ascribed to the C note and the G note will always be in the ratio 2:3. Thus the frequencies of these notes will be tuned so that their ratio corresponds to 2:3. That is, the C-note frequency will be 2/3 the G-note frequency; or, vice versa, the G-note frequency will be 3/2 the C-note frequency, in which frequency is measured in cycles per second, or Hertz.

Now, continuing by tuning according to perfect fifths, the fifth above G is D. Applying the perfect-fifth ratio, the D note will be tuned to a frequency which is 3/2 the G frequency; or, looking at this from below, the G note is 2/3 the frequency of the D note. We can continue in like manner until we complete what is called the Circle of Fifths, bringing us back to a C note by applying successive ratios of 3/2 to the previous note in the cycle. This takes twelve steps, and when complete, the frequency of the second C, the higher-octave C note, should be exactly twice the frequency of the lower C note; this is a requirement of all octaves. However, this does not happen by applying the ratio 3/2: twelve fifths overshoot the seven exact doublings needed to land on a true octave by a small amount, a discrepancy known as the Pythagorean comma.
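The failure of the circle to close can be verified with exact fraction arithmetic. Twelve fifths (each a factor of 3/2) should equal seven octaves (a factor of 2^7 = 128), but the two quantities differ by the Pythagorean comma, a ratio of 531441/524288, or about 1.01364:

```python
from fractions import Fraction

# Twelve perfect fifths versus seven octaves, computed exactly.
twelve_fifths = Fraction(3, 2) ** 12  # 531441/4096
seven_octaves = Fraction(2, 1) ** 7   # 128

comma = twelve_fifths / seven_octaves  # the "Pythagorean comma"
print(comma, float(comma))             # 531441/524288, about 1.01364

assert twelve_fifths != seven_octaves  # the Circle of Fifths does not close
```

Because 3^12 is odd and 2^19 is even, no chain of pure 3/2 fifths can ever land exactly on a power of two; the mismatch is unavoidable, not a matter of careless tuning.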

Musicians have rectified this problem by resorting to none other than the field of irrational numbers. Recall that those numbers cannot be expressed as fractions; their decimal representations, like those of the number pi or the square root of two, neither end nor repeat. Thus, as a result of the failure of the Pythagorean tuning method to produce perfect octaves, tuning methods have been developed to obviate this situation. One is called «equal temperament» tuning, and this is the standard method for most practical applications. Believe it or not, this tuning method incorporates rational powers of the number two. That is correct: fractional powers of the number two. So if you thought you were learning rational exponents for nothing in algebra class, here is one example of where such a topic is used in real life.

The way equal temperament tuning works is as follows: each note throughout the octave has its frequency multiplied by successive twelfth roots of two to get to the next higher note. That is, if we start with the standard A note, which vibrates at 440 Hertz, then to get to A#, we multiply 440 by 2^(1/12). Since the twelfth root of two is equal to 1.05946 to five decimal places, A# would be tuned to 440*1.05946, or 466.16 Hertz. The tuning continues with the next note, B, obtained by taking 2^(2/12)*440. Note that we increment the numerator of the exponent by 1 each time, obtaining powers of 2 which are 1/12, 2/12, 3/12, etc.
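Equal temperament is short enough to compute in a few lines. This sketch generates frequencies from the standard A440 by successive twelfth roots of two:

```python
# Equal-temperament tuning: each half-step multiplies the frequency
# by the twelfth root of two, so twelve half-steps exactly double it.
A4 = 440.0  # standard concert-pitch A, in Hertz

def note(n):
    """Frequency of the note n half-steps above A4."""
    return A4 * 2 ** (n / 12)

print(round(note(1), 2))  # A#: about 466.16 Hz
print(round(note(2), 2))  # B
print(note(12))           # the octave A: exactly 880.0 Hz
```

Because the twelve equal factors multiply out to exactly 2, the octave closes perfectly, which is precisely the property the Pythagorean chain of fifths lacks.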

What is nice about this method is its exactness, unlike the inexactness of the Pythagorean intonation method discussed earlier. Thus when we arrive at the octave note, the next A above the standard A, which should vibrate at twice the frequency of the original 440 Hertz A, we get A octave = 440*2^(12/12) which is 440*2 = 880 Hertz, as it should be---exactly. As we stated earlier, when tuning by the Pythagorean method, this does not happen because of the repetitive use of the ratio 3/2, and therefore accommodations must be made to bring in line the inexactness of this approach. These accommodations result in perceptible dissonances between certain notes and in certain keys.

This tuning exercise demonstrates that mathematics and music are well intertwined, and indeed one could say that these two disciplines are inseparable. Music is truly mathematical and mathematics is, well, yes, musical. Since many people think of musical talent coming from the «creative» types and mathematics ability coming from the «nerdy» or non-creative types, this article in some part helps disabuse these same people of this notion. Yet the question remains: if two such ostensibly different fields as music and mathematics are happily married, how many other fields out there, which at first seem to have nothing to do with mathematics, are just as intricately linked to this most fascinating subject? Meditate on that for a while.

Advanced Mathematics — Can Someone Please Help Me!?

By Joe Pagano

For those adventurous souls out there who try to plumb the depths of more advanced mathematics, I certainly give you credit for your efforts. Studying advanced mathematics can be very humbling, to say the least. You feel good about yourself because you think you are smart, and then you read something on advanced mathematics and you realize how little you understand. At least that is the way I usually feel. The sad thing is, though, it may not be that you and I are not bright enough, but that the teachers and writers of this particular subject fail terribly at what they do. Thus the cry should be, «Can someone please help me!»

Because I love a challenge, I decided to make my major mathematics in college. This was by no means my strongest subject. English and foreign languages certainly came very easily to me and could have been slam-dunk choices as majors. I probably would have suffered a lot less than I did from studying things like advanced calculus, real and complex analysis, mathematical statistics and set theory. This study was further compounded by professors who, for the most part, failed to elucidate the subject matter. Consequently, I got my degree, with good if not perfect grades, and am certainly proud of my accomplishment. The irony in all this is that after all that suffering, I now relish the subject and have written extensively on many facets of this discipline.

What I have found in my study of mathematics, particularly advanced mathematics, is that there are so few good teachers of it. When I was a graduate student — yes I actually decided to punish myself more by studying this subject at the graduate level — I remember sitting in my complex analysis course, listening to my Indian professor go off on tangents about exotic realms of this subject. What he was talking about, I can hardly say. I know I would catch an idea here or there, but none of what he said had any relevance, and I just sat there for the most part and pretended to understand. After all, nobody wanted to look foolish.

Now that I am a bit older, and hopefully wiser, I realize the foolhardiness of it all. The purpose of going to school and attending lectures or classes is to ask questions and learn. Material should be presented in a way that students willing to put in the time and effort can understand — at least at a superficial level. What I found from most of the lectures I attended and most of the textbooks I used is that I understood very little — if anything. One could say that I could not discern the forest from the trees; but the real truth is that I could not discern even one tree from another. How sad.

Unfortunately, things have not changed since those days in the 1980s. When I try to plumb the depths of advanced mathematics, I encounter the same outdated, stale methods of pedagogy that just do not serve. Why can’t anyone produce books on advanced mathematics, or even articles on the subject, so that willing learners like me or you could understand? Certainly the people who understand this subject are smart enough to be able to do this, no? Then again, there might be an agenda: that of not letting too many into this select circle of prominent mystique. This is much like the master karate instructor’s inner circle of disciples. These masters are not too quick to teach their secret methods, which took a lifetime to acquire and understand, to some neophyte, until that person has proven his loyalty, and even sanity. Indeed you would not want such killer techniques in the wrong person’s hands.

Such guarded dissemination of enlightenment might serve a purpose in the martial arts, yet I argue that in a discipline such as advanced mathematics it should be freely available. Even with my background and experience, I find it enormously frustrating that I cannot teach myself to master the theory of, let us say, partial differential equations, because there is not one book that tries to teach this subject without quickly throwing the student into the forest without a roadmap. Yes, I know that the subject is comprehensive and depends on other branches of mathematics, and that if the author were to break everything down, the book might have to be three thousand pages. Yet the alternative is that the student learns very little, if anything, and the realm of advanced mathematics remains touched by only a select few. That leaves out many potentially bright students who, before being daunted into quitting the pursuit, might have made great contributions to the subject and even the world. After all, mathematics is the language of the universe, and a comprehensive understanding of this subject can lead to all kinds of useful applications.

Thus I scream, «Can anyone please help!» I want to be able to learn Einsteinian mathematics and all about tensors. Could someone please break this down so that a person with my intelligence might glimpse this awesome domain? Alas, no one answers; and therefore, I have to be content to trudge through such readings with the labors of childbirth. But I stay hopeful that one day this might change. Anyone out there?

Mathematician Uses Topology To Study Abstract Spaces, Solve Problems

ScienceDaily (Aug. 16, 2010)

Studying complex systems, such as the movement of robots on a factory floor, the motion of air over a wing, or the effectiveness of a security network, can present huge challenges. Mathematician Robert Ghrist at the University of Illinois at Urbana-Champaign is developing advanced mathematical tools to simplify such tasks.

Ghrist uses a branch of mathematics called topology to study abstract spaces that possess many dimensions and solve problems that can’t be visualized normally. He will describe his technique in an invited talk at the International Congress of Mathematicians, to be held Aug. 23-30 in Madrid, Spain.

Ghrist, who also is a researcher at the university’s Coordinated Science Laboratory, takes a complex physical system — such as robots moving around a factory floor — and replaces it with an abstract space that has a specific geometric representation.

«To keep track of one robot, for example, we monitor its x and y coordinates in two-dimensional space,» Ghrist said. «Each additional robot requires two more pieces of information, or dimensions. So keeping track of three robots requires six dimensions. The problem is, we can’t visualize things that have six dimensions.»

Mathematicians nevertheless have spent the last 100 years developing tools for figuring out what abstract spaces of many dimensions look like.

«We use algebra and calculus to break these abstract spaces into pieces, figure out what the pieces look like, then put them back together and get a global picture of what the physical system is really doing,» Ghrist said.

Ghrist’s mathematical technique works on highly complex systems, such as roving sensor networks for security systems. Consisting of large numbers of stationary and mobile sensors, the networks must remain free of dead zones and security breaches.

Keeping track of the location and status of each sensor would be extremely difficult, Ghrist said. «Using topological tools, however, we can more easily stitch together information from the sensors to find and fill any holes in the network and guarantee that the system is safe and secure.»

While it may seem counterintuitive to initially translate such tasks into problems involving geometry, algebra or calculus, Ghrist said, doing so ultimately produces a result that goes back to the physical system.

«That’s what applied mathematics has to offer,» Ghrist said. «As systems become increasingly complex, topological tools will become more and more relevant.»

Funding was provided by the National Science Foundation and the Defense Advanced Research Projects Agency.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

Mathematical Problem Solved After More Than 50 Years: Chern Numbers Of Algebraic Varieties

ScienceDaily (June 11, 2009)

A problem at the interface of two mathematical areas, topology and algebraic geometry, that was formulated by Friedrich Hirzebruch, had resisted all attempts at a solution for more than 50 years. The problem concerns the relationship between different mathematical structures. Professor Dieter Kotschick, a mathematician at the Ludwig-Maximilians-Universität (LMU) in Munich, has now achieved a breakthrough.

As reported in the online edition of the journal Proceedings of the National Academy of Sciences (PNAS), Kotschick has solved Hirzebruch’s problem. Topology studies flexible properties of geometric objects that are unchanged by continuous deformations. In algebraic geometry some of these objects are endowed with additional structure derived from an explicit description by polynomial equations. Hirzebruch’s problem concerns the relation between flexible and rigid properties of geometric objects. (PNAS, 9 June 2009)

Viewed topologically, the surface of a ball is always a sphere, even when the ball is very deformed: precise geometric shapes are not important in topology. This is different in algebraic geometry, where objects like the sphere are described by polynomial equations. Professor Dieter Kotschick has recently achieved a breakthrough at the interface of topology and algebraic geometry.

«I was able to solve a problem that was formulated more than 50 years ago by the influential German mathematician Friedrich Hirzebruch», says Kotschick.

«Hirzebruch’s problem concerns the relation between different mathematical structures. These are so-called algebraic varieties, which are the zero-sets of polynomials, and certain geometric objects called manifolds.» Manifolds are smooth topological spaces that can be considered in arbitrary dimensions. The spherical surface of a ball is just a two-dimensional manifold.

In mathematical terminology Hirzebruch’s problem was to determine which Chern numbers are topological invariants of complex-algebraic varieties. «I have proved that — except for the obvious ones — no Chern numbers are topologically invariant», says Kotschick. «Thus, these numbers do indeed depend on the algebraic structure of a variety, and are not determined by coarser, so-called topological properties. Put differently: The underlying manifold of an algebraic variety does not determine these invariants.»

The solution to Hirzebruch’s problem is announced in the current issue of PNAS Early Edition, the online version of PNAS.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

Mathematicians Find New Solutions To An Ancient Puzzle

ScienceDaily (Mar. 18, 2008)

Many people find complex math puzzling, including some mathematicians.

Recently, mathematician Daniel J. Madden and retired physicist, Lee W. Jacobi, found solutions to a puzzle that has been around for centuries. Jacobi and Madden have found a way to generate an infinite number of solutions for a puzzle known as ‘Euler’s Equation of degree four.’

The equation is part of a branch of mathematics called number theory. Number theory deals with the properties of numbers and the way they relate to each other. It is filled with problems that can be likened to numerical puzzles.

«It’s like a puzzle: can you find four fourth powers that add up to another fourth power? Trying to answer that question is difficult because it is highly unlikely that someone would sit down and accidentally stumble upon something like that,» said Madden, an associate professor of mathematics at The University of Arizona in Tucson.

Equations are puzzles that need certain solutions «plugged into them» in order to create a statement that obeys the rules of logic.

For example, think of the equation x + 2 = 4. Plugging «3» into the equation doesn’t work, but if x = 2, then the equation is correct.

In the mathematical puzzle that Jacobi and Madden worked on, the problem was finding variables that satisfy a Diophantine equation of order four. These equations are so named because they were first studied by the ancient Greek mathematician Diophantus, known as ‘the father of algebra.’

In its most simple version, the puzzle they were trying to solve is the equation: a to the fourth power, plus b to the fourth power, plus c to the fourth power, plus d to the fourth power, equals (a + b + c + d) to the fourth power.

That equation, expressed mathematically, is: a^4 + b^4 + c^4 + d^4 = (a + b + c + d)^4. Madden and Jacobi found a way to find the numbers to substitute, or plug in, for the a’s, b’s, c’s and d’s in the equation. All the solutions they have found so far are very large numbers.
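Because Python integers have arbitrary precision, the equation can be checked exactly even for enormous values. The sample solution below, which uses one negative value, is one reported in the earlier literature (attributed to Brudno); the all-positive solutions mentioned in the article are far larger:

```python
# Exact check of the Jacobi-Madden equation a^4 + b^4 + c^4 + d^4 = (a+b+c+d)^4.
def is_solution(a, b, c, d):
    return a**4 + b**4 + c**4 + d**4 == (a + b + c + d) ** 4

# A solution reported in the literature (attributed to Brudno), with one
# negative term; here a + b + c + d = 5491.
print(is_solution(955, 1770, -2634, 5400))  # True
print(is_solution(1, 2, 3, 4))              # False
```

Checking a candidate is trivial; the hard mathematics, which the article describes, is generating candidates in the first place.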

In 1772, Euler, one of the greatest mathematicians of all time, hypothesized that to satisfy equations with higher powers, there would need to be as many variables as that power. For example, a fourth order equation would need four different variables, like the equation above.

Euler’s hypothesis was disproved in 1987 by a Harvard graduate student named Noam Elkies. He found a case where only three variables were needed. Elkies solved the equation a^4 + b^4 + c^4 = e^4, which shows that only three fourth powers are needed to sum to a fourth power.
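Counterexamples of this kind are easy to verify with exact integer arithmetic. The numbers below are the smallest known counterexample to Euler’s hypothesis for fourth powers, found by Roger Frye shortly after Elkies’ work:

```python
# Frye's counterexample to Euler's hypothesis: three fourth powers
# summing to a fourth power, checked with exact integer arithmetic.
a, b, c, e = 95800, 217519, 414560, 422481
print(a**4 + b**4 + c**4 == e**4)  # True: three variables suffice
```

That a three-variable solution exists at all is the whole point: Euler had conjectured that at least four fourth powers would always be required.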

Inspired by the accomplishments of the 22-year-old graduate student, Jacobi began working on mathematics as a hobby after he retired from the defense industry in 1989.

Fortunately, this was not the first time he had dealt with Diophantine equations. He was familiar with them because they are commonly used in physics for calculations relating to string theory.

Jacobi started searching for new solutions to the puzzle using methods he found in some number theory texts and academic papers.

He used those resources and Mathematica, a computer program used for mathematical manipulations.

Jacobi initially found a solution for which each of the variables was 200 digits long. This solution was different from the other 88 previously known solutions to this puzzle, so he knew he had found something important.

Jacobi then showed the results to Madden. But Jacobi initially miscopied a variable from his Mathematica computer program, and so the results he showed Madden were incorrect.

«The solution was wrong, but in an interesting way. It was close enough to make me want to see where the error occurred,» Madden said.

When they discovered that the solution was invalid only because of Jacobi’s transcription error, they began collaborating to find more solutions.

Madden and Jacobi used elliptic curves to generate new solutions. Each solution contains a seed for creating more solutions, which is much more efficient than previous methods used.

In the past, people found new solutions by using computers to analyze huge amounts of data. That required a lot of computing time and power as the magnitude of the numbers soared.

Now people can generate as many solutions as they wish. There are an infinite number of solutions to this problem, and Madden and Jacobi have found a way to find them all.

«Modern number theory allowed me to see with more clarity the implications of his (Jacobi’s) calculations,» Madden said.

«It was a nice collaboration,» Jacobi said. «I have learned a certain amount of new things about number theory; how to think in terms of number theory, although sometimes I can be stubbornly algebraic.»

The article, «On a^4 + b^4 + c^4 + d^4 = (a + b + c + d)^4», is published in the March issue of The American Mathematical Monthly.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

Mathematicians Solve 140-Year-Old Boltzmann Equation

ScienceDaily (May 14, 2010)

Pennsylvania mathematicians have found solutions to a 140-year-old, seven-dimensional equation that were not known to exist for more than a century, despite the equation’s widespread use in modeling the behavior of gases.

The study, part historical journey but mostly mathematical proof, was conducted by Philip T. Gressman and Robert M. Strain of Penn’s Department of Mathematics. The solution of the Boltzmann equation problem was published in the Proceedings of the National Academy of Sciences. Solutions of this equation, beyond current computational capabilities, describe the location of gas molecules probabilistically and predict the likelihood that a molecule will reside at any particular location and have a particular momentum at any given time in the future. During the late 1860s and 1870s, physicists James Clerk Maxwell and Ludwig Boltzmann developed this equation to predict how gaseous material distributes itself in space and how it responds to changes in things like temperature, pressure or velocity.

The equation maintains a significant place in history because it modeled gaseous behavior well, and the predictions it led to were backed up by experimentation. Despite its notable leap of faith — the assumption that gases are made of molecules, a theory yet to achieve public acceptance at the time — it was fully adopted. It provided important predictions, the most fundamental and intuitively natural of which was that gases naturally settle to an equilibrium state when they are not subject to any sort of external influence. One of the most important physical insights of the equation is that even when a gas appears to be macroscopically at rest, there is a frenzy of molecular activity in the form of collisions. While these collisions cannot be observed, they account for gas temperature.

Gressman and Strain were intrigued by this mysterious equation that illustrated the behavior of the physical world, yet whose discoverers could only find solutions for gases in perfect equilibrium.

Using modern mathematical techniques from the fields of partial differential equations and harmonic analysis — many of which were developed during the last five to 50 years, and thus relatively new to mathematics — the Penn mathematicians proved the global existence of classical solutions and rapid time decay to equilibrium for the Boltzmann equation with long-range interactions. Global existence and rapid decay imply that the equation correctly predicts that the solutions will continue to fit the system’s behavior and not undergo any mathematical catastrophes such as a breakdown of the equation’s integrity caused by a minor change within the equation. Rapid decay to equilibrium means that the effect of an initial small disturbance in the gas is short-lived and quickly becomes unnoticeable.

«Even if one assumes that the equation has solutions, it is possible that the solutions lead to a catastrophe, like how it’s theoretically possible to balance a needle on its tip, but in practice even infinitesimal imperfections cause it to fall over,» Gressman said.

The study also provides a new understanding of the effects due to grazing collisions, when neighboring molecules just glance off one another rather than collide head on. These glancing collisions turn out to be the dominant type of collision for the full Boltzmann equation with long-range interactions.

«We consider it remarkable that this equation, derived by Boltzmann and Maxwell in 1867 and 1872, grants a fundamental example where a range of geometric fractional derivatives occur in a physical model of the natural world,» Strain said. «The mathematical techniques needed to study such phenomena were only developed in the modern era.»

The study was funded by the National Science Foundation.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

Quasicrystals: Somewhere Between Order And Disorder ScienceDaily (May 29, 2008)

Professionally speaking, things in David Damanik’s world don’t line up — and he can prove it.

In new research that’s available online and slated for publication in July’s issue of the Journal of the American Mathematical Society, Damanik and colleague Serguei Tcheremchantsev offer a key proof in the study of quasicrystals, crystal-like materials whose atoms don’t line up in neat, unbroken rows like the atoms found in crystals. Damanik’s latest work focused on a popular model mathematicians use to study quasicrystals. The research, which was 10 years in the making, proves that quasicrystals in the model are not electrical conductors and sheds light on a little-understood corner of materials science.

«This is the first time this has been done, and given the broad academic interest in quasicrystals we expect the paper to generate significant interest,» said Damanik, associate professor of mathematics at Rice University.

Until 1982, quasicrystals weren’t just undiscovered, they were believed to be physically impossible. To understand why, it helps to understand how atoms line up in a crystal.

In literature dating to the early 19th Century, mineralogists showed that all crystals — like diamond or quartz — were made up of one neat row of atoms after another, each row repeating at regular intervals. Mathematicians and physical chemists later showed that the periodic, repeating structure of crystals could only come in a few fixed arrangements. This was elegantly revealed in the early 20th Century when crystals were bombarded with X-rays. The crystals diffracted the X-rays into patterns of spots that had «rotational symmetry,» meaning that the patterns looked exactly the same when they were spun partway around. For example, a square has four-fold rotational symmetry because it looks exactly the same four times as it is spun a full turn.
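
The square example above can be checked numerically. This is a toy sketch (not connected to the research): represent a shape's vertices as complex numbers, rotate them about the origin, and test whether the vertex set maps to itself.

```python
import cmath

def invariant_under_rotation(points, angle, ndigits=6):
    """Check whether a finite point set (given as complex numbers) maps
    to itself when rotated about the origin by the given angle.
    Coordinates are rounded so floating-point noise doesn't matter."""
    def snap(p):
        return (round(p.real, ndigits), round(p.imag, ndigits))
    rot = cmath.exp(1j * angle)
    return {snap(p * rot) for p in points} == {snap(p) for p in points}

# Unit-circle vertices of a square and a regular pentagon.
square = [cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2)) for k in range(4)]
pentagon = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]

assert invariant_under_rotation(square, cmath.pi / 2)        # quarter turn: four-fold symmetry
assert invariant_under_rotation(pentagon, 2 * cmath.pi / 5)  # fifth of a turn: five-fold symmetry
assert not invariant_under_rotation(pentagon, cmath.pi / 2)  # a pentagon has no four-fold symmetry
```

The last assertion is exactly the property that made five-fold diffraction patterns so surprising: five-fold symmetry is incompatible with the quarter- and sixth-turn symmetries that periodic lattices allow.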

X-ray crystallography reinforced what physicists, chemists and mathematicians already knew about crystals; they could yield patterns of spots with only two-, three-, four- or six-fold rotational symmetry. The physics of their lattices permitted nothing else.

All was well until 1982, when physicist Dan Shechtman did an X-ray diffraction study on a new alloy he’d made at what is now the National Institute of Standards and Technology. The pattern of spots looked like those made by crystals, but it had five-fold rotational symmetry, like a pentagon — something that was clearly forbidden for a periodic structure.

The alloy — which was quickly dubbed quasicrystal — attracted intense scientific interest. Dozens of quasicrystals have since been made. Though none of their structures have yet been solved, scientists and mathematicians like Damanik are keen to understand them.

«Mathematically speaking, quasicrystals fall into a middle ground between order and disorder,» Damanik said. «Over the past decade, it’s become increasingly clear that the mathematical tools that people have used for decades to predict the electronic properties of materials will not work in this middle ground.»

For example, Schrödinger’s equation, which debuted in 1925, describes how electrons behave in any material. But for decades, mathematicians have been able to use just one of the equation’s terms — the Schrödinger operator — to find out whether a material will be a conductor or an insulator. In the past five years, mathematicians have proven that that method won’t work for quasicrystals. The upshot of this is that it is much more complex to actually run the numbers and find out how electrons behave inside a quasicrystal.
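
The «popular model» studied in this line of work is widely reported to be the Fibonacci Hamiltonian, a Schrödinger operator whose potential follows the quasiperiodic Fibonacci sequence. Generating that sequence by its substitution rule takes a few lines (an illustrative sketch, not the authors' code):

```python
def fibonacci_word(n):
    """Apply the Fibonacci substitution (a -> ab, b -> a) n times,
    starting from "a". The result is highly ordered but never periodic --
    the same middle ground between order and disorder that quasicrystals
    occupy."""
    w = "a"
    for _ in range(n):
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w

w = fibonacci_word(10)
assert len(w) == 144       # lengths follow the Fibonacci numbers: 1, 2, 3, 5, 8, ...
assert "bb" not in w       # structure: no two b's are ever adjacent
assert w != w[:2] * (len(w) // 2)  # not periodic with period 2
```

In the model, each letter becomes a value of the potential at one site of a one-dimensional lattice, and the question is how electrons governed by the resulting Schrödinger operator spread over time.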

Supercomputers have been used to actually crunch the numbers, but Damanik said computer simulations are no substitute for a mathematical proof.

«Computer simulations have shown that electrons move through quasicrystals — albeit very slowly — in a way that’s markedly different from the way they move through a conductor,» Damanik said. «But computers never show you the whole picture. They only approximate a solution for a finite time. In our paper, we proved that electrons always behave this way in the quasicrystal model we studied, not just now or tomorrow but for all time.»

Damanik’s research was funded by the National Science Foundation.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

Europe’s Leading Scientists Urge Creation of a CERN for Mathematics ScienceDaily (Dec. 1, 2010)

Europe needs an Institute of Industrial Mathematics to tighten the link between maths and industry as an enabler of innovation — putting maths at the heart of Europe’s innovation, according to the European Science Foundation in a report launched in Brussels at the «Maths and Industry» Conference.

Such an Institute would help not only to overcome the fragmentation that currently characterises mathematics research in Europe, but also to act as a magnet for excellence and innovation, much as the European Organization for Nuclear Research (CERN) gave the world both the World Wide Web and the Large Hadron Collider to investigate the big bang.

«It may often be invisible in the final product or to the final consumer, but mathematics is the fundamental ingredient to many innovations that help us respond to a rapidly changing economic landscape,» said Andreas Schuppert from Bayer Technology Services GmbH who contributed to the report.

«Creating a ‘CERN for mathematics’, the European Institute of Mathematics for Innovation, will promote industry and academic collaboration, stimulating innovation, growth and job creation. As such, it could help achieve Europe’s 2020 objectives of sustainable growth through a knowledge-based economy.»

The Institute would be designed as a vast network of world-class mathematicians, making them easily accessible for collaboration with companies seeking novel solutions. The institute would connect hubs of academic excellence, as well as resources such as databases and libraries. As a centralised resource, this Institute would be particularly useful for small and medium enterprises (SMEs) that often struggle to tap into the continent-wide pool of industrial mathematicians, but which represent Europe’s main driver of innovation and a major source of job creation.

«Bringing together mathematicians in one organisation will make it easier for companies to access the expertise they need, while at the same time facilitating access to funds by eliminating overlap at national level,» said Mario Primicerio from Universita degli Studi di Firenze, Italy, who chaired the ESF report, «Maths and Industry.» In addition to setting up the Institute, the report also recommends allocating EU funds for a specific industrial and applied mathematics project under the upcoming 8th R&D Framework Programme, the next EU-wide funding initiative for science. In addition, it advises the implementation of an industrial policy that includes an EU-wide ‘Small Business Act in Mathematics’ which would fund spin-off companies based on mathematics, as is already the case in Germany and Sweden.

The ESF’s Forward Look report «Maths and Industry» results from a partnership with the European Mathematical Society and close collaboration by academia, industry and policy makers. It is available online at

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

European Group Aims To Make Maths Teaching More Rigorous And Inspiring ScienceDaily (Aug. 29, 2008)

An attempt to re-energise mathematics teaching in Europe is being made in a new project examining a range of factors thought to influence achievement. Mathematics teaching is as vital as ever, both in support of key fields such as life sciences, alternative energy development and information technology, and through its unique ability to develop widely applicable problem-solving skills. It should be highly relevant not just for the elite few but for all people in education.

The new project was discussed at a recent workshop organised by the European Science Foundation (ESF), which brought together experts in different areas of mathematics education. «It was agreed that we would begin the process of developing a comparative project, involving between fifteen and twenty European countries, to examine the interrelatedness of the mathematics-related beliefs of teachers and students, teacher practices and student cognition,» said Paul Andrews, the workshop’s convenor and Senior Lecturer in Education at the Faculty of Education of Cambridge University in the UK.

Andrews pointed out that the solution to the mathematics teaching conundrum was complex and multi-dimensional, just like many of the great problems in the field itself. On the one hand, enthusiasm needed to be balanced with rigour in order to motivate students while also teaching skills and knowledge worth acquiring. «To assume that the development of enthusiasm is sufficient to guarantee achievement would be naïve as there are countries in which students have little enthusiasm for mathematics but achieve relatively highly and, of course, vice versa,» pointed out Andrews.

There has also been a tension between immediate vocational objectives in response to the needs of employers, and the higher ideal of teaching logical thinking and deeper mathematical problem solving. European countries have to date resolved this tension in different ways, with the UK being at the vocational end of the spectrum, while Hungary has taken the purest approach with its tradition of mathematical rigour.

«One of the problems of English education is that students experience a fragmented and procedural conception of mathematics, due to underlying notions of vocationalism, and so rarely come to see the subject as a coherent body of concepts and relationships which can be worth studying for the intrinsic satisfaction it can yield,» said Andrews. «The situation in countries like Hungary is almost the complete opposite — all students experience an integrated and intellectually worthwhile mathematics taught by teachers with little explicit interest in the applications of the subject but an enthusiasm for logical thinking and the problem-solving opportunities that mathematics can provide.»

But the issue of mathematics teaching is not just about content, but also attitude, on the part both of pupils and teachers. One significant finding to emerge from the workshop was that the common practice of dividing pupils into sets defined by ability, which, in the UK context, is applied more for mathematics teaching than any other subject, can be counterproductive, even for the most able pupils. «Where teachers do not necessarily expect to teach students in ability groups but expect to work with the full ability range, achievement is generally higher across the board,» said Andrews.

Another finding that perhaps contradicted common wisdom was that students often progressed best when taught to approach problem solving collectively instead of in isolation. This runs counter to the perception, manifested regularly in UK schools, that mathematics is a lonely endeavour pursued by individuals in competition rather than cooperation.

It remains to be seen whether the ESF project will lead to a radical shake-up in mathematics teaching comparable to the introduction of the so-called «new maths» in the 1980s in place of the previous, more arithmetically based approach. More likely it will lead to a rebalancing of teaching, bringing greater consistency and rigour to deliver a more well-rounded curriculum.

The workshop «The Relevance of Mathematics Education» was held in Cambridge, UK in January 2008. Each year, ESF supports approximately 50 Exploratory Workshops across all scientific domains. These small, interactive group sessions are aimed at opening up new directions in research to explore new fields with a potential impact on developments in science.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.

US Needs Better-Trained Math Teachers to Compete Globally, Study Finds ScienceDaily (Apr. 19, 2010)

Math teachers in the United States need better training if the nation’s K-12 students are going to compete globally, according to international research released by a Michigan State University scholar.

William Schmidt, University Distinguished Professor of education, found that prospective U.S. elementary and middle-school math teachers are not as prepared as those from other countries. And this, combined with a weak U.S. math curriculum, produces similarly weak student achievement, he said.

The Teacher Education Study in Mathematics, or TEDS-M, is by far the largest of its kind, surveying more than 3,300 future teachers in the United States and 23,244 future teachers across 16 countries. Schmidt led the U.S. portion of the project.

«We must break the cycle in which we find ourselves,» said Schmidt, who presented his findings at a Washington news conference.

«A weak K-12 mathematics curriculum in the U.S., taught by teachers with an inadequate mathematics background, produces high school graduates who are at a disadvantage. When some of these students become future teachers and are not given a strong background in mathematics during teacher preparation, the cycle continues.»

More rigorous K-12 math standards, which are part of the Common Core State Standards Initiative, will be completed soon by the National Governors Association and the Council of Chief State School Officers. The standards are expected to be adopted by a majority of the 48 states considering them.

But the new standards will require U.S. math teachers to be even more knowledgeable, Schmidt said. His study found that while nearly all future middle-school teachers in the top-achieving countries took courses in linear algebra and basic calculus, only about half of U.S. future teachers took the fundamental courses.

To attack the problem, Schmidt laid out a three-fold approach:

  1. Recruit teachers with stronger math backgrounds.
  2. Implement more rigorous state certification requirements for math teachers.
  3. Require more demanding math courses in all teacher preparation programs.

Schmidt, who studied the performance of 81 public and private colleges and universities, said the real issue is how teachers are prepared — the courses they take and the experiences they have. The quality and type of programs in the United States varies widely by state and by institution.

TEDS-M revealed that differences in middle school teacher certification programs, for example, have a great impact on math-teaching capabilities. Future teachers prepared in programs focused on secondary schools (grades 6 and above) had significantly higher mathematics knowledge scores than those prepared in other types of programs, including those focused only on middle school teacher preparation.

«Teacher preparation curricula are critical, not only for our future teachers, but also for the children they will be teaching,» Schmidt said. «The problem isn’t simply the amount of formal math education our future teachers receive. It also involves studying the theoretical and practical aspects both of teaching mathematics and teaching in general.»

TEDS-M expands on previous research to include elementary teachers and draw comparisons across more countries. The international headquarters for the project also is MSU, with Maria Teresa Tatto, John R. Schwille and Sharon Senk serving as principal investigators in collaboration with the International Association for the Evaluation of Educational Achievement.

The U.S. study is sponsored by Boeing Co., Carnegie Corp. of New York, the Bill & Melinda Gates Foundation and the GE Foundation.

The full report, Breaking the Cycle: An International Comparison of U.S. Mathematics Teacher Preparation, is available at

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

Few Gender Differences in Math Abilities, Worldwide Study Finds ScienceDaily (Jan. 6, 2010)

Girls around the world are not worse at math than boys, even though boys are more confident in their math abilities, according to a new analysis of international research. The analysis also found that girls from countries where gender equity is more prevalent are more likely to perform well on mathematics assessment tests.

«Stereotypes about female inferiority in mathematics are a distinct contrast to the actual scientific data,» said Nicole Else-Quest, PhD, a psychology professor at Villanova University, and lead author of the meta-analysis. «These results show that girls will perform at the same level as the boys when they are given the right educational tools and have visible female role models excelling in mathematics.»

The results are reported in the latest issue of Psychological Bulletin, published by the American Psychological Association. The finding that girls around the world appear to have less confidence in their mathematical abilities could help explain why young girls are less likely than boys to pursue careers in science, technology, engineering and mathematics.

Else-Quest and her fellow researchers examined data from the Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students ages 14-16 from 69 countries. Both studies’ results were released in 2003, and not all countries participated in both assessments. The TIMSS focuses on basic math knowledge, while the PISA test assesses students’ ability to use their math skills in the real world. The researchers felt these two tests offered a good sampling of students’ math abilities.

While these measures tested different math abilities, there were only small gender differences for each, on average. However, from nation to nation, the size of the gender differences varied a great deal.

The two studies also assessed students’ level of confidence in their math abilities and how important they felt it was to do well in math in order to have a successful career. Despite overall similarities in math skills, boys felt significantly more confident in their abilities than girls did and were more motivated to do well.

The researchers also looked at different measures of women’s education, political involvement, welfare and income in each country. There was some variability among countries when it came to gender differences in math and how it related to the status and welfare of women. For example, if certain countries had more women in research-related positions, the girls in that country were more likely to do better in math and feel more confident of those skills.

«This meta-analysis shows us that while the quality of instruction and curriculum affects children’s learning, so does the value that schools, teachers and families place on girls’ learning math. Girls are likely to perform as well as boys when they are encouraged to succeed,» said Else-Quest.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.

Computers Effective In Verifying Mathematical Proofs ScienceDaily (Nov. 7, 2008)

New computer tools have the potential to revolutionize the practice of mathematics by providing far more reliable proofs of mathematical results than have ever been possible in the history of humankind. These computer tools, based on the notion of «formal proof», have in recent years been used to provide nearly infallible proofs of many important results in mathematics.

When mathematicians prove theorems in the traditional way, they present the argument in narrative form. They assume previous results, they gloss over details they think other experts will understand, they take shortcuts to make the presentation less tedious, they appeal to intuition, etc. The correctness of the arguments is determined by the scrutiny of other mathematicians, in informal discussions, in lectures, or in journals. It is sobering to realize that the means by which mathematical results are verified is essentially a social process and is thus fallible. When it comes to central, well known results, the proofs are especially well checked and errors are eventually found.

Nevertheless the history of mathematics has many stories about false results that went undetected for a long time. In addition, in some recent cases, important theorems have required such long and complicated proofs that very few people have the time, energy, and necessary background to check through them. And some proofs contain extensive computer code to, for example, check a lot of cases that would be infeasible to check by hand. How can mathematicians be sure that such proofs are reliable?

To get around these problems, computer scientists and mathematicians began to develop the field of formal proof. A formal proof is one in which every logical inference has been checked all the way back to the fundamental axioms of mathematics. Mathematicians do not usually write formal proofs because such proofs are so long and cumbersome that it would be impossible to have them checked by human mathematicians. But now one can get «computer proof assistants» to do the checking. In recent years, computer proof assistants have become powerful enough to handle difficult proofs.

Only in simple cases can one feed a statement to a computer proof assistant and expect it to hand over a proof. Rather, the mathematician has to know how to prove the statement; the proof then is greatly expanded into the special syntax of formal proof, with every step spelled out, and it is this formal proof that the computer checks. It is also possible to let computers loose to explore mathematics on their own, and in some cases they have come up with interesting conjectures that went unnoticed by mathematicians. We may be close to seeing how computers, rather than humans, would do mathematics.
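
For a feel of what «every logical inference checked back to the axioms» looks like in practice, here is a one-line theorem in Lean 4, a proof assistant in this family (an illustrative fragment chosen here; the Notices articles themselves discuss systems such as HOL Light and Coq):

```lean
-- Commutativity of addition on the natural numbers.
-- The proof term appeals to the library lemma Nat.add_comm;
-- the Lean kernel re-checks every inference down to the axioms.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

A human-readable narrative proof of the same fact would say «by induction on a»; the formal version forces every induction step and rewriting step to be machine-checked.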

Four new articles in the December 2008 issue of Notices of the American Mathematical Society explore the current state of the art of formal proof and provide practical guidance for using computer proof assistants. If the use of these assistants becomes widespread, they could deeply change mathematics as it is currently practiced. One long-term dream is to have formal proofs of all of the central theorems in mathematics. Thomas Hales, one of the authors writing in the Notices, says that such a collection of proofs would be akin to «the sequencing of the mathematical genome».

The four articles are:

  1. Formal Proof, by Thomas Hales, University of Pittsburgh
  2. Formal Proof---Theory and Practice, by John Harrison, Intel Corporation
  3. Formal proof---The Four Colour Theorem, by Georges Gonthier, Microsoft Research, Cambridge, England
  4. Formal Proof---Getting Started, by Freek Wiedijk, Radboud University, Nijmegen, Netherlands

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

Free Software Brings Affordability, Transparency To Mathematics ScienceDaily (Dec. 7, 2010)

Until recently, a student solving a calculus problem, a physicist modeling a galaxy or a mathematician studying a complex equation had to use powerful computer programs that cost hundreds or thousands of dollars. But an open-source tool based at the University of Washington won first prize in the scientific software division of Les Trophées du Libre, an international competition for free software.

The tool, called Sage, faced initial skepticism from the mathematics and education communities.

«I’ve had a surprisingly large number of people tell me that something like Sage couldn’t be done — that it just wasn’t possible,» said William Stein, associate professor of mathematics and lead developer of the tool. «I’m hearing that less now.»

Open-source software, which distributes programs and all their underlying code for free, is increasingly used in everyday applications. Firefox, Linux and Open Office are well-known examples.

But until recently, nobody had done the same for the everyday tools used in mathematics. Over the past three years, more than a hundred mathematicians from around the world have worked with Stein to build a user-friendly tool that combines powerful number-crunching with new features, such as collaborative online worksheets.

«A lot of people said: ‘Wow, I’ve been waiting forever for something like this,’» Stein said. «People are excited about it.»

Sage can take the place of commercial software commonly used in mathematics education, in large government laboratories and in math-intensive research. The program can do anything from mapping a 12-dimensional object to calculating rainfall patterns under global warming.

The idea began in 2005, when Stein was an assistant professor at Harvard University.

«For about 10 years I had been really unhappy with the state of mathematical software,» Stein said. The big commercial programs — Matlab, Maple, Mathematica and Magma — charge license fees. The Mathematica Web page, for example, lists a regular license at $2,495. For another program, a collaborator in Colombia was quoted about $550, a special «Third World» discount price, to buy a license to use a particular tool, Stein said.

The frustrations weren’t only financial. Commercial programs don’t always reveal how the calculations are performed. This means that other mathematicians can’t scrutinize the code to see how a computer-based calculation arrived at a result.

«Not being able to check the code of a computer-based calculation is like not publishing proofs for a mathematical theorem,» Stein said. «It’s ludicrous.»

So Stein began a year and a half of frenzied work in which he created the Sage prototype, combining decades’ worth of more specialized free mathematical software and filling in the gaps.

«I worked really, really hard on this, and didn’t sleep much for a year. Now I’ve relaxed. There are a lot more people helping out,» Stein said. «It seems like everyone in the field has heard of Sage now, which is surreal.»

Among those helping is a team of five UW undergraduate students who work part-time on the code — everything from writing new formulas to improving the Google-ish graphical interface. (Even when Sage runs on an individual computer, not over the Internet, you use a Web browser to enter commands.)

Regular meetings, named «Sage days,» bring together volunteer developers. The fourth Sage day, held in Seattle in June, drew about 30 people. The sixth Sage day was held last month in Bristol, England. Forty-one people attended talks and many participated in coding sprints. Dozens of other people around the world contribute through Sage’s online discussion boards.

Last month, Stein and David Joyner, a mathematics professor at the U.S. Naval Academy in Annapolis, Md., published a letter in the Notices of the American Mathematical Society in which they argue that the mathematical community should support and develop open-source software.

Soon Sage will face off against the major software companies in physical space. In early January, thousands of mathematicians will gather in San Diego for the joint meeting of the American Mathematical Society and the Mathematical Association of America. In the exhibition hall, Stein has paid the first-timers’ rate of $400 to rent a booth alongside those of the major mathematical software companies, where he and students will hand out DVDs with copies of Sage.

«I think we can be better than the commercial versions,» he said. «I really want it to be the best mathematical software in the world.»

Sage research and student support is made possible by grants from the National Science Foundation. The Sage meetings are supported by various mathematical associations. The project has also received several thousand dollars in private donations.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.

‘Combinatorial’ Approach Squashes Software Bugs Faster, Cheaper ScienceDaily (Dec. 17, 2010)

A team of computer scientists and mathematicians from the National Institute of Standards and Technology (NIST) and the University of Texas, Arlington is developing an open-source tool that catches programming errors by using an emerging approach called «combinatorial testing.» The NIST-Texas tool, described at a recent conference,* could save software developers significant time and money when it is released next year.

Studying software crashes in a variety of applications from medical devices to Web browsers, NIST researchers obtained hard evidence to support long-held conventional wisdom: most software failures result from simple events rather than complex ones.** Even for Web browsers containing hundreds of different variables, most failures were caused by interactions between just two variables. Nonetheless, in the applications that the researchers studied, additional failures could result from interactions of up to six variables.

Based on that insight, the NIST-Texas team went beyond the popular practice of «pairwise testing,» or exploring interactions between only two variables at a time, and designed a method for efficiently testing different combinations of settings in up to six interacting variables at a time. Their technique resembles combinatorial chemistry, in which scientists screen multiple chemical compounds simultaneously rather than one at a time.

For example, imagine a word-processing program that features 10 different text formats. Certain combinations of settings (such as turning on superscript, subscript and italics at the same time) could cause the software to crash. Trying all possible combinations of the 10 effects together would require 1,024 tests. However, testing all possible combinations of any three effects requires just 13 judiciously chosen tests, because each single test fixes all 10 variables at once and therefore exercises 120 combinations of «triples» simultaneously.
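
The arithmetic in that example can be checked directly. This is a counting sketch only; the actual 13-test covering array comes from the researchers' construction and is not reproduced here.

```python
from itertools import combinations, product
from math import comb

# 10 binary formatting effects: exhaustive testing needs 2^10 runs.
assert 2 ** 10 == 1024

# There are C(10, 3) = 120 ways to pick a triple of variables, and each
# triple has 2^3 = 8 possible settings, giving 960 coverage targets.
assert comb(10, 3) == 120
assert comb(10, 3) * 2 ** 3 == 960

def three_way_coverage(tests):
    """Return the set of (variable-triple, value-assignment) targets
    that a list of test runs (tuples of 0/1 settings) exercises."""
    covered = set()
    for t in tests:
        for idx in combinations(range(len(t)), 3):
            covered.add((idx, tuple(t[i] for i in idx)))
    return covered

# One single test run already covers 120 of the 960 targets at once.
assert len(three_way_coverage([(0,) * 10])) == 120

# The exhaustive 1,024-run suite covers all 960 targets; a good covering
# array achieves the same with far fewer runs.
assert len(three_way_coverage(list(product((0, 1), repeat=10)))) == 960
```

This is why combinatorial testing scales: a well-chosen handful of runs can hit every triple of settings, even though the full Cartesian product of settings is vastly larger.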

The new tool generates tests for exploring interactions among the settings of multiple variables in a given program. Compared to most earlier combinatorial testing software, which has typically focused on testing interactions between just two variables, the tool excels at quickly generating efficient tests for 6-way interactions or more.

The researchers plan to release the tool early next year as open-source code. They currently are inviting developers to participate in beta testing of the tool before release. This new approach for finding bugs to squash may be particularly useful for increasing the reliability of e-commerce Web sites, which often contain many interacting variables, as well as industrial process controls, such as for robotic assembly lines of high-definition televisions, which contain many interacting software-controlled elements that regularly turn on and off.

* Y. Lei, R. Kacker, D. R. Kuhn, V. Okun and J. Lawrence, IPOG: A general strategy for t-way software testing. IEEE International Conference on Engineering of Computer-Based Systems March 26-29, 2007, pp 549-556, Tucson AZ, USA.

** D.R. Kuhn, D.R. Wallace and A.J. Gallo, Jr. Software fault interactions and implications for software testing. IEEE Trans. on Software Engineering, June 2004 (Vol. 30, No. 6) pp. 418-421.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

New Tool Improves Productivity, Quality When Translating Software ScienceDaily (Feb. 24, 2009)

Researchers at North Carolina State University have developed a software tool that will make it faster and easier to translate video games and other software into different languages for use in various international markets — addressing a hurdle to internationalization that has traditionally been time-consuming and subject to error.

If you want to sell or promote a software application in a foreign market, you have to translate it into a new language. That used to mean programmers would have to pore over thousands of lines of code to identify every string that appears on a user's screen. This could be incredibly time-consuming and, even then, there was always room for human error. Programmers have to be certain they are not replacing code that governs how the program actually works.

But now researchers from NC State and Peking University have created a software tool that identifies those pieces of software code that are designed to appear on-screen and communicate with the user (such as menu items), as opposed to those pieces of code that govern how the program actually functions. Once those "on-screen" pieces of code have been identified, the programmers can translate them into the relevant language — for example, translating the tabs on a toolbar from English into Chinese.
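As a toy illustration of the general idea — not the NC State/Peking tool itself — the sketch below walks a program's syntax tree and flags only the string literals passed to a hypothetical set of UI functions (`add_menu_item`, `show_message`, `set_label` are made-up names) as "need-to-translate", while skipping strings used internally:

```python
import ast

# Hypothetical UI functions whose string arguments appear on-screen.
UI_CALLS = {"set_label", "show_message", "add_menu_item"}

SOURCE = '''
add_menu_item("File")
show_message("Unsaved changes")
open("config.ini")
log.write("debug: init done")
'''

def need_to_translate(source):
    """Flag string literals passed to known UI calls; ignore the rest."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Handle both plain calls (Name) and method calls (Attribute).
            name = getattr(node.func, "id", getattr(node.func, "attr", None))
            if name in UI_CALLS:
                for arg in node.args:
                    if isinstance(arg, ast.Constant) and isinstance(arg.value, str):
                        found.append(arg.value)
    return found

print(need_to_translate(SOURCE))  # ['File', 'Unsaved changes']
```

Note that "config.ini" and the log message are correctly left alone — exactly the distinction between user-facing strings and code that governs how the program functions.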

"This is a significant advance because it saves programmers from hunting through tens of thousands of lines of code," says Dr. Tao Xie, an assistant professor of computer science at NC State. "Productivity goes up because finding the 'need-to-translate' strings can be done more quickly. The quality also goes up, because there is less chance that a programmer will make a mistake and overlook relevant code."

As an example of how the software tool can identify errors and oversights made by human programmers, Xie says, the researchers found 17 translation omission errors when they applied the software tool to a popular online video game. The errors were then corrected.

The research was supported in part by the National Science Foundation and the U.S. Army Research Office. The research will be presented in May at the International Conference on Software Engineering in Vancouver, Canada, and will also be published in the proceedings of the conference.


Software Development: Speeding From Sketchpad To Smooth Code — ScienceDaily (Aug. 11, 2009)

Creating error-free software remains time-consuming and labour-intensive. A major European research effort has developed a system that speeds software development from the drawing board to high-quality, platform-independent code.

According to Piotr Habela, technical coordinator of the VIDE (for VIsualize all moDel drivEn programming) project, software developers have many good ideas about how to visualise, develop, debug and modify software, plus standards to guide them. The problem is that the design and development process has always been fragmented.

He explains that methods for visualising or flowcharting how a program should work do not lead directly to computer code.

Software written in one programming language may be difficult to translate into another. No matter how carefully programmers work, complex software almost always includes errors that are difficult to diagnose and fix. Because of the lack of precise links between a program's features and the software that implements them, updating or modifying a program often turns out to be time-consuming and costly.

"What we attempted that was quite distinct," says Habela, "was to make the development of executable software a single process, a single toolchain, rather than a sequence of separate activities."

It took two-and-a-half years of intensive effort by VIDE's ten academic and industrial research partners, funded by the European Union, but the result is a software design and development toolkit that promises to make creating well-functioning, easily modified software — for example for small businesses — significantly smoother, faster, and less expensive.

Model driven architecture

A key part of VIDE’s approach was to build on the idea of Model Driven Architecture, a programming methodology developed by an international consortium, the Object Management Group.

The idea is that each stage of software development requires its own formal model. The VIDE team realised that by creating and linking those models in a rigorous way, they could automate many of the steps of software development.

A software developer might start by working with a domain expert — for example a business owner — to determine what a new program needs to do. Those inputs, outputs and procedures would be formalised in what is called a computation independent model (CIM), a model that does not specify what kinds of computation might be used to carry it out — it lays out what the program will do rather than how it will do it.

"Models are usually considered just documents," says Habela. "Our goal was to make the models serve as production tools."

In the case of VIDE, much of that modeling is visual, in the form of flowcharts and other diagrams that are intuitive enough for the domain expert to understand, but which are sufficiently formalised to serve as the inputs to the next stage of the software development process.

To carry out these first modeling steps, the researchers created a domain analysis tool and a programming language called VCLL, for VIDE CIM Level Language.

From CIM to PIM to program

Once they have produced a formal CIM of the program they want to implement, it’s time to move a step closer to a functioning program by translating it into a platform independent model, or PIM.

For the VIDE team, a PIM is a model that specifies precisely what a program needs to do, but at an abstract level that does not depend on any particular programming language.
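To make the idea concrete, here is a minimal, hypothetical sketch — not VIDE's actual toolchain, which builds on UML — of what "model to code" means: a PIM-like description of one business entity, and a generator that emits a Java class skeleton from it. The entity, attribute, and operation names are invented for illustration.

```python
# A toy platform-independent model: it says WHAT exists (an Invoice with
# two attributes and two operations), not HOW any language implements it.
pim = {
    "entity": "Invoice",
    "attributes": [("customer", "String"), ("total", "double")],
    "operations": ["approve", "cancel"],
}

def to_java(model):
    """Translate the abstract model into a Java class skeleton."""
    lines = ['public class {} {{'.format(model["entity"])]
    for name, typ in model["attributes"]:
        lines.append("    private {} {};".format(typ, name))
    for op in model["operations"]:
        lines.append("    public void {}() {{ /* generated stub */ }}".format(op))
    lines.append("}")
    return "\n".join(lines)

print(to_java(pim))
```

A second generator targeting, say, C# could consume the same `pim` unchanged — which is the point of keeping the model independent of any particular programming language.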

The researchers developed several software tools to produce a usable, error-free PIM. These include an executable modelling language and environment, a defect-detection tool, and finally a program that translates their final model into an executable Java program.

Luckily, the researchers did not have to build their system from the ground up. They were able to rely to a large extent on a pre-existing modeling language called UML, for Unified Modeling Language. UML provides a systematic way to visualise and describe a software system.

"We now have a kind of prototyping capability built into the development process," says Habela. "You can design a model, specify its behavioural details, run it with sample data to see how it behaves, and then check with the domain expert to see if it is in fact the behaviour they expected."

Several of the consortium members are implementing the VIDE toolkit in specific areas, for example web services, database management, and a variety of business processes.

Habela cautions that reaching VIDE’s goal of smoothly automating the entire software design and development process requires more work. Because of the broad scope of the project and the fundamental changes they are making, they are not yet ready to deploy the complete system.

However, he says, they have gone a long way towards clearing up "the muddy path from requirements to design."

The VIDE project received funding from the ICT strand of the EU’s Sixth Framework Programme for research.


The Electronic Library of Mathematics: Mathematical Journals

Math Reviews: AMS database of more than a million reviews and abstracts of math publications (requires subscription).

2000 Mathematics Subject Classification — Classification used by Math Reviews (MathSciNet), Zentralblatt MATH (ZMATH), and other math bibliographies.

Fachinformationszentrum Karlsruhe (FIZ) — http://www.zentralblattmath.org

Information services for academic and industrial research. Access to MATH Database (Zentralblatt für Mathematik), International Reviews on Mathematical Education (Zentralblatt für Didaktik der Mathematik), CompuScience and index to Springer Lecture Notes in CS.

Front for the Mathematics ArXiv — User-friendly front end for the mathematics section of the arXiv (free).

The Electronic Library of Mathematics

The Electronic Library of Mathematics is offered by the European Mathematical Society (EMS) in cooperation with FIZ Karlsruhe / Zentralblatt MATH. Online journals, article collections, and monographs in electronic form: access is free.

MR Lookup — Bibliographic data from Math Reviews without the reviews (free).

EMANI — Electronic Mathematical Archiving Network Initiative: to support the long-term electronic preservation of mathematical publications.

EMIS: Mathematical Journals — Full text from about 40 research journals (free).

ArXiv — Open e-print archive with over 100,000 articles in physics, 10,000 in mathematics, and 1,000 in computer science. (Formerly called xxx; free)

Electronic Research Archive for Mathematics (The Jahrbuch Project) — A digital archive of the most important mathematical publications of the period 1868-1942 and a database based on the "Jahrbuch über die Fortschritte der Mathematik".

e-Math for Africa — Coordinating efforts to make an African consortium for e-journals and databases. Includes 300+ links to open access math e-journals.

Mathematics Journals Ranked by Impact — ISI impact factors up to 2000.

Journal Copyright Policy Gossip — Information from authors and publishers about the copyright policies of certain journals, maintained by William Stein.

Committee on Electronic Information Communication (CEIC) — Part of the International Mathematical Union. Its mandate is to coordinate world-wide efforts to publish mathematical papers, journals, and other scholarly work in web form. It may recommend standards on issues related to electronic communication.

Mathematical Errata — A growing collection of mathematical errata, corrections, and addenda to textbooks, monographs, and journal articles.

MathDiss International — A project sponsored by the German Research Foundation to set up an international online full-text archive for mathematical theses in LaTeX format.


Nick Richardson is an Adjunct Instructor in the Criminal Justice department at St. Ambrose University. His research interests include criminology, social psychology, and substance abuse and addiction.

Christopher Barnum is an Associate Professor of Criminal Justice and Sociology, and Director of the Master of Criminal Justice program at St. Ambrose University. His research interests include criminology and social psychology.