Category Archives: Maths

The maths of life and death: our secret weapon in the fight against disease

Christian Yates, University of Bath

Maths is the language of science. It crops up everywhere from physics to engineering and chemistry – aiding us in understanding the origins of the universe and building bridges that won’t collapse in the wind. Perhaps a little more surprisingly, maths is also increasingly integral to biology.

For hundreds of years mathematics has been used, to great effect, to model relatively simple physical systems. Newton’s universal law of gravitation is a fine example. Relatively simple observations led to a rule which, with great accuracy, describes the motion of celestial bodies billions of miles away. Traditionally, biology has been viewed as too complicated to submit to such mathematical treatment.

Biological systems are often classified as “complex”. Complexity in this sense means that, due to the complicated interaction of many sub-components, biological systems can exhibit what we call emergent behaviour – the system as a whole demonstrates properties which the individual components acting alone cannot. This biocomplexity has often been mistaken for vitalism, the misconception that biological processes are dependent on a force or principle distinct from the laws of physics and chemistry. Consequently, it has been assumed that complex biological systems are not amenable to mathematical treatment.

How did the leopard get his spots?
VarnaK / Shutterstock.com

There were some early dissenters. Famous computer scientist and World War II code-breaker Alan Turing was one of the first to suggest that biological phenomena could be studied and understood mathematically. In 1952 he proposed a pair of beautiful mathematical equations which provide an explanation for how pigmentation patterns might form on animals’ coats.

Examples of different types of Turing patterns.
© Kit Yates

Not only was his work beautiful, it was also counter-intuitive – the sort of work that only a brilliant mind like Turing’s could ever have dreamed up. Even more of a pity, then, that he was so poorly treated under the draconian anti-homosexuality laws of the time. After a course of “corrective” hormone treatment, he killed himself just two years later.

An emerging field

Since then, the field of mathematical biology has exploded. In recent years, increasingly detailed experimental procedures have led to a huge influx of biological data available to scientists. This data is being used to generate hypotheses about the complexity of previously abstruse biological systems. In order to test these hypotheses, they must be written down in the form of a model which can be interrogated to determine whether it correctly mimics the biological observations. Mathematics is the natural language in which to do this.

In addition, the advent of, and subsequent increase in, computational ability over the last 60 years has enabled us to suggest and then interrogate complex mathematical models of biological systems. The realisation that biological systems can be treated mathematically, coupled with the computational ability to build and investigate detailed biological models, has led to the dramatic increase in the popularity of mathematical biology.

Maths has become a vital weapon in the scientific armoury we have to tackle some of the most pressing questions in medical, biological and ecological science in the 21st century. By describing biological systems mathematically and then using the resulting models, we can gain insights that are impossible to access through experiments and verbal reasoning alone. Mathematical biology is incredibly important if we want to change biology from a descriptive into a predictive science – giving us power, for example, to avert pandemics or to alter the effects of debilitating diseases.

A new weapon

Over the last 50 years, for example, mathematical biologists have built increasingly complex computational representations of the heart’s physiology. Today, these highly sophisticated models are being used in an attempt to understand better the complicated functioning of the human heart. Computer simulations of heart function allow us to make predictions about how the heart will interact with candidate drugs, designed to improve its function, without having to undertake expensive and potentially risky clinical trials.

We use mathematical biology to study disease as well. On an individual scale, researchers have used mathematical immunology to elucidate the mechanisms by which our immune system battles viruses and have suggested potential interventions for tipping the scales in our favour. On a wider scale, mathematical biologists have proposed mechanisms that can be used to control the spread of deadly epidemics like Ebola, and to ensure the finite resources dedicated to this purpose are employed in the most efficient way possible.

Mathematical biology is even being used to inform policy. In fisheries, for example, mathematical modelling has been used to set realistic quotas that ensure we do not overfish our seas and that we protect some of our most important species.

A day’s haul.
AJP / Shutterstock.com

The increased comprehension gleaned by taking a mathematical approach can lead to better understanding of biology at a range of different scales. At the Centre for Mathematical Biology in Bath, for example, we study a number of pressing biological problems. At one end of the spectrum, we try to develop strategies for averting the devastating effects of locust plagues comprising up to a billion individuals. At the other end, we try to elucidate the mechanisms that give rise to the correct development of the embryo.

Although mathematical biology has traditionally been the domain of applied mathematicians, it is clear that mathematicians who self-classify as pure have a role to play in the mathematical biology revolution. The pure discipline of topology is being used to understand the knotty problem of DNA packing and algebraic geometry is being used to select the most appropriate model of biochemical interaction networks.

As the profile of mathematical biology continues to rise, emerging and established scientists from disciplines across the scientific spectrum will be drawn to tackle the rich range of important and novel problems that biology has to offer.

Turing’s revolutionary idea, although not fully appreciated in his time, demonstrated that there was no need to appeal to vitalism – the god in the machine – to understand biological processes. Chemical and physical laws encoded in mathematics, or “mathematical biology” as we now call it, could do just fine.

Christian Yates, Senior Lecturer in Mathematical Biology, University of Bath

This article was originally published on The Conversation. Read the original article.

There’s a mathematical formula for choosing the fastest queue

Enrico Scalas, University of Sussex and Nicos Georgiou, University of Sussex

It seems obvious. You arrive at the checkouts and see one queue is much longer than the other, so you join the shorter one. But, before long, the people in the bigger line zoom past you and you’ve barely moved towards the exit.

When it comes to queuing, the intuitive choice is often not the fastest one. Why do queues feel like they slow down as soon as you join them? And is there a way to decide beforehand which line is really the best one to join? Mathematicians have been studying these questions for years. So can they help us spend less time waiting in line?

The intuitive strategy seems to be to join the shortest queue. After all, a short queue could indicate it has an efficient server, and a long queue could imply it has an inexperienced server or customers who need a lot of time. But generally this isn’t true.

Without the right information, it could even be disadvantageous to join the shortest queue. For example, if the short queue at the supermarket has two very full trolleys and the long queue has four relatively empty baskets, many people would actually join the longer queue. If the servers are equally efficient, the important quantity here is the total number of items in the queue, not the number of customers. But if the trolleys weren’t very full and the hand baskets were, it wouldn’t be so easy to estimate and the choice wouldn’t be so clear.

This simple example introduces the concept of service time distribution. This is a random variable that measures how long it will take a customer to be served. It contains information about the average (mean) service time and about the standard deviation from the mean, which represents how the service time fluctuates depending on how long different customers need.

The other important variable is how often customers join the queue (the arrival rate). This depends on the average amount of time that passes between two consecutive customers entering the shop. The more people that arrive to use a service at a specific time, the longer the queues will be.

Never mind the queue, I picked the wrong shop.
Shutterstock

Depending on what these variables are, the shortest queue might be the best one to join – or it might not. For example, in a fish and chip shop you might have two servers both taking orders and accepting money. Then it is most often better to join the shortest queue since the time the servers’ tasks take doesn’t vary much.

Unfortunately, in practice, it’s hard to know exactly what the relevant variables are when you enter a shop. So you can still only guess what the fastest queue to join will be, or rely on tricks of human psychology, such as joining the leftmost queue because most right-handed people automatically turn right.

Did you get it right?

Once you’re in the queue, you’ll want to know whether you made the right choice. For example, is your server the fastest? It is easy to observe the actual queue length and you can try to compare it to the average. This is directly related to the mean and standard deviation of the service time via something called the Pollaczek-Khinchine formula, first established in 1930. This also uses the mean inter-arrival time between customers.
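For the curious, the mean-value form of the Pollaczek-Khinchine formula can be computed in a few lines. The sketch below assumes the textbook single-server queue with randomly (Poisson) arriving customers – the so-called M/G/1 queue – and the arrival and service figures in it are invented purely for illustration.

```python
# A minimal sketch of the Pollaczek-Khinchine mean-value formula for an M/G/1
# queue: random (Poisson) arrivals, one server, and a general service-time
# distribution. The arrival rate and service-time figures are made up.

def pk_mean_wait(arrival_rate, mean_service, sd_service):
    """Average time spent waiting in the queue, not counting your own service."""
    rho = arrival_rate * mean_service                     # server utilisation, must be < 1
    if rho >= 1:
        raise ValueError("Unstable queue: customers arrive faster than they can be served")
    second_moment = sd_service ** 2 + mean_service ** 2   # E[S^2] = Var(S) + E[S]^2
    return arrival_rate * second_moment / (2 * (1 - rho))

# One customer arrives every 3 minutes on average; service takes 2 minutes on average.
# The till with the more variable service times keeps you waiting far longer,
# even though the two tills have identical average service times.
print(pk_mean_wait(arrival_rate=1/3, mean_service=2.0, sd_service=0.5))  # ~2.1 minutes
print(pk_mean_wait(arrival_rate=1/3, mean_service=2.0, sd_service=3.0))  # 6.5 minutes
```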

Unfortunately, if you try to measure the time the first person in the queue takes to get served, you’ll likely end up feeling like you chose the wrong line. This is known as Feller’s paradox or the inspection paradox. Technically, this isn’t an actual logical paradox but it does go against our intuition. If you start measuring the time between customers when you join a queue, it is more likely that the first customer you see will take longer than average to be served. This will make you feel like you were unlucky and chose the wrong queue.

The inspection paradox works like this: suppose a bank offers two services. One service takes either zero or five minutes, with equal probability. The other service takes either ten or 20 minutes, again with equal probability. It is equally likely for a customer to choose either service and so the bank’s average service time is 8.75 minutes.

If you join the queue when a customer is in the middle of being served then their service can’t take zero minutes. They must be using either the five, ten or 20 minute service. This pushes the average time that customer will take to be served to more than 11 minutes, above the bank’s true average of 8.75 minutes. In fact, two out of three times you encounter this situation, the customer will want either the ten or 20 minute service. This will make it seem like the line is moving more slowly than it should, all because a customer is already there and you have extra information.
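The bank example can be checked with a couple of lines of arithmetic. The snippet below simply notes that a zero-minute service can never be the one you see in progress, which is exactly the extra information that skews your estimate.

```python
# The four equally likely service times from the bank example, in minutes.
services = [0, 5, 10, 20]

overall_average = sum(services) / len(services)          # 8.75 minutes
in_progress = [s for s in services if s > 0]             # a zero-minute service is never "in progress"
observed_average = sum(in_progress) / len(in_progress)   # ~11.7 minutes

print(overall_average, observed_average)
```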

So while you can use maths to try to determine the fastest queue, in the absence of accurate data – and for your own peace of mind – you’re often better just taking a gamble and not looking at the other options once you’ve made your mind up.

Enrico Scalas, Professor of Statistics and Probability, University of Sussex and Nicos Georgiou, Lecturer in Mathematics, Probability and Statistics, University of Sussex

This article was originally published on The Conversation. Read the original article.

Paradoxes of probability and other statistical strangeness

Stephen Woodcock, University of Technology Sydney

Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.


You don’t have to wait long to see a headline proclaiming that some food or behaviour is associated with either an increased or a decreased health risk, or often both. How can it be that seemingly rigorous scientific studies can produce opposite conclusions?

Nowadays, researchers can access a wealth of software packages that can readily analyse data and output the results of complex statistical tests. While these are powerful resources, they also make it possible for people without a full statistical understanding to misunderstand some of the subtleties within a dataset and to draw wildly incorrect conclusions.

Here are a few common statistical fallacies and paradoxes and how they can lead to results that are counterintuitive and, in many cases, simply wrong.


Simpson’s paradox

What is it?

This is where trends that appear within different groups disappear when data for those groups are combined. When this happens, the overall trend might even appear to be the opposite of the trends in each group.

One example of this paradox is where a treatment can be detrimental in all groups of patients, yet can appear beneficial overall once the groups are combined.

How does it happen?

This can happen when the sizes of the groups are uneven. A trial with careless (or unscrupulous) selection of the numbers of patients could conclude that a harmful treatment appears beneficial.

Example

Consider the following double blind trial of a proposed medical treatment. A group of 120 patients (split into subgroups of sizes 10, 20, 30 and 60) receive the treatment, and 120 patients (split into subgroups of corresponding sizes 60, 30, 20 and 10) receive no treatment.

The overall results make it look like the treatment was beneficial to patients, with a higher recovery rate for patients with the treatment than for those without it.

The Conversation, CC BY-ND

However, when you drill down into the various groups that made up the cohort in the study, you see that in every group of patients the recovery rate was 50% higher for patients who had no treatment.

The Conversation, CC BY-ND

But note that the size and age distribution of each group is different between those who took the treatment and those who didn’t. This is what distorts the numbers. In this case, the treatment group is disproportionately stacked with children, whose recovery rates are typically higher, with or without treatment.
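To make the arithmetic concrete, here is an illustrative reconstruction of such a trial. The recovery figures in the tables above are not reproduced in the text, so the numbers below are hypothetical, chosen only so that every subgroup favours no treatment while the pooled figures appear to favour the treatment.

```python
# Each entry is (patients, recoveries); subgroups run from oldest to youngest.
# These counts are hypothetical -- they simply reproduce the structure described
# above: 120 patients per arm, uneven subgroup sizes, and an untreated recovery
# rate 50% higher than the treated rate within every subgroup.
treated   = [(10, 1), (20, 4), (30, 12), (60, 36)]
untreated = [(60, 9), (30, 9), (20, 12), (10, 9)]

def overall_rate(groups):
    patients = sum(n for n, _ in groups)
    recovered = sum(r for _, r in groups)
    return recovered / patients

# Within every subgroup, no treatment does better...
for (n_t, r_t), (n_u, r_u) in zip(treated, untreated):
    print(f"treated {r_t / n_t:.0%} vs untreated {r_u / n_u:.0%}")

# ...yet pooling the subgroups makes the treatment look beneficial.
print(f"overall: treated {overall_rate(treated):.0%} vs untreated {overall_rate(untreated):.0%}")
```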


Base rate fallacy

What is it?

This fallacy occurs when we disregard important information when making a judgement on how likely something is.

If, for example, we hear that someone loves music, we might think it’s more likely they’re a professional musician than an accountant. However, there are many more accountants than there are professional musicians. Here we have neglected that the base rate for the number of accountants is far higher than the number of musicians, so we were unduly swayed by the information that the person likes music.

How does it happen?

The base rate fallacy occurs when the base rate for one option is substantially higher than for another.

Example

Consider testing for a rare medical condition, such as one that affects only 4% (1 in 25) of a population.

Let’s say there is a test for the condition, but it’s not perfect. If someone has the condition, the test will correctly identify them as being ill around 92% of the time. If someone doesn’t have the condition, the test will correctly identify them as being healthy 75% of the time.

So if we test a group of people, and find that over a quarter of them are diagnosed as being ill, we might expect that most of these people really do have the condition. But we’d be wrong.


In a typical sample of 300 patients, for every 11 people correctly identified as unwell, a further 72 are incorrectly identified as unwell.
The Conversation, CC BY-ND

According to our numbers above, of the 4% of patients who are ill, almost 92% will be correctly diagnosed as ill (that is, about 3.67% of the overall population). But of the 96% of patients who are not ill, 25% will be incorrectly diagnosed as ill (that’s 24% of the overall population).

What this means is that of the approximately 27.67% of the population who are diagnosed as ill, only around 3.67% actually are. So of the people who were diagnosed as ill, only around 13% (that is, 3.67%/27.67%) actually are unwell.
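The whole calculation fits in a few lines, using only the prevalence, sensitivity and specificity quoted above.

```python
# Base rate fallacy: even a reasonable test mostly produces false alarms
# when the condition itself is rare.
prevalence  = 0.04   # 1 in 25 people actually have the condition
sensitivity = 0.92   # ill people correctly identified as ill
specificity = 0.75   # healthy people correctly identified as healthy

true_positives  = prevalence * sensitivity               # ~3.67% of everyone tested
false_positives = (1 - prevalence) * (1 - specificity)   # 24% of everyone tested
diagnosed_ill   = true_positives + false_positives       # ~27.67% of everyone tested

print(f"Chance a diagnosed person is really ill: {true_positives / diagnosed_ill:.0%}")  # ~13%
```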

Worryingly, when a famous study asked general practitioners to perform a similar calculation to inform patients of the correct risks associated with mammogram results, just 15% of them did so correctly.


Will Rogers paradox

What is it?

This occurs when moving something from one group to another raises the average of both groups, even though no values actually increase.

The name comes from the American comedian Will Rogers, who joked that “when the Okies left Oklahoma and moved to California, they raised the average intelligence in both states”.

Former New Zealand Prime Minister Rob Muldoon provided a local variant on the joke in the 1980s, regarding migration from his nation into Australia.

How does it happen?

When a datapoint is reclassified from one group to another, if the point is below the average of the group it is leaving, but above the average of the one it is joining, both groups’ averages will increase.

Example

Consider the case of six patients whose life expectancies (in years) have been assessed as being 40, 50, 60, 70, 80 and 90.

The patients who have life expectancies of 40 and 50 have been diagnosed with a medical condition; the other four have not. This gives an average life expectancy within diagnosed patients of 45 years and within non-diagnosed patients of 75 years.

If an improved diagnostic tool is developed that detects the condition in the patient with the 60-year life expectancy, then the average within both groups rises by 5 years.

The Conversation, CC BY-ND
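The arithmetic behind this example is short enough to check directly:

```python
# Reclassifying the patient with a 60-year life expectancy raises the average
# of both groups, even though nobody's individual outlook changes.
def average(values):
    return sum(values) / len(values)

diagnosed, not_diagnosed = [40, 50], [60, 70, 80, 90]
print(average(diagnosed), average(not_diagnosed))   # 45.0 and 75.0

# The improved diagnostic tool moves the 60-year patient into the diagnosed group.
diagnosed, not_diagnosed = [40, 50, 60], [70, 80, 90]
print(average(diagnosed), average(not_diagnosed))   # 50.0 and 80.0
```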

Berkson’s paradox

What is it?

Berkson’s paradox can make it look like there’s an association between two independent variables when there isn’t one.

How does it happen?

This happens when we have a set with two independent variables, which means they should be entirely unrelated. But if we only look at a subset of the whole population, it can look like there is a negative trend between the two variables.

This can occur when the subset is not an unbiased sample of the whole population. It has been frequently cited in medical statistics. For example, if patients only present at a clinic with disease A, disease B or both, then even if the two diseases are independent, a negative association between them may be observed.

Example

Consider the case of a school that recruits students based on both academic and sporting ability. Assume that these two skills are totally independent of each other. That is, in the whole population, an excellent sportsperson is just as likely to be strong or weak academically as is someone who’s poor at sport.

If the school admits only students who are excellent academically, excellent at sport or excellent at both, then within this group it would appear that sporting ability is negatively correlated with academic ability.

To illustrate, assume that every potential student is ranked on both academic and sporting ability from 1 to 10. There are an equal proportion of people in each band for each skill. Knowing a person’s band in either skill does not tell you anything about their likely band in the other.

Assume now that the school only admits students who are at band 9 or 10 in at least one of the skills.

If we look at the whole population, the average academic rank of the weakest sportsperson and the best sportsperson are both equal (5.5).

However, within the set of admitted students, the average academic rank of the elite sportsperson is still that of the whole population (5.5), but the average academic rank of the weakest sportsperson is 9.5, wrongly implying a negative correlation between the two abilities.

The Conversation, CC BY-ND
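A quick simulation makes the effect easy to see. The sketch below draws the two abilities independently, applies the admission rule described above, and then compares the average academic rank of the weakest and strongest sportspeople in each set.

```python
import random

random.seed(0)

# Academic and sporting bands are drawn independently from 1 to 10.
population = [(random.randint(1, 10), random.randint(1, 10)) for _ in range(100_000)]
# The school admits anyone at band 9 or 10 in at least one of the two skills.
admitted = [(a, s) for a, s in population if a >= 9 or s >= 9]

def mean_academic(students, sport_band):
    ranks = [a for a, s in students if s == sport_band]
    return sum(ranks) / len(ranks)

# Whole population: academic ability doesn't depend on sporting ability (~5.5 for both).
print(mean_academic(population, 1), mean_academic(population, 10))
# Admitted students: the weakest sportspeople look academically brilliant (~9.5 vs ~5.5).
print(mean_academic(admitted, 1), mean_academic(admitted, 10))
```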

Multiple comparisons fallacy

What is it?

This is where unexpected trends can occur through random chance alone in a data set with a large number of variables.

How does it happen?

When looking at many variables and mining for trends, it is easy to overlook how many possible trends you are testing. For example, with 1,000 variables, there are almost half a million (1,000×999/2) potential pairs of variables that might appear correlated by pure chance alone.

While each pair is extremely unlikely to look dependent, the chances are that from the half million pairs, quite a few will look dependent.

Example

The Birthday paradox is a classic example of the multiple comparisons fallacy.

In a group of 23 people (assuming each of their birthdays is an independently chosen day of the year with all days equally likely), it is more likely than not that at least two of the group have the same birthday.

People often disbelieve this, recalling that it is rare that they meet someone who shares their own birthday. If you just pick two people, the chance they share a birthday is, of course, low (roughly 1 in 365, which is less than 0.3%).

However, with 23 people there are 253 (23×22/2) pairs of people who might have a common birthday. So by looking across the whole group you are testing to see if any one of these 253 pairings, each of which independently has a 0.3% chance of coinciding, does indeed match. These many possibilities of a pair actually make it statistically very likely for coincidental matches to arise.

For a group of as few as 40 people, it is almost nine times as likely that there is a shared birthday than not.
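The probability is easy to compute directly, by multiplying together the chances that each successive person misses all the birthdays already taken (assuming 365 equally likely days):

```python
def shared_birthday_probability(group_size):
    """Probability that at least two people in the group share a birthday."""
    p_all_different = 1.0
    for i in range(group_size):
        p_all_different *= (365 - i) / 365
    return 1 - p_all_different

print(shared_birthday_probability(23))  # ~0.507, already better than evens
print(shared_birthday_probability(40))  # ~0.891, heavily in favour of a match
```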

The probability of no shared birthdays drops as the number of people in a group increases.
The Conversation, CC BY-ND

Stephen Woodcock, Senior Lecturer in Mathematics, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

Here’s the best way to shuffle a pack of cards – with a little help from some maths

Graham Kendall, University of Nottingham

Shuffling a pack of cards isn’t as easy as you think, not if you want to truly randomise the cards. Most people will give a pack a few shuffles with the overhand or riffle methods (where the pack is split and the two halves are interweaved). But research has shown this isn’t enough to produce a sufficiently random order to make sure the card game being played is completely fair and to prevent people cheating.

As I wrote in a recent article about card counting, not having an effective shuffling mechanism can be a serious problem for casinos:

Players have used shuffle tracking, where blocks of cards are tracked so that you have some idea when they will appear. If you are given the option to cut the pack, you try and cut the pack near where you think the block of cards you are tracking is so that you can bet accordingly. A variant on this is to track aces as, if you know when one is likely to appear, you have a distinct advantage over the casino.

Card Counting and Shuffle Tracking in Blackjack.

So how can you make sure your cards are well and truly shuffled?

To work out how many ways there are of arranging a standard 52-card deck, we multiply 52 by all the numbers that come before it (52 x 51 x 50 … 3 x 2 x 1). This is referred to as “52 factorial” and is usually written as “52!” by mathematicians. The answer is so big it’s easier to write it using scientific notation as 8.0658175e+67, which means it’s a number beginning with 8, followed by 67 more digits.
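If you want to see the number for yourself, it takes a single line to compute exactly:

```python
import math

arrangements = math.factorial(52)       # the number of ways to order a 52-card deck
print(f"{arrangements:.7e}")            # 8.0658175e+67
print(len(str(arrangements)))           # 68 digits: an 8 followed by 67 more
```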

To put this into some sort of context, if you dealt one million hands of cards every second, it would take you 20 sexdecillion, or 20,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, years to deal the same number of hands as there are ways to arrange a deck of cards.

You would think that it would be easy to get a random order from that many permutations. In fact, every arrangement is, in a sense, random. Even one where the cards are ordered by suit and then rank could be considered random. It is only the interpretation we put on this order that would make most people not consider it random. This is the same as the idea that the lottery is less likely to throw up the numbers one to six, whereas in reality this combination is just as probable as any other.

In theory, you could shuffle a deck so that the cards emerged in number order (all the aces, followed by all the twos, followed by all the threes and so on), with each set of numbers in the same suit order (say spades, hearts, diamonds and clubs). Most people would not consider this random, but it is just as likely to appear as any other specific arrangement of cards (very unlikely). This is an extreme example but you could come up with an arrangement that would be seen as random when playing bridge because it offered the players no advantage, but wouldn’t be random for poker because it produced consistently strong hands.

But what would a casino consider random? Mathematicians have developed several ways of measuring how random something is. Variation distance and separation distance are two measures calculated by mathematical formulas. They have a value of 1 for a deck of cards in perfect order (sorted by numbers and suits) and lower values for more mixed arrangements. When the values are less than 0.5, the deck is considered randomly shuffled. More simply, if you can guess too many cards in a shuffled deck, then the deck is not well shuffled.

The Best (and Worst) Ways to Shuffle Cards – Numberphile.

Persi Diaconis is a mathematician who has been studying card shuffling for over 25 years. Together with Dave Bayer, he worked out that to produce a mathematically random pack, you need to use a riffle shuffle seven times if you’re using the variation distance measure, or 11 times using the separation distance. The overhand shuffle, by comparison, requires 10,000 shuffles to achieve randomness.

“The usual shuffling produces a card order that is far from random,” Diaconis has said. “Most people shuffle cards three or four times. Five times is considered excessive”.

But five is still lower than the number required for an effective shuffle. Even dealers in casinos rarely shuffle the required seven times. The situation is worse when more than one deck is used, as is the case in blackjack. If you are shuffling two decks, you should shuffle nine times and for six decks you need to shuffle twelve times.
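For readers who want to experiment, the sketch below implements one standard mathematical model of the riffle shuffle, the Gilbert–Shannon–Reeds model, which is the model Diaconis and Bayer analysed. Treat it as an illustrative approximation of a real hand shuffle rather than a definitive description.

```python
import random

def riffle(deck):
    """One Gilbert-Shannon-Reeds riffle: cut roughly in half, then interleave."""
    cut = sum(random.random() < 0.5 for _ in deck)   # binomial cut point
    left, right = deck[:cut], deck[cut:]
    shuffled = []
    while left or right:
        # Drop the next card from a half with probability proportional to its size.
        if random.random() < len(left) / (len(left) + len(right)):
            shuffled.append(left.pop(0))
        else:
            shuffled.append(right.pop(0))
    return shuffled

deck = list(range(52))
for _ in range(7):        # seven riffles: roughly where the deck starts to look random
    deck = riffle(deck)
print(deck)
```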

Shuffle like a casino dealer.

Many casinos now use automatic shuffling machines. This not only speeds up the games but also means that shuffles can be more random, as the machines can shuffle for longer than the dealers. These shuffling machines also stop issues such as card counting and card tracking.

But even these machines are not enough. In another study, Diaconis and his colleagues were asked by a casino to look at a new design of a card shuffling machine that the casino had built. The researchers found that the machine did not produce sufficiently random arrangements, as it simply did not shuffle the cards enough times. But using the machine twice would resolve the problem.

So next time you’re at a casino, take a look at how many times the dealers shuffle. The cards may not be as random as you think they are, which could be to your advantage.

Graham Kendall, Professor of Computer Science and Provost/CEO/PVC, University of Nottingham

This article was originally published on The Conversation. Read the original article.

Can maths help you win at roulette?

Graham Kendall, University of Nottingham

Albert Einstein supposedly once said: “No one can win at roulette unless he steals money from the table while the croupier isn’t looking.”

Although I wouldn’t normally question Einstein, this statement isn’t true. In fact, you can use Einstein’s specialist subject, physics, to help you win. Or you can find a biased wheel that makes some numbers more likely to come up.

What Einstein actually meant was that there is no mathematical trick that can help you win at roulette. Each spin is an independent trial and, in the long run, the casino will win. This is different to a game such as Blackjack where the probabilities change as cards are dealt.

But some believe that it is possible to exploit the way the roulette wheel, and the betting cloth, is laid out to give themselves an advantage. The idea is that you can make bets on the layout in a way that you are guaranteed to win. But is this really possible?

Roulette wheel layout

Like a dartboard, the layout of a roulette wheel did not come about by accident. It was carefully planned and exhibits certain properties. In fact, there are two different layouts: an American wheel and a European wheel. The two layouts are shown below.

American roulette wheel layout.
Wikimedia Commons

European Roulette Wheel Layout.
Wikimedia Commons

Notice that the American wheel has two zeroes. This is important as it doubles the advantage for the casino. On a European wheel you would expect to lose, in the long run, 2.7% of any money you bet with. On an American wheel you can expect to lose 5.26% (if you are interested in the mathematics of roulette, the video at the end will show you how these odds are calculated).
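Those two percentages follow directly from the 35-1 pay-out on a single number and the number of pockets on each wheel:

```python
def house_edge(pockets, payout=35):
    """Expected loss per unit staked on a single-number bet."""
    p_win = 1 / pockets
    expected_return = p_win * payout - (1 - p_win)   # a win pays 35, a loss costs 1
    return -expected_return

print(f"European wheel (37 pockets): {house_edge(37):.2%}")  # 2.70%
print(f"American wheel (38 pockets): {house_edge(38):.2%}")  # 5.26%
```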

The numbers are arranged in a different order on each wheel but there are some similarities in the patterns. On both wheels, the red and black numbers alternate around the wheel, although if you removed the zeroes, the American wheel would have consecutive reds and blacks. The wheels are also structured so that the low numbers (1-18) and the high numbers (19-36) should alternate as much as possible.

On a European wheel, this is only violated where the 5 sits next to the 10 (both low numbers). On the American wheel, there are many examples where this rule is violated. It is for this reason that the American wheel is considered not as balanced as the European wheel. Both wheels also try to distribute odd and even numbers as evenly as possible. But again there are a number of violations of this rule on both wheels.

On the European wheel there are two other interesting symmetries. First, all the low red numbers and black high numbers are on one side of the zero, and the high red numbers and low black numbers are on the other side. Second, the sequence 29-7-28-12-35-3-26-0-32 contains no numbers between 13 and 24 (the second dozen). You can place a bet on the whole of the second dozen, with odds of 2-1.

European roulette layout.
Wikimedia Commons

So, can we beat the maths?

A simple search on Google will return many (possibly millions) of systems for playing (and supposedly winning) roulette. Some easy, some complicated, some well described, some not so.

A system should really be a combination of a playing strategy and a money management strategy. Perhaps the best known money management strategy is the Martingale system. This system is guaranteed to win money as long as you have enough of a bankroll to double your bet after every loss and you do not hit the table limit – which you quickly will. The Martingale system is probably the quickest way to bankruptcy known to man.
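To see how quickly doubling-up collides with the table limit or an empty wallet, here is a rough simulation of the Martingale system on an even-money bet. The bankroll, base stake and table limit are invented figures; the qualitative conclusion does not depend on them.

```python
import random

def martingale_session(bankroll=1_000, base_stake=10, table_limit=500, spins=200):
    """Play even-money bets on a European wheel (18 of 37 pockets win), doubling after each loss."""
    stake = base_stake
    for _ in range(spins):
        if stake > bankroll or stake > table_limit:
            return bankroll                      # can't place the required bet: stop
        if random.random() < 18 / 37:            # the bet wins
            bankroll += stake
            stake = base_stake
        else:                                    # the bet loses: double up
            bankroll -= stake
            stake *= 2
    return bankroll

random.seed(1)
results = [martingale_session() for _ in range(10_000)]
print(sum(r < 1_000 for r in results) / len(results))   # fraction of sessions that end down
```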

Whatever betting strategy, and money management strategy, you choose, they all suffer from the same fate. Assuming that each number on the wheel has the same probability of being selected – meaning the wheel is not biased – the maths means the casino will always win. The system may look good, and may work in the short term, but when one of the numbers comes up that you have not bet on you will lose and the casino will move towards its win expectation (2.7% or 5.26%).

Some systems involve betting on many numbers, perhaps 20. In this case, you will win quite often as you are covering more than half of the numbers. But when none of your numbers turns up (which will happen almost half the time) you lose all 20 of the bets you have made. This will often wipe out any wins to date.

Any system, so far devised, can be analysed to show that there is a win expectation for the casino. The following video shows the maths.

The mathematics of roulette.

You might as well place a single chip on the same number every time and hope that it appears more than it should during the short time that you are playing.

We can dress up the layout of the wheel, the layout of the betting cloth, our number selection and our money management system however we like, but the maths is always there, quietly working against us. You might as well just have fun, pick random numbers and trust to Lady Luck. Either that, or do as Einstein suggested and steal chips (not that we’d recommend it).

Graham Kendall, Professor of Computer Science and Provost/CEO/PVC, University of Nottingham

This article was originally published on The Conversation. Read the original article.

How Isaac Newton could help you beat the casino at roulette

Graham Kendall, University of Nottingham

Imagine walking into a casino with a computer strapped to your chest. Solenoid electromagnets thump against your body telling you where to place your bet on the roulette table. Suddenly, you start getting electric shocks. You rush to the toilet to undertake emergency repairs, hoping that the casino staff do not realise what is happening.

In the late seventies, graduate student Doyne Farmer and colleagues did just that – with purpose-built computers that could predict where a roulette ball would land. The project, described in the book The Newtonian Casino (published as The Eudaemonic Pie in the US), was, however, difficult and fraught with technical problems. The team never really found a reliable way of doing it. But decades later, is it any closer to becoming a reality?

In a game of roulette, the croupier spins a wheel in one direction and a ball in the other direction. Players then place bets on where the ball will land by choosing either a single number, a range of numbers, the colours red or black or odd or even numbers.

Our understanding of the physics behind the movement of the ball and wheel is pretty solid – governed by Newton’s laws of motion. As the ball slows, gravity takes hold and it falls into one of the numbered compartments. It is predictable when the ball will leave the rim. However once it does, the route it takes to a numbered slot is less so. This is because the ball bounces around as it strikes various obstacles.

Every roulette wheel is slightly different. Atmospheric conditions continually change and the wheel itself has features that encourage randomness – such as the size of the frets between the numbers and the diamond-shaped obstacles that intercept the ball as it falls down to the wheel. This means that you cannot predict the exact number where the ball will land. But you only need to know which area of the wheel the ball will land and you can gain a massive advantage over the casino – more than 40%. This is a huge swing from the 5.26% margin that US casinos have over players – often referred to as the house edge. In Europe it is only 2.7%, as the wheel has only one zero (a US wheel has two zeroes).

Sweaty experiments

When Farmer and his team entered the casino for the first time, two people were wearing computers. One had a computer built into his shoes, with the task of inputting data by tapping switches under the toes. This computer performed two main functions. One was to adjust parameters for each wheel before a game, such as the rate at which the ball and wheel slowed down, and the velocity of the ball when it fell off the track. They also had to determine whether the wheel exhibited any tilt.

The second job was during live play. The player with the shoe computer tapped the toe switches each time a certain point (typically the double zero) on the wheel passed by and also when the ball passed by. Using this information, the program could calculate the speed of both the wheel and the ball – thus knowing when the ball would start to fall. Knowing the relative positions of the ball and the wheel meant that a prediction could be made about where the ball would finally land. The computer then had to transmit the prediction to the person wearing the second computer. This was achieved by weak radio signals.
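The calculation the shoe computer performed can be caricatured in a few lines. The sketch below estimates the ball's speed and deceleration from two successive lap times and predicts how many revolutions it will complete before slowing to the speed at which it typically leaves the track. Every number in it – the lap times, the constant-deceleration assumption, the fall-off speed – is invented for illustration; the real system had to fit parameters like these to each physical wheel.

```python
def revolutions_before_falling(lap1, lap2, falloff_speed):
    """Rough prediction of how far the ball travels before leaving the rim.

    lap1 and lap2 are two successive lap times in seconds (lap2 the more recent);
    falloff_speed is the angular speed (revolutions/second) at which the ball
    typically drops off the track. Assumes the speed decays roughly linearly.
    """
    v1, v2 = 1 / lap1, 1 / lap2                   # average speed on each lap, rev/s
    decel = (v1 - v2) / lap2                      # crude constant-deceleration estimate
    t_fall = (v2 - falloff_speed) / decel         # time until the ball leaves the rim
    return v2 * t_fall - 0.5 * decel * t_fall**2  # revolutions still to come

# Ball laps timed at 0.50s and then 0.55s; this wheel's ball tends to drop at 0.8 rev/s.
print(revolutions_before_falling(0.50, 0.55, falloff_speed=0.8))   # ~4 revolutions to go
```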

Shoe computer. The Eudaemonic Pie display at the Heinz Nixdorf Museum.
https://en.wikipedia.org/wiki/J._Doyne_Farmer, CC BY-SA

The second computer, strapped to someone else, received the radio signals and conveyed this information to the player by the solenoid electromagnets that thumped that player’s stomach. A code had been developed which relayed the predicted number, with the player placing bets on that number and several numbers either side to account for the randomness. In order that the casinos could not easily see what they were doing, the team altered their betting patterns slightly. For example, not betting on all the consecutive numbers.

However this never gave them the 40% advantage observed in the lab – mainly due to technological problems such as short circuits caused by sweating, wires becoming loose and lost radio connections.

It took several years for the team (which now comprised about 20 people who’d worked on the project in varying degrees) to develop an improved computer system. Both computers were now in custom-built shoes. This could protect the operator from being electrocuted but would also make it harder for the casino to detect. The other innovation was that the computers were set in resin blocks, with only the toe-operated switches and the solenoids, which now drummed against the feet, visible. This was to try to combat problems such as loose wires and sweating.

Binion’s casino.
Ken Lund/Flickr, CC BY-SA

They then entered Binion’s casino in Las Vegas ready for an all-out assault. Once the parameters had been set, the first prediction was to bet in the third octant – which included the numbers 1, 13, 24 and 36. The ball landed in 13 and the team got paid off at 35-1. The years of work looked promising, but the solenoids eventually started to act randomly, so the accurate predictions from one computer were not being transmitted to the other. The team suspected it was due to the electronic noise present in casinos. Eventually they had no choice but to abandon the idea.

Would it work today?

The main issue in the late seventies and early eighties was that the team had to build their own computers from scratch, literally – they had to design the computer, buy all the components and get busy with a soldering iron. These days, the computers are readily available, as the following video shows.

Technology has evolved. These days, all the required processing power could be fitted into a single unit. You could imagine a system based on a mobile phone where the camera videos the ball and the wheel and image processing software extracts the relevant data so that the prediction software can calculate the final position of the ball.

But certain challenges still remain. If several people are involved, which is the best way to avoid detection, how can you work as a team and pass data? Perhaps the use of free wifi in many casinos could be a solution? Another problem is how best to hide the fact that you are trying to use an electronic device to predict where the ball will land, when you need to input data and receive the prediction. Here, suitably connected glasses may be one way around it, used in tandem with toe-operated switches.

The hardest challenge, however, is the casino itself. Casinos are certainly unlikely to simply let you point a camera at the roulette wheel, especially if you are winning. If they spotted you doing it, they would be likely to ask you to leave, as it is often illegal to use such devices. But with a little creativity it may not be long before scientists prove they are able to outsmart casinos.

Graham Kendall, Professor of Operations Research and Vice-Provost, University of Nottingham

This article was originally published on The Conversation. Read the original article.

How to beat the casino – legally

Graham Kendall, University of Nottingham

If there’s one thing everybody knows about gambling it’s that the house always wins. And while it is true that casinos always make a profit, there are a number of ways to cheat the system – some of which are actually perfectly legal.

Half a century ago, mathematician Edward Thorp published a groundbreaking book outlining how a player could use “card counting” to get an advantage in the game Blackjack by keeping track of the cards left in a deck. Ever since, casinos have been trying to eradicate card counting while card counters are getting increasingly skilled at not getting caught. So is it possible to outplay casinos today? And what will it be like in the future?

Winning Blackjack Hand.
Wikimedia Commons

Casinos are businesses and operate by building in a margin – often referred to as the house edge. If you play roulette and bet on a single number you will be paid at odds of 35-1 when the true odds are 36-1 in Europe and 37-1 in the US. The fact that you are receiving less than the true odds is the house edge and explains why casinos make money in the long term. Of course, some people have to win, otherwise casinos would cease to exist.

Advantage players

What casinos don’t like are “advantage players” – people seeking to have an edge over the house. Sometimes this involves cheating and/or illegal activities ranging from past posting (making a bet after the time when no more bets are to be taken) to collaborating at the poker table and using a computer to help make decisions.

Card counting, however, is legal. In Blackjack, the aim of the player is to achieve a hand of cards whose points add up nearer to 21 than the dealer’s hand, but without exceeding 21. Many hands are played from the same deck of cards, so what happens in one hand will influence what happens in future hands. As an example, if a ten has been played from the pack then it cannot appear in the next hand. This is different from other games, such as roulette, where the outcome of one spin has no effect on the next spin.

Professor Thorp and his contribution to card counting.

Card counting is based on the fact that a large proportion of high cards (such as tens, jacks, queens and kings, which are all worth ten points) left in the unplayed deck statistically improves the player’s chances. This is because a player can decide not to draw a new card to a hand such as 16, but the casino is forced to, as it follows strict rules. If there are a high proportion of high cards left in the unplayed deck of cards, the dealer has more chance of busting (going over 21). This can be combined with “basic strategy” – developed from computer simulations of millions of blackjack hands – which tells the player the best action to take for each possible card combination.

Combining card counting and basic strategy can help a player convert the (long term) house edge from 2.7%, in favour of the casino, to about a 1% advantage to the player. Of course, once you have this advantage you can increase your bet.

To give a simple example, if you were playing basic strategy and were dealt a ten and a six, and the dealer had a three showing (one of the dealer’s cards is visible to the player), you would stand (not take another card) as you hope that the dealer would draw a ten and bust. If you were card counting, and you knew that more low cards had been played, you might decide to increase your stake at this point.
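As an illustration of what keeping the count actually involves, here is one well-known scheme, the hi-lo count. The article does not prescribe a particular system, so treat this purely as an example: low cards leaving the deck are good for the player and add to the count, while tens and aces leaving are bad and subtract.

```python
# Hi-lo card values: +1 for low cards, 0 for neutral cards, -1 for tens and aces.
HI_LO = {**{rank: +1 for rank in "23456"},
         **{rank: 0 for rank in "789"},
         **{rank: -1 for rank in ["10", "J", "Q", "K", "A"]}}

def running_count(cards_seen):
    """Sum the hi-lo values of every card dealt so far."""
    return sum(HI_LO[card] for card in cards_seen)

# A positive count means the unplayed deck is rich in tens and aces -- the
# situation in which a counter would start to increase their stake.
print(running_count(["2", "5", "6", "K", "3", "9", "4"]))   # +4
```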

Evolving battle

Casinos have introduced a number of measures to deter card counting. These include spotting those doing it and simply banning them from playing, or even from entering the casino. Another approach is to increase the number of decks from one to (typically) six, or even eight. Some casinos also shuffle the cards after only about 75% have been played or shuffle them constantly using automatic shufflers.

You might wonder why casinos don’t simply withdraw blackjack. Well, it remains a popular game, and one that is still profitable. There are also many would-be card counters who are not actually that good at it, and they provide income to the casinos.

Many blackjack players have fought back against such measures, arguing that casinos should allow gamblers to use skill when playing the game. As a card counter operating on their own is relatively easy to spot (intense concentration, increasing bets and so on), a team of students from MIT showed it could successfully be done in teams. The idea is that somebody else counts the cards – they may not even be sitting at the table. When the count reaches an agreed value, they signal to another player, who joins the table to start betting. This is a lot more difficult to detect but casinos may stop players joining the game until after a shuffle to combat such a strategy.

Breaking Vegas: the true story of The MIT blackjack team.

Other players have used shuffle tracking, where blocks of cards are tracked so that you have some idea when they will appear. If you are given the option to cut the pack, you try and cut the pack near where you think the block of cards you are tracking is so that you can bet accordingly. A variant on this is to track aces as, if you know when one is likely to appear, you have a distinct advantage over the casino.

It’s been 50 years since Thorp’s book, and it is unlikely that the war of wills between blackjack players and casinos will end any time soon. Some of our work has investigated how artificial neural networks (simple models of the human brain) could help evolve blackjack strategies. This was done by playing thousands of blackjack hands and the computer learning what to do in any given situation, getting better each time. There is a lot of scope to see if automated computer programs could learn even more sophisticated strategies.

Graham Kendall, Professor of Operations Research and Vice-Provost, University of Nottingham

This article was originally published on The Conversation. Read the original article.

Optimising the future with mathematics

Geoff Prince, Australian Mathematical Sciences Institute

AUSTRALIA 2025: How will science address the challenges of the future? In collaboration with Australia’s chief scientist Ian Chubb, we’re asking how each science discipline will contribute to Australia now and in the future. Written by luminaries and accompanied by two expert commentaries to ensure a broader perspective, these articles run fortnightly and focus on each of the major scientific areas. Today, we add mathematics to the mix.

Mathematics is an absolutely critical part of our future – and we can maximise its impact for the public and private good over the next 11 years if we take the opportunity now.

It is the multidisciplinary and universal nature of mathematics which makes this true: multidisciplinary because of its vast scope and universal because of the effectiveness of its processes.

In some fields it plays a supportive role and in others, the lead. I will describe a lead role which will be crucial to achieving the sort of economy we want: the optimisation of public and private sector enterprise. (I will touch on statistics and its role in data analysis only in passing as my esteemed colleague Terry Speed will cover it later in this series.)

Charles Darwin summed up the deep importance of mathematics when he said

Mathematics seems to endow one with something like a new sense.

Mathematicians do not have a monopoly on this extra sense. Broad mathematical capability across the community underpins most qualities identified in the aspiration for 2025. Bankers, nurses and engineers competently practise various forms of mathematics on a daily basis.

Today’s 12-year-olds entering secondary school will be 2025’s young graduates.
After the slide in the performance of our 15-year-olds exposed in the latest Programme for International Student Assessment (PISA) results, it’s not clear that they will enjoy the same broad mathematical capability as today’s 23-year-olds.

The Australian Mathematical Sciences Institute’s (AMSI) own aspiration for 2025 is to lift the percentage of secondary maths classes taught by qualified maths teachers from an appalling 66% now to 100%.

We have serious work to do here just to maintain the status quo, but we must also be prepared to deal with the new quantitative and qualitative challenges thrown up by this rapidly changing world – and to do that, we must be more agile than we are at present.

Getting practical about mathematics

Biology is a case in point. The slow uptake of mathematics and statistics in the university biology curriculum hampers our progress despite the demand for mathematically capable specialists at the research frontier.

The lesson here is to connect mathematics and biology in our schools, two disciplines which have not traditionally been close (notwithstanding Darwin’s observation). Maths is meeting the biosciences in the 21st century much as maths met physics in the 20th, and we must communicate this through the curriculum – not leave it to Brian Cox, Simon Singh, Facebook and Twitter.

We need our ‘mathematical sense’ or we risk ending up with The Blind Leading the Blind (Pieter Bruegel the Elder, 1568).
Wikimedia Commons, CC BY

The advanced mathematics that the discipline itself practises loosely splits into

  1. theoretical mathematics: developed without an immediate view to external application. It is the deep intellectual nature of theoretical mathematics which attracts many to the discipline (think of the Clay Millennium problems).
  2. applicable mathematics: focused on practical benefit on various time scales. It is applicable mathematics which most directly, but not exclusively, impacts on our aspirations for 2025.

Many of us move freely between the two and history shows that the multidisciplinary capacity of mathematics depends critically on the health of the discipline proper. The use of 19th and 20th century differential geometry in 21st century computer graphics is a striking example. This pointed observation is aimed at the managements of our universities!

The word cloud below shows some public, private and research enterprises, all contributing critically to where we will be in 2025 and all employing or engaging with research-trained mathematicians and statisticians.

Wordle

Mathematicians’ roles are increasingly important in a world addicted to progress, and they are multidisciplinary in nature – statisticians work with retailers to refine and analyse their loyalty programs, and mathematicians work with banks to manage financial risk and with hospitals to manage emergency ward workflows.

We make a fundamental contribution to the growth of knowledge based industries and to the smart operation of the natural and primary resource sectors. Unfortunately we don’t communicate this very well, especially to students and their parents, but we are making a start.

The practice of this applicable mathematics can be broken into support roles and lead roles. Roughly speaking the support roles involve the practice of existing sophisticated mathematics and the lead roles involve active research:

  • computational mathematics plays a lead role in industrial, biological, economic and environmental modelling, such as in the increasing accuracy and sophistication of climate change models
  • bioinformatics plays a lead role in genetics, creating algorithms to analyse genomic data to expose genetic markers for disease
  • optimisation should play a lead role in both making the Australian economy competitive in 2025 and in improving our national well-being.

Optimising optimisation

Broadly speaking, the mathematical field of optimisation involves determining an optimal scenario (relative to some criteria) among a collection of alternatives.

Examples include determining the most efficient route between two locations, where “route” and “location” can have many meanings, or the most economical use of resources in production processes. Optimisation problems can involve thousands of variables and can minimise or maximise many “objective functions”.
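As a toy example of the resource-allocation flavour of these problems, the sketch below maximises profit over two products subject to limited machine and labour hours, using an off-the-shelf linear programming routine. The profits and capacities are invented; real industrial problems involve thousands of variables rather than two.

```python
from scipy.optimize import linprog

# Maximise 40*x1 + 30*x2 (profit), i.e. minimise the negative of it.
profit = [-40, -30]
constraints = [[2, 1],   # machine hours used per unit of each product
               [1, 2]]   # labour hours used per unit of each product
capacity = [100, 80]     # machine hours and labour hours available

result = linprog(profit, A_ub=constraints, b_ub=capacity,
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)   # optimal production plan (40, 20) and its profit (2200)
```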

It sounds dry, but it cries out “productivity growth!” and “competitive advantage!” and, in times of emergency, “lives saved!”

Darwin would certainly agree that optimisation is in his “new sense” category.

Australia is getting better at optimisation, from traffic management to mining to aircraft scheduling, but it’s patchy. The defence forces are very good at it, in part due to the work of the Defence Science and Technology Organisation (DSTO), as well as the CSIRO, NICTA, IBM and some of the universities.

Adrianne Behning Photography/Flickr, CC BY-NC-ND

The health sector is not uniformly good at optimisation, nor are our public transport systems.

Small to medium enterprise is not good at it at all. We are babes in the woods compared to countries such as Germany and the US for whom optimisation is worth billions.

The really smart way to optimise infrastructure is to build optimality into the design. We almost never do this – we usually optimise as an afterthought, if at all.

But one shining Australian example of optimisation in design is the work of business analytics and optimisation company Biarri Commercial Mathematics on the National Broadband Network (NBN) – work so good that they are one of six global 2014 finalists for the prestigious Franz Edelman Prize.

The mandating by government of optimisation integral to design for significant public and private infrastructure projects would have a transformative impact on the Australian economy. It would not only boost productivity but build in competitive advantage and contribute to a sustainable future.

Optimisation would become part of the economic culture at all scales.

By keeping the bureaucracy to a minimum this measure would encourage the growth of dynamic companies like Biarri and draw on the capacity of CSIRO, IBM, NICTA and the universities, all of whom would be able to tender for the design work.

It would strengthen the mathematical sciences and thrust us, sure-footed, towards 2025 and beyond without fear of falling into the ditch of mathematical ignorance.


Nalini Joshi, Professor of Mathematics at the University of Sydney

Mathematics is a universal language that unlocks innovation by abstracting a problem to reveal patterns that answer the crucial questions. The key to Australia’s future competitiveness and security lies in continually creating and adapting mathematical representations of the real world.

Mathematical truths make a complex world more comprehensible and manageable; they are intertwined with efficiency and innovation at all levels of the economy.

lytfyre/Flickr, CC BY-NC-SA

Mathematics can show us how to minimise traffic snarls in our cities, cut costs in a complex network of rail transportation, avoid congestion on the internet, produce innovative designs in optical lenses, weigh costs and benefits of environmental policies and optimise a small business plan.

Mathematics can create new and better Australian industries. It is now central to fundamental questions of nature, life and health.

How does genomic information lead to development and better health in early life? How can the resolution of medical images be improved while reducing their file size? How can mathematics be used to create a safer regulatory framework for financial markets?

The more technologically sophisticated a society becomes, the more critical its need for mathematical thinking. The pathways towards economic diversity and opportunity are paved with mathematics.


John Rice, Honorary Professor of Mathematics at the University of Sydney

A smart economy depends on mathematical skills but you would hardly know it. Mathematics in practice is often not recognised as such, and unrecognisable in terms of school and undergraduate mathematics. This is the great failure of mathematics education.

The greatest contribution that the discipline of mathematics could make to Australia’s smart economy is to remedy that.

The remedy concerns approach as well as content. Mathematics as it is practised, in research and professional occupations, requires thought, creativity, judgement, questioning and problem solving. An economy based on production lines might not require these skills as a matter of course, but a knowledge and innovations-based economy does.

queensu/Flickr, CC BY-NC-ND

Current mathematics education, in schools and universities, is satisfied with programming students to carry out certain mathematical processes, and assessment rewards students who can calculate everything even if they understand nothing.

It’s more like preparing for a production line than for a knowledge-based economy.

The mathematics discipline seeks a remedy in improving the knowledge base of those teaching mathematics. However, “upskilling” teachers with “more of the same” will not deliver mathematics in the form that a smart Australia needs.

We need mathematics “to be taught more like it is done” by those engaged in it, in both the innovations economy and research. This is a cultural change that involves the discipline itself, one that must be mainstreamed into school and university systems.

Without this, the connection between mathematics and the economy will remain dubious in the public mind, and mathematics will remain hamstrung in achieving its proper influence and delivering its benefits to a 21st century Australia.


This article is part of the Australia 2025: smart science series, co-published with the Office of the Chief Scientist.
Further reading:
Australia’s future depends on a strong science focus today
Physics: a fundamental force for future security
Proteins to plastics: chemistry as a dynamic discipline
Australia can nurture growth and prosperity through biology
A healthy future? Let’s put medical science under the microscope
Groundbreaking earth sciences for a smart – and lucky – country
To reach for the stars, Australia must focus on astronomy
Marine science: challenges for a growing ‘blue economy’
Building the nation will be impossible without engineers
Australia’s got ICT talent – so how do we make the most of it?
Agriculture in Australia: growing more than our farming future

Geoff Prince, Director and Professor, Australian Mathematical Sciences Institute

This article was originally published on The Conversation. Read the original article.

The science that makes us spend more in supermarkets, and feel good while we do it

Graham Kendall, University of Nottingham

When you walk into a supermarket, you probably want to spend as little money as possible. The supermarket wants you to spend as much money as possible. Let battle commence.

As you enter the store your senses come under assault. You will often find that fresh produce (fruit, vegetables, flowers) is the first thing you see. The vibrant colours put you in a good mood, and the happier you are the more you are likely to spend.

Your sense of smell is also targeted. Freshly baked bread or roasting chickens reinforce how fresh the produce is and make you feel hungry. You might even buy a chicken “to save you the bother of cooking one yourself”. Even your sense of hearing may come under attack. Music with a slow rhythm tends to make you move more slowly, meaning you spend more time in the store.

Fresh Produce at a Supermarket.

Supermarkets exploit human nature to increase their profits. Have you ever wondered why items are sold in packs of 225g, rather than 250g? Cynics might argue that this is to make it more difficult to compare prices as we are working with unfamiliar weights. Supermarkets also rely on you not really checking what you are buying. You might assume that buying in bulk is more economic. This is not always the case. Besides, given that almost half of our food is believed to be thrown away, your savings might end up in the bin anyway.

Strategies such as those above get reported in the media on a regular basis. Mark Armstrong analysed retail discounting strategies for The Conversation last year, for example, and the Daily Mail recently published a feature on making “rip offs look like bargains”.

You might think that awareness of these strategies would negate their effectiveness, but that doesn’t appear to be the case. It would take a strong person not to give way to an impulse buy occasionally and, for the supermarkets, the profits keep flowing.

Product placement

There are also marketing strategies you may not be aware of that affect your buying habits. Have you ever considered how supermarkets decide where to place items on the shelves and, more importantly, why they place them where they do?

When you see items on a supermarket shelf, you are actually looking at a planogram. A planogram is defined as a “diagram or model that indicates the placement of retail products on shelves in order to maximise sales”.

Planograms in action.
lyzadanger

Within these planograms, one phrase commonly used is “eye level is buy level”, indicating that products positioned at eye level are likely to sell better. You may find that the more expensive options are at eye level or just below, while the store’s own brands are placed higher or lower on the shelves. Next time you are in a supermarket, just keep note of how many times you need to bend down, or stretch, to reach something you need. You might be surprised.

The “number of facings”, that is how many items of a product you can see, also has an effect on sales. The more visible a product, the higher the sales are likely to be. The location of goods in an aisle is also important. There is a school of thought that goods placed at the start of an aisle do not sell as well. A customer needs time to adjust to being in the aisle, so it takes a little time before they can decide what to buy.

You might think that designing a good planogram is about putting similar goods together: cereals, toiletries, baking goods and so on. However, supermarkets have found it makes sense to place some goods together even though they are not in the same category. Beer and crisps is an obvious example. If you are buying beer, crisps seem like a good idea, and convenience makes a purchase more likely. You may also find that they are the high-quality brands, but “that’s okay, why not treat ourselves?”

This idea of placing complementary goods together is a difficult problem. Beer and crisps might seem an easy choice, but this could have an effect on the overall sales of crisps, especially if the space given to crisps in other parts of the store is reduced. And what do you do with peanuts? Have them near the beer as well?

Supermarkets will also want customers to buy more expensive products – a process known as “upselling”. If you want to persuade the customer to buy the more expensive brand of lager, how should you arrange the store? You still need to stock the cheaper options, for those who really are on a budget. But for the customers who can afford it, you want them to choose the premium product. Getting that balance right is not easy. My colleagues and I are among the researchers striving to develop the perfect algorithm, taking into account the size, height and depth of shelves, to direct customers to the right product at the right time.
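To give a flavour of the trade-off such an algorithm has to weigh, here is a minimal sketch in Python. The products, sales rates, margins and shelf visibility weights below are all invented for illustration – it is a toy model, not the algorithm described above.

```python
# A minimal, illustrative sketch of shelf allocation as an optimisation
# problem. This is NOT the algorithm described in the article; the
# visibility weights, margins and product names are invented for the example.

from itertools import permutations

# Hypothetical visibility weight for each shelf row ("eye level is buy level").
SHELF_WEIGHTS = {"top": 0.7, "eye": 1.0, "waist": 0.9, "bottom": 0.6}

# Hypothetical products: expected sales rate and profit margin per unit.
PRODUCTS = {
    "premium lager": {"rate": 40, "margin": 1.20},
    "own-brand lager": {"rate": 55, "margin": 0.40},
    "craft crisps": {"rate": 30, "margin": 0.80},
    "own-brand crisps": {"rate": 50, "margin": 0.30},
}

def expected_profit(assignment):
    """Expected profit if each product is placed on the given shelf row."""
    return sum(
        PRODUCTS[p]["rate"] * PRODUCTS[p]["margin"] * SHELF_WEIGHTS[shelf]
        for p, shelf in assignment.items()
    )

def best_layout():
    """Brute-force the best one-product-per-row layout (fine at this toy size)."""
    best = None
    for order in permutations(PRODUCTS):
        assignment = dict(zip(order, SHELF_WEIGHTS))
        profit = expected_profit(assignment)
        if best is None or profit > best[1]:
            best = (assignment, profit)
    return best

if __name__ == "__main__":
    layout, profit = best_layout()
    for product, shelf in layout.items():
        print(f"{shelf:>6}: {product}")
    print(f"expected profit score: {profit:.2f}")
```

Even at this toy scale, brute-forcing every layout quickly becomes impractical as the number of products and shelf positions grows, which is one reason shelf allocation is treated as a genuine optimisation problem.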

Shoppers won’t always obey the science, but these techniques are retailers’ most effective tools in the fight for our weekly budget. The battle between supermarkets and their customers continues.

Graham Kendall, Professor of Operations Research and Vice-Provost, University of Nottingham

This article was originally published on The Conversation. Read the original article.

How to get ants to solve a chess problem

Graham Kendall, University of Nottingham

Take a set of chess pieces and throw them all away except for one knight. Place the knight on any one of the 64 squares of a chess board.

Can you make 63 legal moves so that you visit every square on the chess board exactly once? As a reminder, a knight can move two squares in a straight line, followed by a ninety-degree turn and a move of one further square. It might seem like a hard task, but this set of moves, called the knight’s tour, can be achieved in too many ways to count.

The knight's tour

If you are able to make the 63 moves and end up on a square from which you can move back to the original square with the 64th legal move, then this is known as a closed tour. Other tours are called open tours.

Mathematicians have pondered how many closed tours exist, and they have come up with an astonishing number: more than 26 trillion. There are so many more open tours that we do not know the exact number.
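Before getting to the ants, it may help to see what finding a tour involves. The sketch below uses Warnsdorff’s rule – a classic heuristic, not the method described in this article – in which the knight always moves to the square with the fewest onward moves.

```python
# A small sketch of the knight's tour itself, using Warnsdorff's rule
# (always move to the square with the fewest onward moves). This classic
# heuristic is not the ant-based method described below; it is included
# only to make the notion of a tour concrete.

KNIGHT_MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def legal_moves(square, visited, n=8):
    """Knight moves from `square` that stay on the board and are unvisited."""
    r, c = square
    return [(r + dr, c + dc) for dr, dc in KNIGHT_MOVES
            if 0 <= r + dr < n and 0 <= c + dc < n and (r + dr, c + dc) not in visited]

def warnsdorff_tour(start=(0, 0), n=8):
    """Try to build an open knight's tour starting from `start`."""
    tour = [start]
    visited = {start}
    while len(tour) < n * n:
        options = legal_moves(tour[-1], visited, n)
        if not options:
            return None  # the heuristic got stuck (rare on an 8x8 board)
        # Warnsdorff's rule: prefer the square with the fewest onward moves.
        nxt = min(options, key=lambda sq: len(legal_moves(sq, visited | {sq}, n)))
        tour.append(nxt)
        visited.add(nxt)
    return tour

if __name__ == "__main__":
    tour = warnsdorff_tour()
    print(f"visited {len(tour)} squares" if tour else "got stuck")
```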

Both Philip Hingston and I were so captivated by the knight’s tour problem that we wanted to find a different way to solve it. We found inspiration in nature – specifically in ants.

Ants use a certain pattern, or algorithm, to forage for food. This algorithm can be used to tackle many types of problems including the Travelling Salesman Problem and Vehicle Routing Problems. Philip and I wondered if we could use the ant colony optimisation algorithm to solve the knight’s tour problem.

Here’s how that algorithm works: a computer program is used to simulate a population of ants. These ants are assigned the task of finding a solution to a problem. As each ant goes about its task it lays a pheromone trail – a smelly substance that ants use to communicate with each other. In the simulated algorithm, the most successful ants (the ones that solve the problem best) lay more pheromone than those that perform poorly.

L. Shyamal

We repeat this procedure many times (perhaps millions of times). Through these repetitions, the pheromone trails on good solutions build up, while those on poorer solutions fade away due to evaporation, which is also programmed into the simulation.

In the simulation to solve the knight’s tour problem, the ants could only make legal knight moves and were restricted to stay within the confines of the chess board. If an ant successfully completes a tour, we reinforce that tour by depositing more pheromone along it than along tours that were not completed.

Ants attempting tours later in the simulation are more likely to follow higher levels of pheromone. This means that they are more likely to make the same moves as previously successful ants.

There is a balance to be struck. If the ants follow the successful ants too rigidly, the algorithm will quickly converge on a single tour. If we push the ants too hard not to follow the pheromone of previous ants, they will just act randomly. So it is a case of tuning the algorithm’s parameters to find a good balance.
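To make the mechanism concrete, here is a minimal Python sketch of ant colony optimisation applied to the knight’s tour. The number of ants, the evaporation rate and the move-selection rule are assumptions made purely for illustration; they are not the settings used in the published work.

```python
# A minimal ant colony optimisation (ACO) sketch for the knight's tour.
# The number of ants, evaporation rate and move-selection rule are all
# invented for illustration, not taken from the published study.

import random

N = 8
KNIGHT_MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

# Pheromone on each directed move (from-square, to-square), initially equal.
pheromone = {}

def moves_from(square):
    r, c = square
    return [(r + dr, c + dc) for dr, dc in KNIGHT_MOVES
            if 0 <= r + dr < N and 0 <= c + dc < N]

def run_ant(start):
    """One simulated ant builds a tour, choosing moves in proportion to pheromone."""
    tour, visited = [start], {start}
    while True:
        options = [sq for sq in moves_from(tour[-1]) if sq not in visited]
        if not options:
            return tour
        weights = [pheromone.setdefault((tour[-1], sq), 1.0) for sq in options]
        nxt = random.choices(options, weights=weights, k=1)[0]
        tour.append(nxt)
        visited.add(nxt)

def aco(iterations=2000, ants=20, evaporation=0.1):
    best = []
    for _ in range(iterations):
        tours = [run_ant((random.randrange(N), random.randrange(N))) for _ in range(ants)]
        # Evaporation: all trails fade a little each iteration.
        for edge in pheromone:
            pheromone[edge] *= (1.0 - evaporation)
        for tour in tours:
            # Longer (better) tours deposit more pheromone on their moves.
            deposit = len(tour) / (N * N)
            for a, b in zip(tour, tour[1:]):
                pheromone[(a, b)] = pheromone.get((a, b), 1.0) + deposit
            if len(tour) > len(best):
                best = tour
    return best

if __name__ == "__main__":
    best = aco()
    print(f"best tour found visits {len(best)} of {N * N} squares")
```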

Using this algorithm, we were able to find almost half a million tours. This was a significant improvement over previous work, which was based on a genetic algorithm. These algorithms emulate Charles Darwin’s principle of natural evolution – survival of the fittest. Fitter members (those that perform well on the problem at hand) of a simulated population survive and weaker members die off.
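For comparison, here is an equally bare-bones sketch of the genetic-algorithm idea: sequences of knight moves are treated as genomes, the fitter half of the population survives each generation, and the rest are replaced by mutated copies. The representation and mutation rule here are invented for this example, not taken from the earlier work.

```python
# A bare-bones genetic-algorithm sketch for the knight's tour. The genome
# encoding, mutation rule and parameter values are illustrative assumptions.

import random

N = 8
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def fitness(genome, start=(0, 0)):
    """Number of distinct squares legally visited before the walk breaks down."""
    pos, visited = start, {start}
    for g in genome:
        dr, dc = MOVES[g]
        pos = (pos[0] + dr, pos[1] + dc)
        if not (0 <= pos[0] < N and 0 <= pos[1] < N) or pos in visited:
            break
        visited.add(pos)
    return len(visited)

def evolve(pop_size=100, generations=500, genome_len=N * N - 1):
    population = [[random.randrange(8) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # survival of the fittest
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] = random.randrange(8)  # point mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best genome visits {fitness(best)} of {N * N} squares")
```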

It is not easy to say why the ant algorithm performed so well, when compared to the genetic algorithm. Perhaps it was down to tuning the algorithmic parameters, or perhaps ants really do like to play chess!

The knight’s tour problem was being worked on as far back as 840 AD. Little did those problem-solvers know that ants, albeit simulated ones, would be tackling the same puzzle more than 1,000 years in the future.

Graham Kendall, Professor of Operations Research and Vice-Provost, University of Nottingham

This article was originally published on The Conversation. Read the original article.