Monthly Archives: November 2016

Can maths help you win at roulette?

Graham Kendall, University of Nottingham

Albert Einstein supposedly once said: “No one can win at roulette unless he steals money from the table while the croupier isn’t looking.”

Although I wouldn’t normally question Einstein, this statement isn’t true. In fact, you can use Einstein’s specialist subject, physics, to help you win. Or you can find a biased wheel that makes some numbers more likely to come up.

What Einstein actually meant was that there is no mathematical trick that can help you win at roulette. Each spin is an independent trial and, in the long run, the casino will win. This is different to a game such as Blackjack where the probabilities change as cards are dealt.

But some believe that it is possible to exploit the way the roulette wheel, and the betting cloth, is laid out to give themselves an advantage. The idea is that you can make bets on the layout in a way that you are guaranteed to win. But is this really possible?

Roulette wheel layout

Like a dartboard, the layout of a roulette wheel did not come about by accident. It was carefully planned and exhibits certain properties. In fact, there are two different layouts: an American wheel and a European wheel. The two layouts are shown below.

American roulette wheel layout.
Wikimedia Commons

European Roulette Wheel Layout.
Wikimedia Commons

Notice that the American wheel has two zeroes. This is important as it doubles the advantage for the casino. On a European wheel you would expect to lose, in the long run, 2.7% of any money you bet with. On an American wheel you can expect to lose 5.26% (if you are interested in the mathematics of roulette, the video at the end will show you how these odds are calculated).
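Those percentages fall straight out of the payout rules: a single-number bet pays 35 to 1, but there are 37 pockets on a European wheel and 38 on an American one. A minimal Python sketch of the calculation (the function name is just for illustration):

```python
# Expected loss per unit staked on a single-number bet, which pays 35 to 1.
# A European wheel has 37 pockets (one zero); an American wheel has 38 (two zeroes).

def house_edge(pockets: int, payout: int = 35) -> float:
    """Expected loss per 1-chip single-number bet, as a fraction of the stake."""
    p_win = 1 / pockets
    # Win: gain `payout` chips; lose: forfeit the 1-chip stake.
    expected_value = p_win * payout - (1 - p_win) * 1
    return -expected_value  # a positive number is the casino's edge

print(f"European edge: {house_edge(37):.2%}")  # prints "European edge: 2.70%"
print(f"American edge: {house_edge(38):.2%}")  # prints "American edge: 5.26%"
```

The extra zero on the American wheel doubles the numerator of the edge (2/38 rather than 1/37), which is exactly why it roughly doubles the expected loss.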

The numbers are arranged in a different order on each wheel but there are some similarities in the patterns. On both wheels, the red and black numbers alternate around the wheel, although if you removed the zeroes, the American wheel would have consecutive reds and blacks. The wheels are also structured so that the low numbers (1-18) and the high numbers (19-36) should alternate as much as possible.

On a European wheel, this is only violated where the 5 sits next to the 10 (both low numbers). On the American wheel, there are many examples where this rule is violated. It is for this reason that the American wheel is considered less balanced than the European wheel. Both wheels also try to distribute odd and even numbers as evenly as possible. But again there are a number of violations of this rule on both wheels.

On the European wheel there are two other interesting symmetries. First, all the low red numbers and black high numbers are on one side of the zero, and the high red numbers and low black numbers are on the other side. Second, the sequence 29-7-28-12-35-3-26-0-32 contains no numbers between 13 and 24 (the second dozen). You can place a bet on the whole of the second dozen, with odds of 2-1.

European roulette layout.
Wikimedia Commons

So, can we beat the maths?

A simple search on Google will return many (possibly millions) of systems for playing (and supposedly winning) roulette. Some easy, some complicated, some well described, some not so.

A system should really be a combination of a playing strategy and a money management strategy. Perhaps the best known money management strategy is the Martingale system. This system is guaranteed to win money as long as you have a large enough bankroll to double your bet after every loss and you never hit the table limit, which you quickly will. The Martingale system is probably the quickest way to bankruptcy known to man.
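It is easy to see why the table limit kills the Martingale: the required bet grows exponentially with each loss. A short sketch, assuming a 1-chip base bet and a 500-chip table limit (both illustrative figures, not from any particular casino):

```python
# Martingale: double the stake after every loss, so that one win recovers
# everything lost plus the base bet. The doubling is exponential.

BASE = 1           # starting bet, in chips (assumed for illustration)
TABLE_LIMIT = 500  # maximum allowed bet (assumed for illustration)

bet, total_lost, losses = BASE, 0, 0
while bet <= TABLE_LIMIT:
    total_lost += bet  # this bet loses
    losses += 1
    bet *= 2           # the system demands we double next spin

print(f"After {losses} straight losses you are down {total_lost} chips,")
print(f"and the next required bet, {bet}, exceeds the table limit.")

# Chance of that losing streak on an even-money bet such as red:
# 19 of the 37 European pockets lose (the 18 black numbers plus zero).
p_streak = (19 / 37) ** losses
print(f"Probability of such a streak from any starting spin: {p_streak:.2%}")
```

Nine straight losses sounds unlikely on any single run, but over an evening of hundreds of spins a streak of that length becomes very likely, and when it arrives it wipes out far more than the stream of 1-chip wins that preceded it.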

Whatever betting strategy, and money management strategy, you choose, they all suffer from the same fate. Assuming that each number on the wheel has the same probability of being selected – meaning the wheel is not biased – the maths means the casino will always win. The system may look good, and may work in the short term, but when one of the numbers comes up that you have not bet on you will lose and the casino will move towards its win expectation (2.7% or 5.26%).
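You can watch that win expectation assert itself with a minimal Monte Carlo sketch (assuming an unbiased European wheel and a repeated even-money bet on red; 18 of the 37 pockets win):

```python
import random

def simulate(spins: int, seed: int = 0) -> float:
    """Average profit per 1-chip even-money bet on a European wheel."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    profit = 0
    for _ in range(spins):
        # 18 winning pockets out of 37: win 1 chip or lose 1 chip.
        profit += 1 if rng.randrange(37) < 18 else -1
    return profit / spins

print(f"Average over 100 spins:       {simulate(100):+.4f} chips per bet")
print(f"Average over 1,000,000 spins: {simulate(1_000_000):+.4f} chips per bet")
# The theoretical expectation is -1/37, about -0.0270 chips per bet.
```

Short sessions bounce around, and can even show a profit, but as the number of spins grows the average loss settles near the 2.7% edge, no matter which numbers you back.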

Some systems involve betting on many numbers, perhaps 20. In this case, you will win quite often as you are covering more than half of the numbers. But when one of your numbers does not turn up (and that will happen almost half the time) you lose all 20 of the bets you have made. This will often wipe out any wins to date.
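The arithmetic for such a spread bet comes out at exactly the same house edge. A sketch, assuming 1 chip on each of 20 single numbers on a European wheel:

```python
# Betting 1 chip on each of 20 single numbers (European wheel, 37 pockets).
# You win on 20 spins out of 37, but each win nets only 35 - 19 = 16 chips
# (one bet pays 35, the other 19 chips are lost), while a miss costs all 20.

p_win = 20 / 37
win_net = 35 - 19   # net gain when one of your 20 numbers comes up
lose_net = -20      # net loss when none of them does

ev_per_spin = p_win * win_net + (1 - p_win) * lose_net
print(f"Expected result per spin: {ev_per_spin:+.4f} chips")
print(f"As a fraction of the 20 chips staked: {ev_per_spin / 20:.2%}")
# About -0.54 chips per spin, i.e. -2.70% of the stake: the house edge again.
```

Covering more numbers raises the win frequency but shrinks the win size in exact proportion; the expected loss per chip staked never moves.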

Any system, so far devised, can be analysed to show that there is a win expectation for the casino. The following video shows the maths.

The mathematics of roulette.

You might as well place a single chip on the same number every time and hope that it appears more than it should during the short time that you are playing.

We can dress up the layout of the wheel, the layout of the betting cloth, our number selection and our money management system however we like, but the maths is always there, quietly working against us. You might as well just have fun, pick random numbers and trust to Lady Luck. Either that, or do as Einstein suggested and steal chips (not that we’d recommend it).

Graham Kendall, Professor of Computer Science and Provost/CEO/PVC, University of Nottingham

This article was originally published on The Conversation. Read the original article.

Helping autonomous vehicles and humans share the road

Jeffrey C. Peters, Stanford University

A common fantasy for transportation enthusiasts and technology optimists is for self-driving cars and trucks to form the basis of a safe, streamlined, almost choreographed dance. In this dream, every vehicle – and cyclist and pedestrian – proceeds unimpeded on any route, as the rest of the traffic skillfully avoids collisions and even eliminates stop-and-go traffic. It’s a lot like the synchronized traffic chaos in “Rush Hour,” a short movie by Black Sheep Films.

‘Rush Hour’ by Black Sheep Films.

Today, autonomous cars are becoming more common, but safety is still a question. More than 30,000 people die on U.S. roads every year – nearly 100 a day. That’s despite the best efforts of government regulators, car manufacturers and human drivers alike. Early statistics from autonomous driving suggest that widespread automation could drive the death toll down significantly.

There’s a key problem, though: Computers like rules – solid, hard-and-fast instructions to follow. How should we program them to handle difficult situations? The hypotheticals are countless: What if the car has to choose between hitting one cyclist or five pedestrians? What if the car must decide to crash into a wall and kill its occupant, or slam through a group of kindergartners? How do we decide? Who does the deciding?

So far, our transportation system has evolved to be operated by humans, who are good at following guidelines but can also bend them sensibly to handle ambiguity. We stop midblock and wave a pedestrian across, even though there’s no crosswalk. We cross the double yellow line to leave cyclists enough room on the shoulder.

Improving our transportation system to take advantage of the best of machines and humans alike will require melding ambiguity and rigid rules. It will require creating rules that are, in certain ways, even more complex than what we have today. But in other ways it will need to be simpler. The system will not only have to allow automated drivers to function well: It must be easily and clearly understood by the humans at its center.

Human decision-making

Google cars, Uber self-driving cars, autonomous taxis in Singapore, Tesla’s autonomous mode and even self-driving freight trucks are already on the road. Despite one fatal crash – of a Tesla on autopilot – autonomous vehicles are still safer than a normal human driver. Nevertheless, that crash attracted a lot of media attention.

Among the roughly 100 deaths a day on U.S. roads, this one stood out because people wondered: If the driver was not relying on the autonomous software, what would have happened? What might the human have done differently?

That specific fatal crash was actually fairly straightforward: The car didn’t see a truck in front of it and drove into it. But when people think about accidents, they often worry about having to make moral choices in an instant.

Philosophers call this the “trolley problem,” after a hypothetical example in which a trolley is hurtling down a track toward some people who cannot get out of the way in time. You have the option to switch the trolley onto a different track, where it will hit some other people.

Switch the trolley, or don’t?
McGeddon, CC BY-SA

There are an infinite number of variations on the problem, created by specifying the numbers and types of people, replacing them with animals, sending the trolley into a wall where its passengers die, and more. Would you, for example, save five children and let a senior citizen die? What about saving a dog versus killing a criminal? You can try out many of these variations – and make new ones – online in a fascinating “Moral Machine” game from which MIT researchers are gathering information on what decisions people make. They hope to find at least some human moral consensus, which can then inform autonomous vehicles and other intelligent machines.

The crux of the problem is whether you choose to switch the trolley or not. In one case, you make an active decision to intervene, deciding to save – and kill – certain groups. In the other, you choose not to act, effectively letting fate take its course. People who use the Moral Machine can see how their results compare to everyone else’s. So far the outcomes suggest that people intervene to save younger, fitter people with higher perceived social values (doctors over criminals, for example).

Human – and computer – preferences

To handle these relative preferences, we could equip people with beacons on their cellphones to signal nearby cars that they are a certain type of person (child, elderly, pedestrian, cyclist). Then programmers could instruct their autonomous systems to make decisions based on priorities from surveys or experiments like the Moral Machine.

But that raises serious problems. For example, would an autonomous car that noticed a child running in the middle of traffic decide to run over your grandmother on the sidewalk instead?

What should an autonomous car do here?
Kids on bikes via shutterstock.com

And what about groups of people? The Moral Machine’s creators and other researchers found that society as a whole has a strong preference for choosing to save more people. What if a negligent group of runners steered a car into your path while you walked alone?

The same study also showed that people would be less willing to purchase a vehicle that could include sacrificing the driver (themselves) as an option. If society as a whole is to benefit from the advantages of autonomous vehicles, we need people to buy the cars – so we need to make them more attractive to buyers. That might mean requiring cars to save drivers, as Mercedes has already decided to do.

Breaking the rules

Investigating the trolley problem reveals that “optimizing” for countless specific, but hypothetical, scenarios is not the solution. Further, if we allow autonomous vehicles to break the rules sometimes, under certain circumstances, perhaps malicious humans could game the system. For instance, a pedestrian could step out in front of traffic, confident he won’t be hit but forcing cars to slam on the brakes. That one person might even cause multiple collisions, creating disruption without great risk to the disruptor.

Volvo has already noticed that some human drivers behave like bullies around autonomous cars. For example, a person might cut off an autonomous vehicle because he is confident the other car will avoid a collision itself. As a result, Volvo will not follow the currently common practice of clearly labeling autonomous cars on public roads. At least some of its test vehicles will remain unmarked, in hopes of measuring differences in human drivers’ behavior.

The Mercedes and Volvo developments are the first steps toward trying to clarify human expectations about autonomous cars. By standardizing people’s perceptions, it will be easier to predict what humans will do in different scenarios. That will help us engineer ways to keep everyone driving in harmony.

A common set of rules for all autonomous vehicles – whatever those are – will allow people to predict the cars’ behavior and adjust our behavior, policy and transportation infrastructure accordingly.

And if we’re going to make clearer rules, perhaps humans should follow them more closely too, as pedestrians, cyclists and drivers. In that world, we probably won’t find the perfect chaos of the “Rush Hour” short film. But it will be much more orderly – and safe and efficient – than today.

Jeffrey C. Peters, Postdoctoral Fellow in Studying Complex Systems, Stanford University

This article was originally published on The Conversation. Read the original article.