Tag Archives: Games

From little Acorns – a brief history of computer games in the classroom

Rhys James Jones, Swansea University

Play has always been central to growing up – whether it’s in the street, on a playing field or in the structured formality of teachers’ quizzes.

These days tablet computers are in nearly every pupil’s hands and children learn through computer games – both in the classroom and at home. Children’s coding initiatives and tiny computers such as the Raspberry Pi and the BBC’s micro:bit have also become big hits in the education world, helping to teach computer basics in playful ways.

But while it’s tempting to see the gamification of education as a new development, there is in fact a long history of children using computer games to help with their learning – which goes right back to the 1970s.

This was the decade during which computers first inched out of the research lab and into everyday life, bringing the idea of a home or personal computer closer to reality. In 1974, Ted Nelson, a US pioneer of information technology, wrote what is often considered “the first personal computer book” – Computer Lib/Dream Machines. It was in this book that, with uncanny foresight, Nelson suggested pupils of the future would use hyperlinked documents and touchscreens to widen their knowledge.

Away from Nelson’s speculation, the classroom reality was more mundane. Few schools could afford computers of their own. And for those that could, computer science meant punching instructions onto paper tape – a form of data storage consisting of a long strip of paper in which patterns of holes encode the data.

GCSE Computing – 1970s style.
The National Museum of Computing

But in the late 1970s, something of a change happened – at least in the UK. A worried government, concerned about Japanese innovation and the threat of automation, commissioned a report from the Manpower Services Commission (MSC) and the BBC to look into how to develop computer literacy initiatives.

Designed to raise computer awareness, these initiatives happily coincided with the rise of microprocessors, which were enabling the manufacture of cheaper, smaller machines. The BBC invited UK companies to submit proposals for a microcomputer system built to a predefined specification. A proposal from a young Cambridge company was chosen, and Acorn’s BBC Microcomputer was born.

The BBC micro in all its glory.
Public Domain

Simpler and faster

The BBC Micro, along with some other machines, could be bought by schools at half price via government subsidies.

Their beige cases and red function keys became a familiar sight from primary through to university level. But they were still expensive: a discounted “Beeb” and monitor would cost a school more than £1,000 at today’s prices.

Learning to program was simpler and faster on the Beeb than in years past, with paper tape replaced by a monitor and a beginner’s coding language known as BASIC – which stands for “Beginner’s All-purpose Symbolic Instruction Code”. This meant many more students were able to engage with computer science.

The rise of the games

Then there were the games. Despite excellent arcade clones and some true originals, the cost of the unsubsidised Beeb made it difficult for it to gain a foothold as a home videogame system.

Perhaps its educational image didn’t help either – and maybe it never quite shook off what comedian Simon Munnery described as

The stench of school … most of the games would be … Isn’t Geography Nice?

The Beeb’s dominance in schools led to a torrent of educational software of varying quality. Indeed, many of these early educational “games” aided nothing more than rote learning.

The stench of educational games.
Author provided

But educational initiatives helped push the boundaries, particularly in science and maths. The best-remembered games were imaginative, often programmed by teachers themselves, and learning happened by stealth. For example, the fondly recalled Granny’s Garden, while limited, took players on a puzzle-solving journey to avoid traps and witches – all rendered in teletext graphics.

Adventure was also central to L: A Mathemagical Journey, which used shades of Lewis Carroll to build players’ numeracy skills, while encouraging them to reflect on their experience.

Straddling home and school, Acorn’s software arm Acornsoft used surprisingly entertaining artificial intelligence techniques to test and extend subject-based learning. Younger, newly literate learners could encounter Podd, a floating head performing various actions at their typed command.

But in the 21st century, it’s not just learning, but the whole education system that has become a game – as schools, classes, teachers and students strive to top the board in league tables and PISA rankings. At the same time, teachers’ unions, children and parents all argue against excessive assessment and testing.

Maybe then we should all learn from the classroom videogame pioneers of the past few decades. Because although it’s clear that game-based learning has a role to play in education, there still needs to be a point to it – within a wider context.

And while educational games can be creative and innovative, they are at their best when they don’t smell too much of school.

Rhys James Jones, Senior Lecturer in Digital Media, Swansea University

This article was originally published on The Conversation. Read the original article.

Playing a science-based video game? It might be all wrong

Alex Leith, Michigan State University

You look down from the sky, manipulating the world and seeing how it responds to your changes. You are able to alter vegetation and climate while watching their effects on the surrounding organisms. In this way, and many others, digital games provide excellent opportunities for players to learn about complicated subjects, including the concept of evolution through natural selection. Even games designed for fun and not specifically for education can provide rich, concise, dynamic representations of complex science, technology, engineering and math topics.

Since I was young, digital games have supplemented the educational process in a range of topics, including math, science and biology. Research shows that if these games are to actually teach those concepts, they must represent them accurately. Games that include incorrect depictions teach the wrong lessons.

Since Charles Darwin and Alfred Russel Wallace, evolution has been understood as a process based on genetic differences between individual organisms of the same species. There are three key principles:

  1. Organisms with genetic advantages for their environment are more likely to survive and reproduce, while organisms whose genes make their bodies less suited will die more quickly and reproduce less effectively.
  2. Because these characteristics are genetic, they are passed on to offspring.
  3. Organisms with genes that improve their survival will have more successful offspring, which will in turn pass on their advantages to the next generation.
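The three principles above are simple enough to sketch in a few lines of code. The following toy simulation (all numbers and names here are illustrative, not drawn from any of the games discussed) selects survivors by fitness, passes traits to offspring with a little mutation, and watches the population drift toward what the environment favours:

```python
import random

random.seed(1)

def fitness(trait, environment):
    # Principle 1: organisms whose trait is closer to what the
    # environment favours are more likely to survive.
    return max(0.0, 1.0 - abs(trait - environment))

def generation(population, environment):
    survivors = [t for t in population if random.random() < fitness(t, environment)]
    # Principles 2 and 3: survivors pass their trait to offspring,
    # with a small random mutation.
    offspring = [t + random.gauss(0, 0.05) for t in survivors for _ in range(2)]
    return offspring[:len(population)]  # keep the population size bounded

population = [random.uniform(0, 1) for _ in range(200)]
environment = 0.8  # the trait value this environment favours

for _ in range(30):
    population = generation(population, environment)

mean_trait = sum(population) / len(population)
print(round(mean_trait, 2))  # drifts toward 0.8 over the generations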

Some colleagues and I looked into how well current games could serve as educational tools, specifically about evolution. We examined how Darwinian evolution was represented in 22 games, which we located either through game databases like GameSpot or IGN, or through Google searches. Most games got evolution at least partly wrong. Only five accurately represented all three key principles of evolution.

The five that got it right

A screen-capture of a Norn interacting with its environment in ‘Creatures 2,’ developed by Creatures Labs and published by Mindscape.

“Creatures” provides a rare example of all three principles. In that game, players create cartoon-like creatures called “norns” through a process that allows norns to be altered not just in terms of appearance, but at the genetic level. For the most accurate representation of evolution, the game offers a play mode called “wolfling run.” In that mode, players cannot directly affect their norns, but can observe their relative fitness in a particular in-game scenario. The potential variations in both norn creation and the environment they must survive in provide an astonishing number of evolutionary possibilities.

Maxis, best known for creating the “SimCity” game series, and its spinoff “The Sims” collection, also made a set of games called “SimEarth” and “SimLife.” Like “SimCity,” both give players top-down control of a world. “SimEarth” was designed for players to make major changes to the weather, landscape and animals to create an environment. Players were then able to see how the animals would fare in this created environment. “SimLife” was more specific: it has players engage with the animals (rather than merely creating them) to learn about the biology surrounding their survival.

A screen-capture of ‘Who Wants to Live a Million Years,’ playable on the Science Channel website.
from sciencechannel.com

We also found two academically oriented games that loosely presented the three mechanics of evolution: “Selection Game” and “Who Wants to Live a Million Years” (which was later renamed “Charles Darwin’s Game of Survival”). The two games were designed to be simple tools that could be played quickly in places like museums. Despite the limited mechanics present in such games, they still clearly show each element of the evolution process.

Market success doesn’t mean accuracy

The most commercially popular game we found didn’t quite get evolution right. “Spore” left out something many other games did, too: organisms’ genetic differences didn’t affect their survival rates. Organisms whose genes were unfit for the environment would not necessarily die more often, as evolutionary principles would predict. Instead, players could intervene and increase an organism’s likelihood of success by, say, helping it move more intelligently and strategically, beyond the scope of its genetically predisposed movements.

Nevertheless, “Spore” does a reasonable job presenting the broader concept of evolution to players, and is the best such game made this century. (“Creatures,” “SimEarth,” and “SimLife” are all from the 1990s.) “Spore” is also still available for purchase, so it is the only game readily usable by the average educator or student.

But other findings were disappointing. Most games inaccurately portrayed evolution, usually in the same way “Spore” did – allowing player intervention to save organisms that were unfit for survival.

For these other games, evolution becomes more akin to mutation during a single organism’s life than a process that occurs through generations. In “E.V.O.: Search for Eden” and “L.O.L.: Lack of Love,” players earn points they can spend to modify their organisms. In “Eco,” at the end of each level, the player arbitrarily changes an attribute, though not necessarily one that affects an organism’s survival prospects. In each of these cases, what the game calls “evolution” is actually external genetic manipulation, rather than inheriting particular traits.

These inaccuracies may confuse those unsure of what evolution actually is. If other scientific subjects are similarly poorly depicted in video games, the potential educational benefits of these games could be lost. However, as game designers become more adept at modeling scientific themes, it could herald an educational revolution.

Alex Leith, Doctoral Candidate in Media and Information Studies, Michigan State University

This article was originally published on The Conversation. Read the original article.

Google’s latest Go victory shows machines are no longer just learning, they’re teaching

Mark Robert Anderson, Edge Hill University

Just over 20 years ago, a computer beat a human world champion in a chess match for the first time, when IBM’s Deep Blue supercomputer defeated Garry Kasparov in a narrow victory of 3½ games to 2½. Just under a decade later, machines were deemed to have conquered the game of chess when Deep Fritz, a piece of software running on a desktop PC, beat 2006 world champion Vladimir Kramnik. Now the ability of computers to take on humanity has gone a step further with the mastery of the far more complex board game Go, with Google’s AlphaGo program beating world number one Ke Jie twice in a best-of-three series.

This significant milestone shows just how far computers have come in the past 20 years. Deep Blue’s victory at chess showed machines could rapidly process huge amounts of information, paving the way for the big data revolution we see today. But AlphaGo’s triumph represents the development of real artificial intelligence: a machine that can recognise patterns and learn the best way to respond to them. What’s more, it may signify a new evolution in AI, where computers not only learn how to beat us but start to teach us as well.

Go is considered one of the world’s most complex board games. Like chess, it’s a game of strategy but it also has several key differences that make it much harder for a computer to play. The rules are relatively simple but the strategies involved to play the game are highly complex. It is also much harder to calculate the end position and winner in the game of Go.

It has a larger board (a 19×19 grid rather than an 8×8 one) and an unlimited number of pieces, so there are many more ways that the board can be arranged. Whereas chess pieces start in set positions and can each make a limited number of moves each turn, Go starts with a blank board and players can place a piece in any of the 361 free spaces. Each game takes on average twice as many turns as chess and there are six times as many legal move options per turn.
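The scale this implies is easy to illustrate with a crude upper bound (illustrative only – counting truly legal Go positions is far subtler, since most raw arrangements violate the capture rules): each of the 361 points can be empty, black or white.

```python
# Crude upper bound on Go board arrangements: every one of the
# 361 grid points is either empty, black or white.
go_arrangements = 3 ** 361

print(len(str(go_arrangements)))       # a number 173 digits long
print(f"{go_arrangements:.2e}")        # roughly 1.7e+172
```

Even this loose bound dwarfs the number of atoms in the observable universe (around 10^80), which is why enumerating moves by brute force is hopeless for Go.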

Each of these features means you can’t build a Go program using the same techniques as for chess machines. These tend to use a “brute force” approach of analysing the potential of large numbers of possible moves to select the best one. Feng-Hsiung Hsu, one of the key contributors to the Deep Blue team, argued in 2007 that applying this strategy to Go would require a million-fold increase in processing speed over Deep Blue, so that a computer could analyse 100 trillion positions per second.

Learning new moves

The strategy used by AlphaGo’s creators at Google subsidiary DeepMind was to create an artificial intelligence program that could learn how to identify favourable moves from useless ones. This meant it wouldn’t have to analyse all the possible moves that could be made at each turn. In preparation for its first match against professional Go player Lee Sedol, AlphaGo analysed around 300m moves made by professional Go players. It then used what are called deep learning and reinforcement learning techniques to develop its own ability to identify favourable moves.

But this wasn’t enough to enable AlphaGo to defeat highly ranked human players. The software was run on custom microchips specifically designed for machine learning, known as tensor processing units (TPUs), to support very large numbers of computations. This seems similar to the approach used by the designers of Deep Blue, who also developed custom chips for high-volume computation. The stark difference, however, is that Deep Blue’s chips could only be used for playing chess. AlphaGo’s chips run Google’s general-purpose AI framework, TensorFlow, and are also used to power other Google services such as Street View and optimisation tasks in the firm’s data centres.

Lesson for us all

The other thing that has changed since DeepBlue’s victory is the respect that humans have for their computer opponents. When playing chess computers, it was common for the human players to adopt so-called anti-computer tactics. This involves making conservative moves to prevent the computer from evaluating positions effectively.

In his first match against AlphaGo, however, Ke Jie adopted tactics that had previously been used by his opponent, attempting to beat it at its own game. Although the attempt failed, it demonstrates a change in approach for leading human players taking on computers. Instead of trying to stifle the machine, they have begun trying to learn from how it played in the past.

In fact, the machine has already influenced the professional game of Go, with grandmasters adopting AlphaGo’s strategy during their tournament matches. This machine has taught humanity something new about a game it has been playing for over 2,500 years, liberating us from the experience of millennia.

What then might the future hold for the AI behind AlphaGo? The success of DeepBlue triggered rapid developments that have directly impacted the techniques applied in big data processing. The benefit of the technology used to implement AlphaGo is that it can already be applied to other problems that require pattern identification.

For example, the same techniques have been applied to the detection of cancer and to create robots that can learn to do things like open doors, among many other applications. The underlying framework used in AlphaGo, Google’s TensorFlow, has been made freely available for developers and researchers to build new machine-learning programs using standard computer hardware.

More excitingly, combining it with the many computers available through the internet cloud creates the promise of delivering machine-learning supercomputing. When this technology matures then the potential will exist for the creation of self-taught machines in wide-ranging roles that can support complex decision-making tasks. Of course, what may be even more profound are the social impacts of having machines that not only teach themselves but teach us in the process.

Mark Robert Anderson, Professor in Computing and Information Systems, Edge Hill University

This article was originally published on The Conversation. Read the original article.

Designing games that change perceptions, opinions and even players’ real-life actions

Lindsay Grace, American University School of Communication

In 1904, Lizzie Magie patented “The Landlord’s Game,” a board game about property ownership, with the specific goal of teaching players about how a system of land grabbing impoverishes tenants and enriches property owners. The game, which went on to become the mass-market classic “Monopoly,” was the first widely recognized example of what is today called “persuasive play.”

‘The Landlord’s Game,’ the first persuasive game.
U.S. Patent and Trademark Office

Games offer a unique opportunity to persuade their audiences, because players are not simply listening, reading or interpreting the game’s message – they are subscribing to it. To play a game, players must accept its rules and then operate within the designed experience. As a result, games can change our perceptions, and ultimately our actions.

In American University’s Game Lab and Studio, which I direct, we’re creating a wide range of persuasive games to test various strategies of persuasion and to gauge players’ responses. We have developed games to highlight the problems with using delivery drones, encourage cultural understanding and assess understanding of mathematics.

Teaching math with a video game: Aim the laser with mathematical formulas.

And we’re expanding the realm beyond education and health. With support from the Knight Foundation, we’ve been researching ways to connect games and journalism to engage people more deeply with issues in the news. (The Knight Foundation has also funded The Conversation US.) Our newest game, helping people and news organizations distinguish between real news and fake reports, is out now.

Game play involves action

When talking about games as a persuasive tool, I often repeat the notion that readers read, viewers watch and players do. It’s not a coincidence that when sitting down to learn a new game, a prospective player most often asks, “What do you do?” Persuasive play offers people the opportunity to do more than merely read or watch – they can engage with the game’s subject matter in fundamentally valuable ways.

In our work, we want to enhance people’s understanding of complex topics, change their perspective and encourage them to think critically about the world around them.

For example, Game Lab faculty member Bob Hone worked with the National Institutes of Mental Health to create a game that is now in clinical trials as a treatment for anxiety without medication. The game, “Seeing the Good Side,” asks players to find numbers hidden in detailed drawings of classroom scenes. In the process, players practice calming themselves by looking around an entire space rather than focusing on one person’s anxiety-provoking angry face.

Games can relieve stress in other ways, too. A recent study we conducted with Educational Testing Service found that a game we created to replace multiple choice standardized tests offered a more positive test-taking experience for students. In addition, students were better able to demonstrate their abilities.

Turning to the news

Persuasive play is most common in education and health, but it’s becoming more common in other fields too. We’ve been working on using game design techniques to get people to engage with the news. We’ve also proposed adapting lessons from gaming to attract and retain audiences for news websites.

A game about reality: The beginning of a commuter’s day starts with an alarm clock.
American University Game Lab, CC BY-ND

One project we did involved creating a game for WAMU, a National Public Radio affiliate in Washington, D.C. The station was covering public transportation failures in the city, specifically of the D.C. Metro subway and train service. We designed a game to increase audience engagement with the material.

WAMU sent an experienced reporter, Maggie Farley, into the field to interview a variety of Metro riders about their experience. We aggregated those stories into a single narrative and then made that story playable. In our “Commuter Challenge,” players have to make it through a week on the Metro system as a low-wage employee in the D.C. Metro service area.

The problems facing players align with real-world trade-offs the reporter found people making: Should a worker choose a pricey ride-share service to get to the daycare in time for pickup or save money by taking the train but risk incurring fees for being late? Should a worker trust the announcement that the train will be only 15 minutes late or decline an extra shift because of rail service outages? Players have to balance their family, work and financial demands, in hopes of ending the week without running out of money or getting fired for being late to work.

Boosting connections

WAMU found that the game got four times more visits than other Metro-related articles on its site. And people spent four times longer playing the game than they did reading or listening to the standard news coverage. People, it seemed, were far more eager to play a game about the plight of Metro riders than they were to hear about it.

Most recently, we released a game called “Factitious.” It works like a simple quiz, giving players a headline and an article, at the bottom of which is a link to reveal the article’s source. Players must decide whether a particular article is real news or fake. The game tells the player the correct answer and offers hints for improvement. This helps players learn the importance of reading skeptically and checking sources before deciding what to believe.

In addition, for each article we can see how many people judged it correctly or incorrectly as real or fake news, and how long they took to decide. When we change headlines, images or text, we can monitor how players’ responses shift, and report to news organizations on how those changes influence readers’ understanding. We hope games like this one become a model for getting honest feedback from the general population.

While the original “Monopoly” aimed to explain the drawbacks of land grabbing, contemporary persuasive play has even grander hopes. This new generation of games aims to learn about its players, change their perceptions and revise their behavior in less time than it takes to build a hotel on Park Place.

Lindsay Grace, Associate Professor of Communication; Director, American University Game Lab and Studio, American University School of Communication

This article was originally published on The Conversation. Read the original article.

Does playing chess make you smarter? A look at the evidence

Giovanni Sala, University of Liverpool and Fernand Gobet, University of Liverpool

The stereotype of the chess player is someone who is smart, logical and good at maths. This is why so many parents around the world are keen to get their children playing chess – in the hope that the game might help to boost their son or daughter’s intelligence levels and help them succeed in a wide variety of subjects.

But apart from chess being a great game, with its history rooted in the military traditions of eastern India, is there actually any evidence to show that playing chess can make you smarter?

In a previous article, we showed that chess players exhibit, on average, superior cognitive ability compared to non-chess players. And the skills needed to play chess have also been shown to correlate with several measures of intelligence – such as fluid reasoning, memory and processing speed.

But while the existence of a relationship between general cognitive ability and chess-skill is clear, is this simply because intelligent people are more likely to engage in the game of chess, or does engaging in chess make people smarter?

Brain game

The notion that playing chess makes you smarter goes something like this: chess requires concentration and intelligence, and as mathematics and literacy require the same general skills, then practising chess must also improve one’s academic achievement.

With this idea in mind, the Institute of Education conducted a large investigation to test the effects of chess instruction on the academic skills of nearly 4,000 British children.

School chess club.

The recently released results were disappointing – it seemed chess instruction had no effect on children’s attainment levels in mathematics, literacy or science.

Promptly, the chess community questioned the reliability of the results, particularly given that other studies offer a more optimistic picture about the academic benefits of chess instruction.

Assessing the evidence

The chess community is probably right in criticising the recent study, as it suffers from several methodological shortcomings that probably invalidate the results.

Before the results were published, we carried out a review of all the studies in the field. Our results showed some moderate effects of chess instruction on cognitive ability and academic achievement – especially mathematics.

Does chess need intelligence?

And yet, we still need to be cautious in interpreting these results as a positive indication of the power of chess on cognitive or academic skills. This is because most of the reviewed studies compared the effect of chess with groups doing no alternative activities.

This is a problem because research has shown that the excitement and fun induced by novel activities can cause a temporary positive effect on test scores – a placebo effect.

Crucially, when compared to an alternative activity – such as checkers or sports – chess did not show any significant effect on children’s skills. So, it could well just be that the observed positive effects of chess instruction are merely due to placebo effects.

Chess notes

What all this shows is that it is unlikely chess has a significant impact on overall cognitive ability. So while it might sound like a quick win – that a game of chess can improve a broad range of skills – unfortunately this is not the case.

In fact, this failure of a particular skill to generalise occurs in many other areas beyond chess – such as music training, which has been shown to have no effect on non-music cognitive or academic abilities. The same applies to video game training, brain training and working memory training, among others.

Ancient intelligence or just a good game?

The fact that skills learned by training do not transfer across different domains seems to be a universal in human cognition. In other words, you get better, at best, at what you train in – which may sound just like good old-fashioned common sense.

But although expecting chess to enhance children’s cognitive ability and overall academic achievement is just wishful thinking, this doesn’t mean it can’t still add value to a child’s education.

Clearly, playing chess involves some level of arithmetical and geometrical skill, and designing mathematical games or exercises with chess material can still be a simple and fun way to help children to learn.

Giovanni Sala, PhD Candidate – Cognitive Psychology, University of Liverpool and Fernand Gobet, Professor of Decision Making and Expertise, University of Liverpool

This article was originally published on The Conversation. Read the original article.

Here’s the best way to shuffle a pack of cards – with a little help from some maths

Graham Kendall, University of Nottingham

Shuffling a pack of cards isn’t as easy as you think, not if you want to truly randomise the cards. Most people will give a pack a few shuffles with the overhand or riffle methods (where the pack is split and the two halves are interweaved). But research has shown this isn’t enough to produce a sufficiently random order to make sure the card game being played is completely fair and to prevent people cheating.

As I wrote in a recent article about card counting, not having an effective shuffling mechanism can be a serious problem for casinos:

Players have used shuffle tracking, where blocks of cards are tracked so that you have some idea when they will appear. If you are given the option to cut the pack, you try and cut the pack near where you think the block of cards you are tracking is so that you can bet accordingly. A variant on this is to track aces as, if you know when one is likely to appear, you have a distinct advantage over the casino.

Card Counting and Shuffle Tracking in Blackjack.

So how can you make sure your cards are well and truly shuffled?

To work out how many ways there are of arranging a standard 52-card deck, we multiply 52 by all the numbers that come before it (52 x 51 x 50 … 3 x 2 x 1). This is referred to as “52 factorial” and is usually written as “52!” by mathematicians. The answer is so big it’s easier to write it using scientific notation as 8.0658175e+67, which means it’s a number beginning with 8, followed by 67 more digits.
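That figure is easy to check with a few lines of Python:

```python
import math

arrangements = math.factorial(52)  # 52 x 51 x 50 ... x 3 x 2 x 1

print(arrangements)
print(f"{arrangements:.7e}")       # 8.0658175e+67, the figure quoted above
print(len(str(arrangements)))      # 68 digits: an 8 followed by 67 more
```
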

To put this into some sort of context, if you dealt one million hands of cards every second, it would take you 20 sexdecillion, or 20,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, years to deal the same number of hands as there are ways to arrange a deck of cards.

You would think that it would be easy to get a random order from that many permutations. In fact, every arrangement is, in a sense, random. Even one where the cards are ordered by suit and then rank could be considered random. It is only the interpretation we put on this order that would make most people not consider it random. This is the same as the idea that the lottery is less likely to throw up the numbers one to six, whereas in reality this combination is just as probable as any other.

In theory, you could shuffle a deck so that the cards emerged in number order (all the aces, followed by all the twos, followed by all the threes and so on), with each set of numbers in the same suit order (say spades, hearts, diamonds and clubs). Most people would not consider this random, but it is just as likely to appear as any other specific arrangement of cards (very unlikely). This is an extreme example but you could come up with an arrangement that would be seen as random when playing bridge because it offered the players no advantage, but wouldn’t be random for poker because it produced consistently strong hands.

But what would a casino consider random? Mathematicians have developed several ways of measuring how random something is; variation distance and separation distance are two such measures. Both give a value of 1 for a deck of cards in perfect order (sorted by numbers and suits) and lower values for more mixed arrangements. When the value falls below 0.5, the deck is considered randomly shuffled. More simply, if you can guess too many cards in a shuffled deck, then the deck is not well shuffled.

The Best (and Worst) Ways to Shuffle Cards – Numberphile.

Persi Diaconis is a mathematician who has been studying card shuffling for over 25 years. Together with Dave Bayer, he worked out that to produce a mathematically random pack, you need to use a riffle shuffle seven times if you’re using the variation distance measure, or 11 times using the separation distance. The overhand shuffle, by comparison, requires 10,000 shuffles to achieve randomness.

“The usual shuffling produces a card order that is far from random,” Diaconis has said. “Most people shuffle cards three or four times. Five times is considered excessive”.

But five is still lower than the number required for an effective shuffle. Even dealers in casinos rarely shuffle the required seven times. The situation is worse when more than one deck is used, as is the case in blackjack. If you are shuffling two decks, you should shuffle nine times and for six decks you need to shuffle twelve times.
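The riffle shuffle that Diaconis and Bayer analysed is usually modelled mathematically as the Gilbert-Shannon-Reeds shuffle. Here is a minimal simulation of seven such riffles (a sketch of the model, not of their analysis):

```python
import random

def riffle(deck):
    """One Gilbert-Shannon-Reeds riffle: cut the deck binomially, then drop
    cards from each half with probability proportional to the half's size."""
    cut = sum(random.random() < 0.5 for _ in deck)  # binomial cut point
    left, right = deck[:cut], deck[cut:]
    shuffled = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            shuffled.append(left.pop(0))
        else:
            shuffled.append(right.pop(0))
    return shuffled

deck = list(range(52))
for _ in range(7):  # the seven riffles Diaconis and Bayer recommend
    deck = riffle(deck)
print(deck[:10])
```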

Shuffle like a casino dealer.

Many casinos now use automatic shuffling machines. This not only speeds up the games but also means that shuffles can be more random, as the machines can shuffle for longer than the dealers. These machines also thwart techniques such as card counting and card tracking.

But even these machines are not enough. In another study, Diaconis and his colleagues were asked by a casino to look at a new design of card shuffling machine that the casino had built. The researchers found that the machine was not sufficiently random, as it simply did not shuffle enough times. But using the machine twice would resolve the problem.

So next time you’re at a casino, take a look at how many times the dealers shuffle. The cards may not be as random as you think they are, which could be to your advantage.

Graham Kendall, Professor of Computer Science and Provost/CEO/PVC, University of Nottingham

This article was originally published on The Conversation. Read the original article.

Can maths help you win at roulette?

Graham Kendall, University of Nottingham

Albert Einstein supposedly once said: “No one can win at roulette unless he steals money from the table while the croupier isn’t looking.”

Although I wouldn’t normally question Einstein, this statement isn’t true. In fact, you can use Einstein’s specialist subject, physics, to help you win. Or you can find a biased wheel that makes some numbers more likely to come up.

What Einstein actually meant was that there is no mathematical trick that can help you win at roulette. Each spin is an independent trial and, in the long run, the casino will win. This is different to a game such as Blackjack where the probabilities change as cards are dealt.

But some believe that it is possible to exploit the way the roulette wheel, and the betting cloth, is laid out to give themselves an advantage. The idea is that you can make bets on the layout in a way that you are guaranteed to win. But is this really possible?

Roulette wheel layout

Like a dartboard, the layout of a roulette wheel did not come about by accident. It was carefully planned and exhibits certain properties. In fact, there are two different layouts: an American wheel and a European wheel. The two layouts are shown below.

American roulette wheel layout.
Wikimedia Commons

European Roulette Wheel Layout.
Wikimedia Commons

Notice that the American wheel has two zeroes. This is important as it doubles the advantage for the casino. On a European wheel you would expect to lose, in the long run, 2.7% of any money you bet with. On an American wheel you can expect to lose 5.26% (if you are interested in the mathematics of roulette, the video at the end will show you how these odds are calculated).
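The arithmetic behind these percentages is short enough to sketch here (a Python illustration, assuming a single-number bet paid at 35-1):

```python
def house_edge(pockets, payout=35):
    """Expected loss per unit staked on a single-number bet paid at 35-1."""
    p_win = 1 / pockets
    return -(p_win * payout - (1 - p_win))

print(f"European wheel (37 pockets): {house_edge(37):.2%}")  # → 2.70%
print(f"American wheel (38 pockets): {house_edge(38):.2%}")  # → 5.26%
```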

The numbers are arranged in a different order on each wheel but there are some similarities in the patterns. On both wheels, the red and black numbers alternate around the wheel, although if you removed the zeroes, the American wheel would have consecutive reds and blacks. The wheels are also structured so that the low numbers (1-18) and the high numbers (19-36) should alternate as much as possible.

On a European wheel, this is only violated where the 5 sits next to the 10 (both low numbers). On the American wheel, there are many examples where this rule is violated. It is for this reason that the American wheel is considered not as balanced as the European wheel. Both wheels also try to distribute odd and even numbers as evenly as possible. But again there are a number of violations of this rule on both wheels.

On the European wheel there are two other interesting symmetries. First, all the low red numbers and black high numbers are on one side of the zero, and the high red numbers and low black numbers are on the other side. Second, the sequence 29-7-28-12-35-3-26-0-32 contains no numbers between 13 and 24 (the second dozen). You can place a bet on the whole of the second dozen, with odds of 2-1.

European roulette layout.
Wikimedia Commons

So, can we beat the maths?

A simple search on Google will return many (possibly millions) of systems for playing (and supposedly winning) roulette. Some easy, some complicated, some well described, some not so.

A system should really be a combination of a playing strategy and a money management strategy. Perhaps the best known money management strategy is the Martingale system. This system is guaranteed to win money as long as you have enough of a bankroll to double your bet after every loss and you do not hit the table limit – which you quickly will. The Martingale system is probably the quickest way to bankruptcy known to man.
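To see how quickly the doubling gets out of hand, here is a minimal simulation of the Martingale system on a European wheel (the bankroll, base bet and table limit are invented for illustration):

```python
import random

def martingale(bankroll, base_bet=1, table_limit=500, spins=1000, pockets=37):
    """Bet on red and double the stake after every loss; stop when the
    required bet exceeds the bankroll or the table limit."""
    bet = base_bet
    for _ in range(spins):
        if bet > bankroll or bet > table_limit:
            return bankroll  # cannot place the required bet: the system has failed
        if random.randrange(pockets) < 18:  # 18 red pockets on a European wheel
            bankroll += bet
            bet = base_bet
        else:
            bankroll -= bet
            bet *= 2
    return bankroll

random.seed(1)  # fixed seed so the run is repeatable
results = [martingale(100) for _ in range(1000)]
print(sum(r < 100 for r in results), "of 1000 sessions ended down")
```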

Whatever betting strategy, and money management strategy, you choose, they all suffer from the same fate. Assuming that each number on the wheel has the same probability of being selected – meaning the wheel is not biased – the maths means the casino will always win. The system may look good, and may work in the short term, but when one of the numbers comes up that you have not bet on you will lose and the casino will move towards its win expectation (2.7% or 5.26%).

Some systems involve betting on many numbers, perhaps 20. In this case, you will win quite often as you are covering more than half of the numbers. But when none of your 20 numbers turns up (which will happen almost half the time) you lose all 20 of the bets you have made. This will often wipe out any wins to date.
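The expected value of such a system is easy to work out. Assuming one unit on each of 20 numbers on a European wheel (the stake sizes are invented for illustration):

```python
# One unit on each of 20 numbers, European wheel, single numbers paid at 35-1
p_win = 20 / 37
ev = p_win * (35 - 19) - (1 - p_win) * 20  # a win pays 35 but loses the other 19 bets
print(f"chance of winning a spin: {p_win:.1%}")    # → 54.1%
print(f"expected loss per spin: {-ev:.3f} units")  # → 0.541
```

The expected loss of 20/37 units per spin is exactly 2.7% of the 20 units staked, matching the house edge quoted above.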

Any system, so far devised, can be analysed to show that there is a win expectation for the casino. The following video shows the maths.

The mathematics of roulette.

You might as well place a single chip on the same number every time and hope that it appears more than it should during the short time that you are playing.

We can dress up the layout of the wheel, the layout of the betting cloth, our number selection and our money management system however we like, but the maths is always there, quietly working against us. You might as well just have fun, pick random numbers and trust to Lady Luck. Either that, or do as Einstein suggested and steal chips (not that we’d recommend it).

Graham Kendall, Professor of Computer Science and Provost/CEO/PVC, University of Nottingham

This article was originally published on The Conversation. Read the original article.

How Isaac Newton could help you beat the casino at roulette

Graham Kendall, University of Nottingham

Imagine walking into a casino with a computer strapped to your chest. Solenoid electromagnets thump against your body telling you where to place your bet on the roulette table. Suddenly, you start getting electric shocks. You rush to the toilet to undertake emergency repairs, hoping that the casino staff do not realise what is happening.

In the late seventies, graduate student Doyne Farmer and colleagues did just that – with purpose-built computers that could predict where a roulette ball would land. The project, described in the book The Newtonian Casino (published as The Eudaemonic Pie in the US), was, however, difficult and fraught with technical problems. The team never really found a reliable way of doing it. But decades later, is it any closer to becoming a reality?

In a game of roulette, the croupier spins a wheel in one direction and a ball in the other direction. Players then place bets on where the ball will land by choosing either a single number, a range of numbers, the colours red or black, or odd or even numbers.

Our understanding of the physics behind the movement of the ball and wheel is pretty solid – governed by Newton’s laws of motion. As the ball slows, gravity takes hold and it falls into one of the numbered compartments. The moment when the ball will leave the rim is predictable. However, once it does, the route it takes to a numbered slot is less so, because the ball bounces around as it strikes various obstacles.

Every roulette wheel is slightly different. Atmospheric conditions continually change and the wheel itself has features that encourage randomness – such as the size of the frets between the numbers and the diamond-shaped obstacles that intercept the ball as it falls down to the wheel. This means that you cannot predict the exact number where the ball will land. But you only need to know which area of the wheel the ball will land in to gain a massive advantage over the casino – more than 40%. This is a huge swing from the 5.26% margin that US casinos have over players – often referred to as the house edge. In Europe it is only 2.7%, as the wheel has only one zero (a US wheel has two zeroes).

Sweaty experiments

When Farmer and his team entered the casino for the first time, two people were wearing computers. One had a computer built into his shoes, with the task of inputting data by tapping switches under the toes. This computer performed two main functions. The first was to adjust parameters for each wheel before a game, such as the rate at which the ball and wheel slowed down, the velocity of the ball when it fell off the track, and whether the wheel exhibited any tilt.

The second job was during live play. The player with the shoe computer tapped the toe switches each time a certain point (typically the double zero) on the wheel passed by and also when the ball passed by. Using this information, the program could calculate the speed of both the wheel and the ball – thus knowing when the ball would start to fall. Knowing the relative positions of the ball and the wheel meant that a prediction could be made about where the ball would finally land. The computer then had to transmit the prediction to the person wearing the second computer. This was achieved by weak radio signals.
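The prediction step can be sketched as a toy model. The code below assumes constant deceleration and uses invented numbers; the real project fitted parameters to each wheel and handled far more detail:

```python
def angular_velocity(t1, t2):
    """Revolutions per second, from the times of two successive passes
    of a fixed reference point (such as the double zero)."""
    return 1.0 / (t2 - t1)

def revs_until_fall(v0, v_fall, decel):
    """Revolutions the ball completes before slowing from v0 to v_fall,
    assuming constant deceleration (a big simplification of the physics)."""
    t = (v0 - v_fall) / decel           # time until the ball leaves the track
    return v0 * t - 0.5 * decel * t**2  # revolutions covered in that time

# Invented numbers: ball at 2 rev/s, falling off at 0.8 rev/s, slowing at 0.1 rev/s^2
print(revs_until_fall(2.0, 0.8, 0.1))  # roughly 16.8 revolutions
```

Comparing the ball's predicted fall position with the wheel's own (similarly extrapolated) position gives the region of the wheel where the ball is likely to land.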

Shoe computer. The Eudaemonic Pie display at the Heinz Nixdorf Museum.
https://en.wikipedia.org/wiki/J._Doyne_Farmer, CC BY-SA

The second computer, strapped to someone else, received the radio signals and conveyed this information to the player by the solenoid electromagnets that thumped that player’s stomach. A code had been developed which relayed the predicted number, with the player placing bets on that number and several numbers either side to account for the randomness. In order that the casinos could not easily see what they were doing, the team altered their betting patterns slightly. For example, not betting on all the consecutive numbers.

However this never gave them the 40% advantage observed in the lab – mainly due to technological problems such as short circuits caused by sweating, wires becoming loose and lost radio connections.

It took several years for the team (which now comprised about 20 people who’d worked on the project in varying degrees) to develop an improved computer system. Both computers were now in custom-built shoes. This not only protected the operator from being electrocuted but also made it harder for the casino to detect. The other innovation was that the computers were set in resin blocks, with only the toe-operated switches and the solenoids (which now drummed against the feet) visible. This was to try and combat problems such as loose wires and sweating.

Binion’s casino.
Ken Lund/Flickr, CC BY-SA

They then entered Binion’s casino in Las Vegas ready for an all-out assault. Once the parameters had been set, the first prediction was to bet in the third octant – which included the numbers 1, 13, 24 and 36. The ball landed in 13 and the team got paid off at 35-1. The years of work looked promising, but the solenoids eventually started to act randomly, so the accurate predictions from one computer were not being transmitted to the other. The team suspected it was due to the electronic noise present in casinos. Eventually they had no choice but to abandon the idea.

Would it work today?

The main issue in the late seventies and early eighties was that the team had to build their own computers from scratch, literally – they had to design the computer, buy all the components and get busy with a soldering iron. These days, the computers are readily available, as the following video shows.

Technology has evolved. These days, all the required processing power could be fitted into a single unit. You could imagine a system based on a mobile phone where the camera videos the ball and the wheel and image processing software extracts the relevant data so that the prediction software can calculate the final position of the ball.

But certain challenges still remain. If several people are involved, which is the best way to avoid detection, how can you work as a team and pass data? Perhaps the use of free wifi in many casinos could be a solution? Another problem is how best to hide the fact that you are using an electronic device to predict where the ball will land, when you need to input data and receive the prediction. Here, suitably connected glasses may be one workaround, used in tandem with toe-operated switches.

The hardest challenge, however, is the casino itself. Casinos are certainly unlikely to simply let you point a camera at the roulette wheel, especially if you are winning. If they spotted you, they would be likely to ask you to leave, as it is often illegal to use such devices. But with a little creativity it may not be long before scientists prove they are able to outsmart casinos.

Graham Kendall, Professor of Operations Research and Vice-Provost, University of Nottingham

This article was originally published on The Conversation. Read the original article.

How to beat the casino – legally

Graham Kendall, University of Nottingham

If there’s one thing everybody knows about gambling it’s that the house always wins. And while it is true that casinos always make a profit, there are a number of ways to cheat the system – some of which are actually perfectly legal.

Half a century ago, mathematician Edward Thorp published a groundbreaking book outlining how a player could use “card counting” to get an advantage in the game Blackjack by keeping track of the cards left in a deck. Ever since, casinos have been trying to eradicate card counting while card counters are getting increasingly skilled at not getting caught. So is it possible to outplay casinos today? And what will it be like in the future?

Winning Blackjack Hand.
Wikimedia Commons

Casinos are businesses and operate by building in a margin – often referred to as the house edge. If you play roulette and bet on a single number you will be paid at odds of 35-1 when the true odds are 36-1 in Europe and 37-1 in the US. The fact that you are receiving less than the true odds is the house edge and explains why casinos make money in the long term. Of course, some people have to win, otherwise casinos would cease to exist.

Advantage players

What casinos don’t like are “advantage players” – people seeking to have an edge over the house. Sometimes this involves cheating and/or illegal activities ranging from past posting (making a bet after the time when no more bets are to be taken) to collaborating at the poker table and using a computer to help make decisions.

Card counting, however, is legal. In Blackjack, the aim of the player is to achieve a hand of cards whose points add up nearer to 21 than the dealer’s hand, but without exceeding 21. Many hands are played from the same deck of cards, so what happens in one hand will influence what happens in future hands. As an example, if a ten has been played from the pack then it cannot appear in the next hand. This is different from other games, such as roulette, where the outcome of one spin has no effect on the next spin.

Professor Thorp and his contribution to card counting.

Card counting is based on the fact that a large proportion of high cards (such as tens, jacks, queens and kings, which are all worth ten points) left in the unplayed deck statistically improves the player’s chances. This is because a player can decide not to draw a new card to a hand such as 16, but the casino is forced to, as it follows strict rules. If there are a high proportion of high cards left in the unplayed deck of cards, the dealer has more chance of busting (going over 21). This can be combined with “basic strategy” – developed from computer simulations of millions of blackjack hands – which tells the player the best action to take for each possible card combination.

Combining card counting and basic strategy can help a player convert the (long term) house edge from 2.7%, in favour of the casino, to about a 1% advantage to the player. Of course, once you have this advantage you can increase your bet.

To give a simple example, if you were playing basic strategy and were dealt a ten and a six, and the dealer had a three showing (one of the dealer’s cards is visible to the player), you would stand (not take another card) as you hope that the dealer would draw a ten and bust. If you were card counting, and you knew that more low cards had been played, you might decide to increase your stake at this point.
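The article doesn't name a particular counting system, but the widely documented Hi-Lo system is a simple example of keeping a running count (a sketch, with cards written as single characters and tens as 'T'):

```python
def hi_lo_count(cards_seen):
    """Running count in the Hi-Lo system: +1 for low cards (2-6),
    0 for 7-9, and -1 for tens, picture cards and aces."""
    count = 0
    for card in cards_seen:
        if card in "23456":
            count += 1
        elif card in "TJQKA":  # 'T' stands for ten
            count -= 1
    return count

# Three low cards, three high cards and one neutral card cancel out:
print(hi_lo_count("2T5K9A3"))  # → 0
```

A high positive count means many high cards remain in the deck, which is when a counter would raise their bet.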

Evolving battle

Casinos have introduced a number of measures to deter card counting. These include spotting those doing it and simply banning them from playing, or even from entering the casino. Another approach is to increase the number of decks from one to (typically) six, or even eight. Some casinos also shuffle the cards after only about 75% have been played or shuffle them constantly using automatic shufflers.

You might wonder why casinos don’t simply withdraw blackjack. Well, it remains a popular game, and one that is still profitable. There are also many would-be card counters who are not actually that good at it, and they provide income to the casinos.

Many blackjack players have fought back against such measures, arguing that casinos should allow gamblers to use skill when playing the game. As a card counter operating on their own is relatively easy to spot (intense concentration, increasing bets and so on), a group of students from MIT showed that card counting could be done successfully in teams. The idea is that somebody else counts the cards – they may not even be sitting at the table. When the count reaches an agreed value, they signal to another player, who joins the table to start betting. This is a lot more difficult to detect but casinos may stop players joining the game until after a shuffle to combat such a strategy.

Breaking Vegas: the true story of The MIT blackjack team.

Other players have used shuffle tracking, where blocks of cards are tracked so that you have some idea when they will appear. If you are given the option to cut the pack, you try and cut the pack near where you think the block of cards you are tracking is so that you can bet accordingly. A variant on this is to track aces as, if you know when one is likely to appear, you have a distinct advantage over the casino.

It’s been 50 years since Thorp’s book, and it is unlikely that the war of wills between blackjack players and casinos will end any time soon. Some of our work has investigated how artificial neural networks (simple models of the human brain) could help evolve blackjack strategies. This was done by playing thousands of blackjack hands and the computer learning what to do in any given situation, getting better each time. There is a lot of scope to see if automated computer programs could learn even more sophisticated strategies.

Graham Kendall, Professor of Operations Research and Vice-Provost, University of Nottingham

This article was originally published on The Conversation. Read the original article.

What problems will AI solve in future? An old British gameshow can help explain

Ian Miguel, University of St Andrews and Patrick Prosser, University of Glasgow

The Crystal Maze, the popular UK television show from the early 1990s, included a puzzle that is very useful for explaining one of the main conundrums in artificial intelligence. The puzzle appeared a few times in the show’s Futuristic Zone, one of four zones in which a team of six contestants sought to win “time crystals” that bought time to win prizes at the Crystal Dome at the end of the show.

Never solved in the two-minute time frame, the puzzle was based on a network of connected red circles (see clip below). On the wall was written a clue: “No consecutive letters in adjacent circles”. The letters A to H were printed on circular plates which could be fitted onto each circle.

So what is the right approach? We might start by considering which circles are hardest to label. With a little thought, you might choose the two middle circles, since they have the most connections. Now consider which letters might best be put on them: A and H are natural candidates because they each have only one neighbour (B and G, respectively). We might put them into the grid like this:

Ian Miguel

We can now do some deduction to eliminate incompatible possibilities for the other circles. For example, the top-left circle is connected to both of the central circles. Since no consecutive letters can appear in connected circles, it can’t now contain B or G. Similar reasoning can be applied to the top-right, bottom-left, and bottom-right circles:

Ian Miguel

The leftmost and rightmost circles have to be treated differently, since each is only adjacent to one central circle. On the left we can rule out B, and on the right we can rule out G:

Ian Miguel

Look carefully at the remaining options and only the leftmost circle still has G as a possibility, and only the rightmost circle has B. Once we put them in place, we can remove further possibilities from the adjacent circles:

Ian Miguel

It is now time to make another guess. It seems reasonable to start with the top-left circle and try its first possibility: C. This allows us to rule out D from the adjacent circle and C from the bottom left. If we now guess E for the top-right circle, the bottom-left circle has only one possibility left, D, which leaves just F for the bottom-right circle. We have a solution:

Ian Miguel

Decisions, decisions

This puzzle is an example of a much wider class of decision-making problems that arise in our lives, such as rostering decisions in a hospital or factory, scheduling buses or trains, or designing medical experiments. To save us the aggravation of coming up with the best solutions, one of the challenges for artificial intelligence is to develop a general way of representing and reasoning about them.

One method is known as the constraint satisfaction problem. Just like our Crystal Maze puzzle, problems that fit this model involve a set of required decisions (“cover each circle with a plate”); a fixed set of possibilities (“use the plates from A to H provided”); and a set of constraints that allow only certain combinations of possibilities (“no consecutive letters in adjacent circles”). If you input the requirements for your particular problem into a piece of software known as a constraint solver, it can then try to solve it. It will do this in much the same way as we solved the puzzle: it combines guessing (we call this “search”) with deduction, ruling out possibilities that cannot be part of a solution based on the decisions made so far.
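The guess-and-deduce procedure can be sketched as a small backtracking solver for the puzzle itself. The adjacency below is reconstructed from the article's description of the diagram, so treat it as an assumption:

```python
# Circles named by position; adjacency reconstructed from the article's
# description (the original image is not shown here).
ADJ = {
    "L":  ["TL", "BL", "M1"],                   # leftmost circle
    "TL": ["L", "M1", "M2"],                    # top-left
    "BL": ["L", "M1", "M2"],                    # bottom-left
    "M1": ["L", "TL", "BL", "M2", "TR", "BR"],  # left central circle
    "M2": ["TL", "BL", "M1", "TR", "BR", "R"],  # right central circle
    "TR": ["M1", "M2", "R"],                    # top-right
    "BR": ["M1", "M2", "R"],                    # bottom-right
    "R":  ["TR", "BR", "M2"],                   # rightmost circle
}
LETTERS = "ABCDEFGH"

def consistent(circle, letter, assignment):
    """A letter fits if no already-labelled neighbour holds a consecutive letter."""
    return all(abs(ord(letter) - ord(assignment[n])) != 1
               for n in ADJ[circle] if n in assignment)

def solve(assignment=None):
    """Backtracking search: guess a letter, deduce, and undo bad guesses."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(ADJ):
        return assignment
    # The article's heuristic: label the most-connected unlabelled circle first
    circle = max((c for c in ADJ if c not in assignment),
                 key=lambda c: len(ADJ[c]))
    for letter in LETTERS:
        if letter not in assignment.values() and consistent(circle, letter, assignment):
            assignment[circle] = letter
            if solve(assignment):
                return assignment
            del assignment[circle]
    return None

solution = solve()
print(solution)  # the central circles end up with A and H, as in the article
```

A real constraint solver adds far stronger deduction (propagation) between guesses, but the guess-deduce-backtrack skeleton is the same.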

The greatest challenge for programmers in this field is that as you increase the size of the input problem, it quickly becomes much harder to find solutions. This is directly related to how the software “guesses” the answer. Although our guesses proved correct in our simple puzzle, in AI they can often lead us down blind alleys. With large problems there can be a vast number of possibilities and a similarly vast number of dead ends.

One key question is whether there is some way of reaching solutions without going down these alleys. As yet, we don’t know. This directly relates to one of the most important open questions in computer science, the P vs NP problem, for which the Clay Mathematics Institute in the US is offering US$1m (£657,000) for a solution. It essentially asks whether every problem whose answer can be checked quickly by a computer can also be quickly solved by a computer.

Until someone solves it, the prevailing view is that it cannot. If so, our software does have to search through all the possible guesses, in which case we need to make it as efficient as possible. One important factor here is the search strategy – which decision we tell the computer to focus on next and which value we assign to it. Also very important is what we decide are the requirements for the particular problem. Mapping our puzzle to a constraint satisfaction template was straightforward, but in real life there are often many different options. Choosing the right strategy and model can be the difference between finding a quick solution and failing in any practical amount of time.

We have now reached the stage where the latest constraint-solving software can solve far more complex practical problems than, say, ten years ago. It was used to plan the scientific activities of the Philae comet lander last year, for instance. It also offers a better way of organising evacuation schedules for large-scale disasters.

Constraint solving has found most success with scheduling problems, but there are other similar AI tools that are more useful for other types of questions. We won’t go into them here, but they include the likes of propositional satisfiability, evolutionary algorithms and mathematical programming techniques. The job of specialists is to analyse a problem, identify which combination of tools will be the most successful for a particular case, and put together a bespoke piece of software. Once computers can do this analysis and identification, hopefully only a few years in the future, we will have made a huge leap forward. Meanwhile, the battle to make each of these tools as powerful as possible continues.

Ian Miguel, Professor of Computer Science, University of St Andrews and Patrick Prosser, Senior Lecturer in Computer Science, University of Glasgow

This article was originally published on The Conversation. Read the original article.