Tag Archives: Chess

How football clubs fail and succeed after reaching England’s Premier League

Rob Wilson, Sheffield Hallam University and Dan Plumley, Sheffield Hallam University

Football always divides opinion. As the latest English season draws to a close and the Football League playoffs take centre stage, there will be some who grumble about the format. They will say how “unfair” it is that a club can finish third in the league in the regular season, yet be denied promotion by a club that finished sixth after a late surge. Set that aside, though, and you are left with the pure drama. It is win or bust, and it prolongs the excitement of the regular season, giving more teams more to play for in a crescendo of late-season fixtures.

The playoff concept was borrowed from US team sports, where this end-of-season competition is a regular feature, attracting huge media exposure and significant commercial interest. In England, for thirty years now, the playoffs have determined the final promotion spot within each division of the Football League. Four teams first battle to reach the playoff final at Wembley Stadium, then face a nerve-jangling 90 minutes or more to secure a step up the football pyramid.

The inspiration from US sports is important. Put aside the passion, excitement, disappointment and any sense of injustice for a moment. The playoffs can be of huge importance financially. A playoff victory can have the power to stabilise a club’s financial position, clear debts and allow significant investment in players. The pot of gold at the end of this rainbow has largely been filled with TV money. The most recent domestic deal was signed for £5.14 billion. Add in the international rights and this swells to £8.4 billion.

Lower down the leagues, the money on offer is not eye-watering. Our conservative estimates put the prize at around £500,000 for promotion from League Two to League One and around £7m for promotion from League One to the Championship. However, the prize on offer for promotion to the Premier League is staggering and has led to the Championship playoff final being labelled the “richest game in football” with a value of around £170m-£200m. Huddersfield, Reading, Fulham and Sheffield Wednesday are facing off for the jackpot this time around.

Revenue generator

The often-quoted £200m figure is a little misleading as it takes into account so-called parachute payments which only kick in if a club is relegated the following season. Clubs will receive a minimum uplift of £120m though, which can be triple or quadruple their turnover. In fact, the chart below shows that when Bournemouth was promoted in 2015, the club saw a six-fold increase in revenue, essentially driven by additional broadcasting fees.

When the prize is so very shiny, straining to reach for it presents a strategic dilemma for clubs. The boost to revenue from promotion can stabilise a club financially, just like it did for Blackpool in 2010, helping it to (theoretically) secure a long-term future. In Blackpool’s case, however, on-field performance was destabilised and supporters became disenfranchised. Seven years later, Blackpool now hope to be promoted back to League One this season, via the playoffs.

Promotion can also increase the level of expectation and create pressure to retain a position in the world’s richest league. The club can get excited and the board can sanction acquisitions that fall outside a reasonable budget and seriously threaten the short and even long-term financial future of the club. This recalls the experience at Queens Park Rangers, which somehow accumulated £143m of losses despite generating about £250m in revenue during their stay in the Premier League. QPR managed to spend a startling £285m on wages and £114m on player purchases, while their level of debt surged to a peak of £194m.

Prepare to fail

The third option is to rein in your ambition, develop a strategic plan, grow incrementally and accept that you may become a yo-yo club like Burnley, or survive by the skin of your teeth like Stoke City.

Either way, the club builds a longer term future at the top table which benefits everyone. Survival through this approach means that a club receives at least another £120m so can build still further and become a stable Premiership club. But even failing and being relegated means a club will still have money to spend, receive a parachute payment (of another £45m or so) and spend a season in the Championship with turnover in excess of three times that of a standard team. This provides a significant competitive advantage over your rivals as Newcastle United showed this year – the Magpies spent big and gained promotion at the first attempt.

Ultimately, the direction of travel comes down to owner objectives, which can differ depending on their background and motivations. One thing that is clear: spending beyond your means does not always guarantee success.

The chart above allows us to examine a club’s transfer spending in the year following promotion. It is a confusing picture, but the red bars show the clubs that were relegated the following season, and they demonstrate clearly that spending big is no guarantee of survival. The chart doesn’t show each club’s starting point in terms of player quality, but how the money is spent is plainly crucial, and the chart also shows that you can survive without throwing the kitchen sink at player acquisitions.

There is broader evidence that the most successful clubs, with the most money, do tend to outperform, but the trade-off between financial and sporting performance is hazardous. Many clubs now choose to chase multiple and escalating objectives: recall the devastating failure at Leeds United in 2003, when creditors were owed almost £100m after the club chased the dream of playing in the Champions League. You chase that dream at your peril is the warning; plan carefully, and spend wisely is the advice to your board. Relegation doesn’t have to be a trapdoor, but promotion can be a trap.

Rob Wilson, Principal Lecturer in Sport Finance, Sheffield Hallam University and Dan Plumley, Senior Lecturer in Sport Business Management, Sheffield Hallam University

This article was originally published on The Conversation. Read the original article.

Does playing chess make you smarter? A look at the evidence

Giovanni Sala, University of Liverpool and Fernand Gobet, University of Liverpool

The stereotype of the chess player is someone who is smart, logical and good at maths. This is why so many parents around the world are keen to get their children playing chess – in the hope that the game might help to boost their son or daughter’s intelligence levels and help them succeed in a wide variety of subjects.

But apart from chess being a great game, with a history rooted in the military culture of eastern India, is there actually any evidence to show that playing chess can make you smarter?

In a previous article, we showed that chess players exhibit, on average, superior cognitive ability compared to non-chess players.
And the skills needed to play chess have also been shown to correlate with several measures of intelligence – such as fluid reasoning, memory, and processing speed.

But while the existence of a relationship between general cognitive ability and chess-skill is clear, is this simply because intelligent people are more likely to engage in the game of chess, or does engaging in chess make people smarter?

Brain game

The notion that playing chess makes you smarter goes something like this: chess requires concentration and intelligence, and since mathematics and literacy require the same general skills, practising chess must also improve one’s academic achievement.

With this idea in mind, the Institute of Education conducted a large investigation to test the effects of chess instruction on the academic skills of nearly 4,000 British children.


The recently released results were disappointing – it seemed chess had no effect on children’s attainment levels in mathematics, literacy, or science.

Promptly, the chess community questioned the reliability of the results, particularly given that other studies offer a more optimistic picture about the academic benefits of chess instruction.

Assessing the evidence

The chess community is probably right to criticise the recent study, as it suffers from several methodological shortcomings that may well invalidate the results.

Before the results were published, we carried out a review of all the studies in the field. Our results showed some moderate effects of chess instruction on cognitive ability and academic achievement – especially mathematics.

Does chess need intelligence?

And yet, we still need to be cautious in interpreting these results as a positive indication of the power of chess on cognitive or academic skills. This is because most of the reviewed studies compared the effect of chess with groups doing no alternative activities.

This is a problem because research has shown that the excitement and fun induced by novel activities can produce a temporary positive effect on test scores – a placebo effect.

Crucially, when compared to an alternative activity – such as checkers or sports – chess did not show any significant effect on children’s skills. So, it could well just be that the observed positive effects of chess instruction are merely due to placebo effects.

Chess notes

What all this shows is that it is unlikely chess has a significant impact on overall cognitive ability. So while it might sound like a quick win – that a game of chess can improve a broad range of skills – unfortunately this is not the case.

This failure of a trained skill to generalise in fact occurs in many areas beyond chess – such as music training, which has been shown to have no effect on non-music cognitive or academic abilities. The same applies to video game training, brain training, and working memory training, among others.

Ancient intelligence or just a good game?

The fact that skills learned by training do not transfer across different domains seems to be universal in human cognition. In other words, you get better, at best, at what you train in – which may sound just like good old-fashioned common sense.

But although expecting chess to enhance children’s cognitive ability and overall academic achievement is just wishful thinking, this doesn’t mean it can’t still add value to a child’s education.

Clearly, playing chess involves some level of arithmetical and geometrical skill, and designing mathematical games or exercises with chess material can still be a simple and fun way to help children to learn.

Giovanni Sala, PhD Candidate – Cognitive Psychology, University of Liverpool and Fernand Gobet, Professor of Decision Making and Expertise, University of Liverpool

This article was originally published on The Conversation. Read the original article.

Why football, not chess, is the true final frontier for robotic artificial intelligence

Daniel Polani, University of Hertfordshire

The perception of what artificial intelligence was capable of began to change when chess grand master and world champion Garry Kasparov lost to Deep Blue, IBM’s chess-playing program, in 1997. Deep Blue, it was felt, had breached the domain of a cerebral activity considered the exclusive realm of human intellect. This was not because of something technologically new: in the end, chess was felled by the brute force of faster computers and clever heuristics. But if chess is considered the game of kings, then the east Asian board game Go is the game of emperors.

Significantly more complex, requiring even more strategic thinking, and featuring an intricate interweaving of tactical and strategic components, it posed an even greater challenge to artificial intelligence. Go relies much more on pattern recognition and subtle evaluation of the general positions of playing pieces. With a number of possible moves per turn an order of magnitude greater than in chess, any algorithm trying to evaluate all possible future moves was expected to fail.

Until the early 2000s, programs playing Go progressed slowly, and could be beaten by amateurs. But this changed in 2006, with the introduction of two new techniques. First was the Monte Carlo tree search, an algorithm that rather than attempting to examine all possible future moves instead tests a sparse selection of them, combining their value in a sophisticated way to get a better estimate of a move’s quality. The second was the (re)discovery of deep networks, a contemporary incarnation of neural networks that had been experimented with since the 1960s, but which was now cheaper, more powerful, and equipped with huge amounts of data with which to train the learning algorithms.
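To make the sampling idea concrete, here is a rough Python sketch of flat Monte Carlo evaluation – a much-simplified relative of the Monte Carlo tree search described above, without the tree. Everything in it is our own illustrative assumption: the game is a toy take-away game (players alternately take one to three sticks; whoever takes the last stick wins), not Go, and the playout count is arbitrary.

```python
import random

def random_playout(pile, my_turn, rng):
    """Play uniformly random moves to the end of the toy game.

    Returns True if *we* are the player who takes the last stick.
    """
    while pile > 0:
        pile -= rng.randint(1, min(3, pile))
        if pile == 0:
            return my_turn
        my_turn = not my_turn
    return not my_turn  # pile was empty on entry: the previous mover won

def best_move(pile, playouts=2000, seed=0):
    """Estimate each legal move's win rate by sampling random playouts,
    then return the move with the best average result."""
    rng = random.Random(seed)
    scores = {}
    for move in range(1, min(3, pile) + 1):
        wins = sum(random_playout(pile - move, my_turn=False, rng=rng)
                   for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)
```

A full Monte Carlo tree search would additionally grow a tree of positions and concentrate its playouts on the most promising branches, but the principle – sampling continuations instead of exhaustively enumerating them – is the same.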

The combination of these techniques saw a drastic improvement in Go-playing programs, and ultimately Google DeepMind’s AlphaGo program beat Go world champion Lee Sedol in March 2016. Now that Go has fallen, where do we go from here?

The future of AI is in physical form

Following Kasparov’s defeat in 1997, scientists decided that the next challenge for AI should not be the conquest of another cerebral game. Rather, AI needed to be physically embodied in the real world – and the chosen arena was football.

Football is easy for humans to pick up, but to have a humanoid robot running around a field on two legs, seeing and taking control of the ball, communicating under pressure with teammates, and all mostly without falling over, was considered completely out of the question in 1997. Only a handful of laboratories were able to design a walking humanoid robot. Led by Hiroaki Kitano and Manuela Veloso, the ambitious goal set that year was to have by 2050 a team of humanoid robots able to play a game of football against the world champion team according to FIFA rules, and win. And so the RoboCup competition was born.

The RoboCup tournament held its 20th competition in Leipzig this year. Its goal has always been to improve and challenge the capacity of artificial intelligence and robotics, not in the abstract but in the much more challenging form of physical robots that act and interact with others in real time. In the years since, many other organisations have recognised how such competitions boost technological progress.

The first RoboCup featured only wheeled robots and simulated 2D football leagues, but soon leagues that permitted Sony’s four-legged AIBO robot dogs were introduced and, since 2003, humanoid leagues. In the beginning, the humanoids’ game was quite limited, with very shaky robots attempting quivering steps, and where kicking the ball almost invariably caused the robot to fall. In recent years, their ability has significantly improved: many labs now boast five or six-a-side humanoid robot teams.

No ordinary ballgame

In order to push competitors towards the goal of a real football match by 2050, the conditions are made harder every year. Last year, the green carpet was replaced by artificial turf, and the goalposts and the ball were coloured white. The turf makes it harder for the robots to maintain stability, and recognising white goals and a white ball is a much tougher perception task. So while the robots may seem less capable this year than the year before, it’s because the goalposts are moving.

The tasks involved in playing football, although much more intuitive to humans than chess or Go, are a major challenge for robots. Technical problems of hitherto unimaginable complexity have to be solved: timing a kick while running, identifying the ball against a glaring sun, running on wet grass, providing the robot with sufficient energy for 45 minutes’ play, and even ensuring that the materials that go into constructing a robot don’t disintegrate during a forceful game. Other problems to be solved will define important aspects of our life with robots in the future: when a robot collides with a human player, who can take how much damage? If humans commit fouls, may a robot foul back?

RoboCup offers up in miniature the problems we face as we head towards intelligent robots interacting with humans. It is not in the cerebral boardgames of chess or Go, but here on the pitch in the physical game of football that the frontline of life with intelligent robots is being carved out.

Daniel Polani, Professor of Artificial Intelligence, University of Hertfordshire

This article was originally published on The Conversation. Read the original article.

How to get ants to solve a chess problem

Graham Kendall, University of Nottingham

Take a set of chess pieces and throw them all away except for one knight. Place the knight on any one of the 64 squares of a chess board.

Can you make 63 legal moves so that you visit every square on the chess board exactly once? As a reminder, a knight moves two squares in a straight line, then makes a ninety-degree turn and moves one further square. It might seem like a hard task, but this set of moves, called the knight’s tour, can be achieved in too many ways to count.
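As a concrete aside (our own illustration, not part of the original puzzle), the move rule and the tour condition are easy to state in a few lines of Python:

```python
# The eight (row, column) offsets a knight may jump.
KNIGHT_JUMPS = [(2, 1), (2, -1), (-2, 1), (-2, -1),
                (1, 2), (1, -2), (-1, 2), (-1, -2)]

def legal_moves(square):
    """All squares a knight on `square` can reach without leaving the board."""
    r, c = square
    return [(r + dr, c + dc) for dr, dc in KNIGHT_JUMPS
            if 0 <= r + dr < 8 and 0 <= c + dc < 8]

def is_knights_tour(squares, closed=False):
    """True if `squares` visits all 64 squares once via legal knight moves.

    A closed tour must additionally allow a legal move from the last
    square back to the first.
    """
    if len(squares) != 64 or len(set(squares)) != 64:
        return False
    if any(b not in legal_moves(a) for a, b in zip(squares, squares[1:])):
        return False
    return not closed or squares[0] in legal_moves(squares[-1])
```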

The knight's tour

If you are able to make the 63 moves and end up on a square from which you can move back to the original square with the 64th legal move, then this is known as a closed tour. Other tours are called open tours.

Mathematicians have pondered how many closed tours exist, and they have come up with an astonishing number: more than 26 trillion. There are so many more open tours that we do not know the exact number.

Both Philip Hingston and I were so captivated by the knight’s tour problem that we wanted to find a different way to solve it. We found that motivation in nature – specifically in ants.

Ants use a certain pattern, or algorithm, to forage for food. This algorithm can be used to tackle many types of problems, including the Travelling Salesman Problem and Vehicle Routing Problems. Philip and I wondered if we could use the ant colony optimisation algorithm to solve the knight’s tour problem.

Here’s how that algorithm works: a computer program is used to simulate a population of ants. These ants are assigned the task of finding a solution to a problem. As each ant goes about its task, it lays a pheromone trail – a smelly substance that ants use to communicate with each other. In the simulated algorithm, the most successful ants (the ones that solve the problem better) lay more pheromone than those that perform poorly.


We repeat this procedure many times (perhaps millions of times). Through repetitions, the pheromone trails on good solutions increase and they decrease on the poorer solutions due to evaporation, which is also programmed in the simulation algorithm.

In the simulation to solve the knight’s tour problem, the ants could only make legal knight moves and were restricted to stay within the confines of the chess board. If an ant successfully completes a full tour, we reinforce it by depositing more pheromone along that tour than along tours that were left incomplete.

Ants attempting tours later in the simulation are more likely to follow paths with higher levels of pheromone. This means that they are more likely to make the same moves as previously successful ants.

There is a balance to be struck. If the ants follow the successful ants too rigidly, the algorithm will quickly converge to a single tour. If we push the ants too hard not to follow the pheromone of previous ants, they will just act randomly. So it is a case of tuning the algorithm’s parameters to try to find a good balance.
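Putting those pieces together, the whole procedure can be sketched in a few dozen lines of Python. This is our own toy reconstruction: the colony size, evaporation rate and deposit rule are illustrative assumptions, not the settings from the published work, and a serious implementation would be considerably more refined.

```python
import random

JUMPS = [(2, 1), (2, -1), (-2, 1), (-2, -1),
         (1, 2), (1, -2), (-1, 2), (-1, -2)]

def moves(r, c):
    """Legal knight moves from (r, c) that stay on the 8x8 board."""
    return [(r + dr, c + dc) for dr, dc in JUMPS
            if 0 <= r + dr < 8 and 0 <= c + dc < 8]

def run_colony(ants=30, iterations=100, evaporation=0.1, seed=0):
    pher = {}        # pheromone level on each directed move (from, to)
    best = []        # longest tour seen so far
    rng = random.Random(seed)
    for _ in range(iterations):
        tours = []
        for _ in range(ants):
            pos = (rng.randrange(8), rng.randrange(8))
            tour, visited = [pos], {pos}
            while True:
                options = [m for m in moves(*pos) if m not in visited]
                if not options:       # ant is stuck (or the tour is complete)
                    break
                # Pick the next move with probability proportional to pheromone.
                weights = [pher.get((pos, m), 1.0) for m in options]
                pos = rng.choices(options, weights=weights)[0]
                tour.append(pos)
                visited.add(pos)
            tours.append(tour)
            if len(tour) > len(best):
                best = tour
        # Evaporate everywhere, then let longer (better) tours deposit more.
        for key in pher:
            pher[key] *= 1.0 - evaporation
        for tour in tours:
            deposit = len(tour) / 64.0
            for a, b in zip(tour, tour[1:]):
                pher[(a, b)] = pher.get((a, b), 1.0) + deposit
    return best      # a full knight's tour visits all 64 squares
```

The `evaporation` parameter is exactly the balancing knob discussed above: set it too low and the colony converges on one tour; set it too high and the ants wander almost at random.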

Using this algorithm, we were able to find almost half a million tours. This was a significant improvement over previous work, which was based on a genetic algorithm. These algorithms emulate Charles Darwin’s principle of natural evolution – survival of the fittest. Fitter members (those that perform well on the problem at hand) of a simulated population survive and weaker members die off.

It is not easy to say why the ant algorithm performed so well, when compared to the genetic algorithm. Perhaps it was down to tuning the algorithmic parameters, or perhaps ants really do like to play chess!

The knight’s tour problem was being worked on as far back as 840 AD. Little did those problem-solvers know that ants, albeit simulated ones, would be tackling the same puzzle more than 1,000 years in the future.

Graham Kendall, Professor of Operations Research and Vice-Provost, University of Nottingham

This article was originally published on The Conversation. Read the original article.

How to teach Deep Blue to play poker and deliver groceries

Graham Kendall, University of Nottingham

Deep Blue gained world-wide attention in 1997 when it defeated the then chess world champion Garry Kasparov. But playing chess was all that Deep Blue could do. Ask it to play another game, even a simpler one, such as checkers, and Deep Blue would not even know how to play at beginner level. The same is also true of many other programs that can beat humans. Computers that can play poker cannot play bridge.


This type of tailored software development is also apparent in systems that we rely on every day. A system that produces nurse rosters may not be able to cope with producing shift patterns for a factory, even though they are both personnel scheduling systems. Programs that plan delivery routes of an online supermarket cannot usually be used to schedule appointments for servicing home appliances, even though they are both examples of a Vehicle Routing Problem.

In recent years there has been a growing interest in a field called hyper-heuristics, which aims to develop more general computer systems. The idea is to build systems that are not tailored for just one type of problem, but which can be reused for a wide range of problems.

The figure below shows a typical hyper-heuristic framework. Let’s assume that this framework is being used to tackle a nurse rostering problem, where we have to assign nurses to work a certain number of shifts over a certain time period, say a week.

Hyper-heuristic Framework.

If we start with a possible shift pattern (perhaps from the previous week), we can do certain things to improve it. For example, we could move a nurse from one shift to another, we could swap two nurses or we could remove all nurses from a certain shift (say the Wednesday evening shift) and replace them with nurses that do not meet their contractual arrangements, just to give a few examples. These changes to the shift pattern are usually called heuristics.

The important thing is that we have a number of these low-level heuristics that we can use to improve the current roster. All these heuristics are placed in the bottom of the framework. We now choose one of these heuristics and execute it (for instance, swap one nurse with another). We repeat the process of choosing and executing a heuristic over and over again, in the hope that we will gradually get a better roster. The quality of the roster is measured by the evaluation function, which checks the outcome.

The key to this approach is to decide in which order to execute the low-level heuristics. This is where the top part of the framework comes into play. The hyper-heuristic looks at the state of the system and decides which heuristic to execute. This is repeated until we decide to stop (maybe after a certain period of time, or after we have executed the low-level heuristics a certain number of times).

What makes a hyper-heuristic different from other heuristic-selection algorithms is the “domain barrier”. This stops the higher-level hyper-heuristic from knowing anything about the problem it is trying to solve. The hyper-heuristic only has access to data that is common to any problem. This includes how long each low-level heuristic took to execute, the track record of each low-level heuristic (how well it has performed) and how pairs of low-level heuristics work with each other, to give just a few examples.

The benefit of the domain barrier is that we can replace the low-level heuristics and the evaluation function with those for another type of problem. As the hyper-heuristic has no knowledge of the problem being tackled, we would hope to use the same higher-level algorithm on the new problem. And, indeed, this has been shown to be the case across a large number of scientific problems.
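The select-and-execute loop and the domain barrier can be sketched as follows. This is our own minimal illustration, not an actual hyper-heuristic system: the selector sees only each low-level heuristic’s track record (a domain-independent number), never the roster itself, and the rostering problem is reduced to a deliberately trivial stand-in of spreading ten shifts evenly across three nurses.

```python
import random

def hyper_heuristic(initial, heuristics, evaluate, steps=1000, seed=0):
    """Repeatedly choose and apply a low-level heuristic, guided only by
    domain-independent statistics (each heuristic's recent improvement)."""
    rng = random.Random(seed)
    solution, score = initial, evaluate(initial)
    reward = [1.0] * len(heuristics)   # track record, one entry per heuristic
    for _ in range(steps):
        # Epsilon-greedy selection: mostly exploit the best track record,
        # but keep exploring so no heuristic is written off for good.
        if rng.random() < 0.2:
            i = rng.randrange(len(heuristics))
        else:
            i = max(range(len(heuristics)), key=lambda j: reward[j])
        candidate = heuristics[i](solution, rng)
        cand_score = evaluate(candidate)
        improvement = score - cand_score          # lower scores are better
        reward[i] = 0.9 * reward[i] + 0.1 * max(improvement, 0.0)
        if cand_score <= score:                   # accept non-worsening moves
            solution, score = candidate, cand_score
    return solution, score

# --- Below the domain barrier: a toy rostering problem and its heuristics ---

def evaluate(roster):
    """Imbalance of shifts across 3 nurses (0 or 1 is as even as possible)."""
    counts = [roster.count(n) for n in range(3)]
    return max(counts) - min(counts)

def reassign_one(roster, rng):
    """Move one randomly chosen shift to a randomly chosen nurse."""
    r = roster[:]
    r[rng.randrange(len(r))] = rng.randrange(3)
    return r

def swap_two(roster, rng):
    """Swap the nurses assigned to two randomly chosen shifts."""
    r = roster[:]
    i, j = rng.randrange(len(r)), rng.randrange(len(r))
    r[i], r[j] = r[j], r[i]
    return r

# Ten shifts, all initially given to nurse 0 -- a very unbalanced roster.
best, imbalance = hyper_heuristic([0] * 10, [reassign_one, swap_two], evaluate)
```

Note that `swap_two` can never improve this particular objective (a swap preserves each nurse’s shift count), so a working selector should quickly learn to favour `reassign_one` – without ever being told what the heuristics actually do.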

The challenge in hyper-heuristics lies in developing a robust high-level strategy that is able to adapt to as many different problems as possible. We are still some way off having a hyper-heuristic that is able to produce nurse rosters, plan deliveries and play poker, but, given the pace of progress in this field, we hope to achieve this goal in the not-too-distant future.

Graham Kendall, Professor of Operations Research and Vice-Provost, University of Nottingham

This article was originally published on The Conversation. Read the original article.