No problem too big #1: Artificial intelligence and killer robots

Adam Hulbert, UNSW

This is the first episode of a special Speaking With podcast series titled No Problem Too Big, where a panel of artists and researchers speculate on the end of the world as though it has already happened.

It’s not the world we grew up in. Not since artificial intelligence. The machines have taken control.

Three fearless researchers gather in the post-apocalyptic twilight: a computer scientist, a mechanical engineer and a sci-fi author.

Together, they consider the implications of military robots and autonomous everything, and discover that the most horrifying post-apocalyptic scenario might look something like unrequited robot love.

Joanne Anderton is an award-winning author of speculative fiction stories for anyone who likes their worlds a little different. More information about Joanne and her novels can be found here.

No Problem Too Big is created and hosted by Adam Hulbert, who lectures in media and sonic arts at the University of New South Wales. It is produced with the support of The Conversation and University of New South Wales.

Sound design by Adam Hulbert.

Theme music by Phonkubot.

Additional music:

Beast/Decay/Mist by Haunted Me (via Free Music Archive)

Humming Ghost by Haunted Me (via Free Music Archive)

Additional audio:

Stephen Hawking interview, BBC News

Adam Hulbert, Sonic Arts Convener, UNSW

This article was originally published on The Conversation. Read the original article.

African scientists are punching above their weight and changing the world

John Butler-Adam, University of Pretoria

Over the past five years, Africa’s contributions to the world’s research – that is, new knowledge – have varied from a low of 0.7% to the present and highest level of 1.1%.

There are many reasons for Africa’s small contribution to world research. One of them, sadly, is that at least some of this new knowledge is produced by African scientists working beyond their own countries and continent. Many have chosen to leave because they feel the facilities and funding opportunities abroad are better than those “at home”.

It’s also important to point out that the sum of knowledge generated each year, including Africa’s contribution to it, is measured using research articles published by scientists and scholars in scientifically recognised journals. This means some of the actual work that’s being done isn’t getting the attention or credit it deserves, yet. The journal system is not a perfect way of assessing scientific productivity. For now, though, it’s a means that can be applied fairly to document peer reviewed research from around the world.

These concerns aside there is, I’m happy to report, much to celebrate about research in Africa. For starters, the world’s largest collection of peer-reviewed, African-published journals is growing all the time. African Journals Online currently carries 521 titles across a range of subjects and disciplines.

Women researchers are also well represented, though there’s still work to be done: three out of 10 sub-Saharan researchers are women.

The continent’s researchers are working on challenges as varied as astrophysics, malaria, HIV/AIDS and agricultural productivity. They are making significant advances in these and many other critical areas. The projects I talk about here are just a few examples of the remarkable work Africa’s scientists are doing on and for the continent.

A range of research

Africa is establishing itself as a global player in astronomical research. The Southern African Large Telescope (SALT) is the largest single optical telescope of its kind in the Southern hemisphere. Work undertaken at this facility, in South Africa’s Northern Cape province, has resulted in the publication of close to 200 research papers.

The telescope has support from and working relationships with universities in 10 countries. Its recent work helped a team of South African and international collaborators to uncover a previously unknown major supercluster in the constellation Vela.

SALT has two siblings: MeerKAT, which is already producing results, and the Square Kilometre Array, which is still being developed.

In a very different sphere, Professors Salim and Quarraisha Abdool Karim have won African and international awards for their groundbreaking and lifesaving work in the area of HIV/AIDS. Professor Glenda Gray, the CEO of South Africa’s Medical Research Council, has been honoured by Time magazine as one of the world’s 100 most influential people. She, too, is a pioneer in HIV/AIDS research.

In Kenya, dedicated research institutes are tackling agricultural challenges in areas like crop production and livestock health. This not only boosts Africa’s research output, but contributes greatly to rural development on the continent.

Elsewhere, Nigeria has established a number of research institutes that focus on a range of agricultural challenges. Research is also being undertaken in the important area of oceanography.

Although it operates from the University of Cape Town, the African Climate and Development Initiative has been working as a partner in Mozambique. There it’s addressing the critical – and interrelated – challenges of climate change and adaptation responses for horticulture, cassava and the red meat value chain. This is important work in one of Africa’s poorest countries, which is battling drought and hunger.

And then there’s also research “out of Africa”. This involves discoveries about the human past and the origins of Homo sapiens. Historically, this sort of research was often undertaken by people who didn’t come from Africa. More and more, though, African scholars have come to the fore. The scientists who discovered a new human ancestor and mapped a cave system that’s serving up amazing fossil evidence are following in giant footsteps: those of Robert Broom, Raymond Dart and Phillip Tobias.

Research that matters

What does all of this tell us about research in Africa? Perhaps three ideas are worth considering.

First, while Africa and its universities, institutes and scientists need to make far greater contributions to world knowledge, high quality and important research is happening. Its overall contribution might be small, but smart people are undertaking smart and important work.

Secondly, the range of research being undertaken is remarkable in view of the size of Africa’s overall contribution: from galaxies to viruses; from agriculture to malaria; and from drought to oceanography.

And thirdly, it is notable, and of great significance, that irrespective of the disciplines involved, the research is tackling both international concerns and those specific to the African continent and its people’s needs.

Yes, 1.1% is a small figure. What’s actually happening, on the other hand, adds up to a pretty impressive score card.

John Butler-Adam, Editor-in-Chief of the South African Journal of Science and Consultant, Vice Principal for Research and Graduate Education, University of Pretoria


To stay in the game universities need to work with tech companies

Martin Hall, University of Cape Town

The world of higher and professional education is changing rapidly. Digitally-enabled learning, in all its forms, is here to stay. Over the last five years, massive open online courses (MOOCs) have enabled universities to share their expertise with millions across the world. This shows how rapidly developing digital technologies can make learning accessible.

These new technologies are shaking up traditional classrooms, too. And as the nature of work changes, professionals are turning to high-level online courses to keep pace with new demands.

But much of this new technology is the preserve of private sector companies. This means that universities have to work with them. Yet partnerships with for-profit companies still don’t feel right for many in the higher education sphere. Knowledge has long been seen as a public good, and education as a basic right. Many of today’s universities were shaped by the principles of public funding.

This world was changing well before the disruptive impact of digital technologies, with tuition fees rising above the rate of inflation and the emergence of private universities as part of the higher education landscape. But there’s still unease about technology and its role. The reality, though, is that higher education institutions will have to get over their queasiness if they’re to survive in this brave new world.

Universities may not have the know-how or the money to match the innovations coming onto the market through private tech companies. The decision by Nasdaq-listed technology education (edtech) company 2U to acquire Cape Town based startup GetSmarter for R1.4 billion ($103 million) is the largest price tag yet for a South African company working in digital education.

This is an indication of what it would cost a university to set up a full online division. Few institutions will have this money, or the ability to raise it. The alternative is to reconsider the advantages of public-private partnerships, taking care to retain authority over quality. For many universities this could be the only way of keeping pace with the changing world of education.

The story of a startup

The story of how GetSmarter got off the ground is a textbook case of how a simple idea, combined with guts and luck, can reap huge rewards.

GetSmarter was launched in 2008 with a tiny budget and offered just one online course, in wine evaluation. By 2016 its annual revenues had grown to about R227 million. The foundation for this expansion has been a wide range of courses developed and offered in partnership with the University of Cape Town and, more recently, the University of the Witwatersrand and Stellenbosch University.

GetSmarter’s key breakthrough into the international realm came with professional programmes in association with the Massachusetts Institute of Technology (MIT) and Cambridge University. GetSmarter will soon present its first course with HarvardX.

After its acquisition was announced I talked to the company’s CEO, Sam Paddock, who co-founded it with his brother Rob. We discussed the lessons for other small digital companies – and for universities that are mulling the value of digital learning.

The Paddock brothers leveraged the cash flow from their father’s niche law firm to launch their first online course. They then used upfront payments for that course and the courses that followed to keep financing their next offerings. In the nine years that followed, edtech has become a crowded and complex field.

GetSmarter’s purchase price has garnered a lot of media attention: it’s high, in US dollar terms, and is a vote of confidence in the company. The price represents a valuation of a company’s assets, intellectual property and know-how, and strategic positioning for the future.

But what does it say about the kinds of investments and partnerships that conventional universities will have to make as they adapt to the full disruption from new digital technologies? The key aspect of GetSmarter’s success is how its partnership with universities has played out. As Paddock points out:

We are starting to realise the potential of public-private partnerships, where the credibility and resources of great universities can be combined with the skills of nimble private operators.

Good news for the digital economy

This acquisition is also good news for South Africa’s digital economy. Paddock says GetSmarter will employ more South African graduates and give them international experience and expertise.

And, he says, ecosystems often develop from one significant investment in an individual company. “This was how Silicon Valley started, as well as London’s ‘silicon roundabout’.” Cape Town, GetSmarter’s home city, has been trumpeted as South Africa’s own Silicon Valley: “Silicon Cape”.

The opportunity to lead in digital innovation and application has been widely recognised, for example through the work of Accelerate Cape Town. The Cape Innovation and Technology Initiative (CiTi) has a range of initiatives underway, including a three year partnership with Telkom intended to build the digital workforce.

Last year, cellphone giant Vodacom announced an investment of R600m to assist in developing South Africa’s digital skills.

GetSmarter’s big win is good news and proof – if universities needed it – that such initiatives can bolster higher education’s offering in a rapidly changing world. Universities in Africa know that they need to keep up with the relentless march of digitally enabled learning. GetSmarter’s journey from bootstrapped startup to a billion rand enterprise is a case study, worthy of attention.

Martin Hall, Emeritus Professor, MTN Solution Space Graduate School of Business, University of Cape Town


Why using AI to sentence criminals is a dangerous idea

Christopher Markou, University of Cambridge

Artificial intelligence is already helping determine your future – whether it’s your Netflix viewing preferences, your suitability for a mortgage or your compatibility with a prospective employer. But can we agree, at least for now, that having an AI determine your guilt or innocence in a court of law is a step too far?

Worryingly, it seems this may already be happening. When American Chief Justice John Roberts recently attended an event, he was asked whether he could foresee a day “when smart machines, driven with artificial intelligences, will assist with courtroom fact finding or, more controversially even, judicial decision making”. He responded: “It’s a day that’s here and it’s putting a significant strain on how the judiciary goes about doing things”.

Roberts might have been referring to the recent case of Eric Loomis, who was sentenced to six years in prison at least in part on the recommendation of a private company’s secret proprietary software. Loomis, who has a criminal history and was sentenced for having fled the police in a stolen car, now asserts that his right to due process was violated as neither he nor his representatives were able to scrutinise or challenge the algorithm behind the recommendation.

The report was produced by a software product called Compas, which is marketed and sold to courts by Northpointe Inc. The program is one incarnation of a new trend within AI research: tools designed to help judges make “better” – or at least more data-centric – decisions in court.

While specific details of Loomis’ report remain sealed, the document is likely to contain a number of charts and diagrams quantifying Loomis’ life, behaviour and likelihood of re-offending. It may also include his age, race, gender identity, browsing habits and, I don’t know … measurements of his skull. The point is we don’t know.

What we do know is that the prosecutor in the case told the judge that Loomis displayed “a high risk of violence, high risk of recidivism, high pretrial risk.” This is standard stuff when it comes to sentencing. The judge concurred and told Loomis that he was “identified, through the Compas assessment, as an individual who is a high risk to the community”.

The Wisconsin Supreme Court ruled against Loomis, noting that the Compas report brought valuable information to the sentencing decision, but qualified the ruling by saying he would have received the same sentence without it. But how can we know that for sure? What sort of cognitive biases are involved when an all-powerful “smart” system like Compas suggests what a judge should do?

Unknown use

Now let’s be clear, there is nothing “illegal” about what the Wisconsin court did – it’s just a bad idea under the circumstances. Other courts are free to do the same.

Worryingly, we don’t actually know the extent to which AI and other algorithms are being used in sentencing. My own research indicates that several jurisdictions are “trialling” systems like Compas in closed trials, but that they cannot announce details of their partnerships or where and when they are being used. We also know that there are a number of AI startups that are competing to build similar systems.

However, the use of AI in law doesn’t start and end with sentencing; it starts at investigation. A system called VALCRI has already been developed to perform the labour-intensive aspects of a crime analyst’s job in mere seconds – wading through tonnes of data like texts, lab reports and police documents to highlight things that may warrant further investigation.

The UK’s West Midlands Police will be trialling VALCRI for the next three years using anonymised data – amounting to some 6.5m records. A similar trial is underway by the police in Antwerp, Belgium. However, past AI and deep learning projects involving massive data sets have been problematic.

Benefits for the few?

Technology has brought many benefits to the court room, ranging from photocopiers to DNA fingerprinting and sophisticated surveillance techniques. But that doesn’t mean any technology is an improvement.

Algorithms can be racist, too.

While using AI in investigations and sentencing could potentially help save time and money, it raises some thorny issues. A report on Compas from ProPublica made clear that black defendants in Broward County, Florida “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism”. Recent work by Joanna Bryson, professor of computer science at the University of Bath, highlights that even the most “sophisticated” AIs can inherit the racial and gender biases of those who create them.

What’s more, what is the point of offloading decision making (at least in part) to an algorithm on matters that are uniquely human? Why do we go through the trouble of selecting juries composed of our peers? The standard in law has never been one of perfection, but rather the best that our abilities as mere humans allow us. We make mistakes but, over time, and with practice, we accumulate knowledge on how not to make them again – constantly refining the system.

What Compas, and systems like it, represent is the “black boxing” of the legal system. This must be resisted forcefully. Legal systems depend on continuity of information, transparency and ability to review. What we do not want as a society is a justice system that encourages a race to the bottom for AI startups to deliver products as quickly, cheaply and exclusively as possible. While some AI observers have seen this coming for years, it’s now here – and it’s a terrible idea.

An open source, reviewable version of Compas would be an improvement. However, we must ensure that we first raise standards in the justice system before we begin offloading responsibility to algorithms. AI should not just be an excuse not to invest.

While there is a lot of money to be made in AI, there is also a lot of genuine opportunity. It can change a lot for the better if we get it right, and ensure that its benefits accrue for all and don’t just entrench power at the top of the pyramid.

I have no perfect solutions for all these problems right now. But I do know that when it comes to the role of AI in law we must ask in which context they are being used, for what purposes and with what meaningful oversight. Until those questions can be answered with certainty, be very, very sceptical. Or at the very least know some very good lawyers.

Christopher Markou, PhD Candidate, Faculty of Law, University of Cambridge


How football clubs fail and succeed after reaching England’s Premier League

Rob Wilson, Sheffield Hallam University and Dan Plumley, Sheffield Hallam University

Football always divides opinion. As the latest English season draws to a close and the Football League playoffs take centre stage, there will be some who grumble about the format. They will say how “unfair” it is that a club can finish third in the league in the regular season, yet be denied promotion by a club that finished sixth after a late surge. Set that aside though, and you are left with the pure drama. It is win or bust, and prolongs the excitement of the regular season, giving more teams more to play for in a crescendo of late season fixtures.

The playoffs concept was borrowed from US team sports where this end-of-season competition is a regular feature, attracting huge media exposure and significant commercial interest. In England, for thirty years now, the playoffs have determined the final promotion spot within each division of the Football League. Four teams first try to get to the playoff final at Wembley stadium, then face a nerve-jangling 90 minutes or more to secure a step up the football pyramid.

The inspiration from US sports is important. Put aside the passion, excitement, disappointment and any sense of injustice for a moment. The playoffs can be of huge importance financially. A playoff victory can have the power to stabilise a club’s financial position, clear debts and allow significant investment in players. The pot of gold at the end of this rainbow has largely been filled with TV money. The most recent domestic deal was signed for £5.14 billion. Add in the international rights and this swells to £8.4 billion.

Lower down the leagues, the money on offer is not eye-watering. Our conservative estimates put the prize at around £500,000 for promotion from League Two to League One and around £7m for promotion from League One to the Championship. However, the prize on offer for promotion to the Premier League is staggering and has led to the Championship playoff final being labelled the “richest game in football” with a value of around £170m-£200m. Huddersfield, Reading, Fulham and Sheffield Wednesday are facing off for the jackpot this time around.

Revenue generator

The often-quoted £200m figure is a little misleading as it takes into account so-called parachute payments which only kick in if a club is relegated the following season. Clubs will receive a minimum uplift of £120m though, which can be triple or quadruple their turnover. In fact, the chart below shows that when Bournemouth was promoted in 2015, the club saw a six-fold increase in revenue, essentially driven by additional broadcasting fees.

When the prize is so very shiny, straining to reach for it presents a strategic dilemma for clubs. The boost to revenue from promotion can stabilise a club financially, just like it did for Blackpool in 2010, helping it to (theoretically) secure a long-term future. In Blackpool’s case, however, on-field performance was destabilised and supporters became disenfranchised. Seven years later, Blackpool now hope to be promoted back to League One this season, via the playoffs.

Promotion can also increase the level of expectation and create pressure to retain a position in the world’s richest league. The club can get excited and the board can sanction acquisitions that fall outside a reasonable budget and seriously threaten the short and even long-term financial future of the club. This recalls the experience at Queens Park Rangers, which somehow accumulated £143m of losses despite generating about £250m in revenue during their stay in the Premier League. QPR managed to spend a startling £285m on wages and £114m on player purchases, while their level of debt surged to a peak of £194m.
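The QPR figures quoted above can be sanity-checked with some quick arithmetic. This is a rough illustration only – real club accounts include many other line items, such as transfer-fee amortisation and player sales:

```python
# Rough reconstruction of QPR's Premier League-era finances (GBP millions).
# The figures are those quoted in the text; everything else is inference.
revenue = 250
wages = 285
player_purchases = 114

# Spending on wages and transfers alone exceeded total revenue:
overspend = wages + player_purchases - revenue
print(f"Spend over revenue: £{overspend}m")  # £149m
```

That £149m gap is close to the £143m of accumulated losses quoted above; the remaining difference sits in items the headline figures don’t capture.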

Prepare to fail

The third option is to rein in your ambition, develop a strategic plan, grow incrementally and accept that you may become a yo-yo club like Burnley, or survive by the skin of your teeth like Stoke City.

Either way, the club builds a longer term future at the top table which benefits everyone. Survival through this approach means that a club receives at least another £120m, so it can build still further and become a stable Premier League club. But even failing and being relegated means a club will still have money to spend, receive a parachute payment (of another £45m or so) and spend a season in the Championship with turnover in excess of three times that of a standard team. This provides a significant competitive advantage over your rivals, as Newcastle United showed this year – the Magpies spent big and gained promotion at the first attempt.

Ultimately, the direction of travel comes down to owner objectives, which can differ depending on their background and motivations. One thing that is clear: spending beyond your means does not always guarantee success.

The chart above allows us to examine a club’s transfer spending in the year following promotion. It is a confusing picture, but the red bars show those clubs which were relegated the following season, and demonstrate clearly that spending big is no guarantee of survival. This chart doesn’t show the starting point for each club in terms of player quality, but how you spend it is plainly crucial, and the chart shows too that you can survive without throwing the kitchen sink at player acquisitions.

There is broader evidence that the most successful clubs, with the most money, do tend to outperform, but the trade-off between financial and sporting performance is hazardous. Many clubs now choose to chase multiple and escalating objectives: recall the devastating failure at Leeds United in 2003, when creditors were owed almost £100m after the club chased the dream of playing in the Champions League. You chase that dream at your peril is the warning; plan carefully, and spend wisely is the advice to your board. Relegation doesn’t have to be a trapdoor, but promotion can be a trap.

Rob Wilson, Principal Lecturer in Sport Finance, Sheffield Hallam University and Dan Plumley, Senior Lecturer in Sport Business Management, Sheffield Hallam University


Why data-driven science is more than just a buzzword

Tara Murphy, University of Sydney

Forget looking through a telescope at the stars. An astronomer today is more likely to be online: digitally scheduling observations, running them remotely on a telescope in the desert, and downloading the results for analysis.

For many astronomers the first step in doing science is exploring this data computationally. It may sound like a buzzword, but data-driven science is part of a profound shift in fields like astronomy.

A 2015 report by the Australian Academy of Science found that among more than 500 professional astronomers in Australia, around one quarter of their research effort was now computational in nature. Yet many high school and university science, technology and engineering subjects still treat the necessary skills as second-class citizens.

Referring both to the modelling of the world through simulations and the exploration of observational data, computation is central not only to astronomy but a range of sciences, including bioinformatics, computational linguistics and particle physics.

To prepare the next generation, we must develop new teaching methods that recognise data-driven and computational approaches as some of the primary tools of contemporary research.

The era of big data in science

The great empiricists of the 17th century believed that if we used our senses to collect as much data as possible, we would ultimately understand our world.

Although empirical science has a long history, there are some key differences between a traditional approach and the data-driven science we do today.

The change that has perhaps had the most impact is the sheer amount of data that computers can now collect. This has enabled a change in philosophy: data can be gathered to serve many projects rather than just one, and the way we explore and mine data allows us to “plan for serendipity”.

Cleo Loi describes her discovery of plasma tubes in the Earth’s ionosphere.

Take the search for new types of astronomical phenomena. Large data sets can yield unexpected results: some modern examples are the discovery of fast radio bursts by astronomer Duncan Lorimer and the discovery of plasma tubes in the Earth’s ionosphere by a former undergraduate student of mine, Cleo Loi. Both of these depended on mining of archival data sets that had been designed for a different purpose.

Many scientists now work collaboratively to design experiments that can serve many projects at once and test different hypotheses. For example, the book outlining the science case for the future Square Kilometre Array Telescope, to be built in South Africa and Australia, has 135 chapters contributed by 1,200 authors.

Our education system needs to change, too

Classic images of science include Albert Einstein writing down the equations of relativity, or Marie Curie discovering radium in her laboratory.

A page from Albert Einstein’s Zurich Notebook.

Our understanding of how science works is often formed in high school, where we learn about theory and experiment. We picture these twin pillars working together, with experimental scientists testing theories, and theorists developing new ways to explain empirical results.

Computation, however, is rarely mentioned, and so many key skills are left undeveloped.

To design unbiased experiments and select robust samples, for example, scientists need excellent statistical skills. But often this part of maths takes a back seat in university degrees. To ensure our data-driven experiments and explorations are rigorous, scientists need to know more than just high school statistics.
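One classic statistical trap is selection bias. A toy simulation (my example, not from any particular survey) shows how a “survey” that only detects sources brighter than some limit will overestimate the population’s mean brightness:

```python
import random

random.seed(42)

# A population of 100,000 sources with true mean brightness 10.0.
population = [random.gauss(10.0, 2.0) for _ in range(100_000)]

# Our instrument only "detects" sources brighter than 11.0.
detected = [b for b in population if b > 11.0]

true_mean = sum(population) / len(population)
sample_mean = sum(detected) / len(detected)

# The detected sample is systematically brighter than the population.
print(f"true mean ≈ {true_mean:.2f}, detected-sample mean ≈ {sample_mean:.2f}")
```

Correcting for effects like this is exactly the kind of statistical reasoning that data-driven experiments demand.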

Marie Curie in her chemistry laboratory at the Radium Institute in France, April 1921.

In fact, to solve problems in this era, scientists also need to develop computational thinking. It’s not just about coding, although that’s a good start. They need to think creatively about algorithms, and how to manage and mine data using sophisticated techniques such as machine learning.

Applying simple algorithms to massive data sets simply doesn’t work, even when you have the power of 10,000-core supercomputers. Switching to more sophisticated techniques from computer science, such as the kd-tree algorithm for matching astronomical objects, can speed up software by orders of magnitude.
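As a concrete sketch of the kd-tree approach, here is a cross-match between two mock catalogues using SciPy’s `cKDTree`. The coordinates are hypothetical flat (x, y) positions rather than real sky coordinates, but the principle is the same: each nearest-neighbour query costs roughly O(log n) instead of the O(n) of a brute-force scan.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Two mock "catalogues": catalogue B is catalogue A with small
# positional errors, as if observed by a second instrument.
catalogue_a = rng.uniform(0, 10, size=(50_000, 2))
catalogue_b = catalogue_a + rng.normal(0, 1e-4, size=catalogue_a.shape)

# Build the tree once, then answer 50,000 nearest-neighbour queries.
tree = cKDTree(catalogue_b)
distances, indices = tree.query(catalogue_a, k=1)

# Nearly every object in A should recover its counterpart in B.
matched = (indices == np.arange(len(catalogue_a))).mean()
print(f"matched fraction: {matched:.3f}")
```

A naive double loop over both catalogues would make 2.5 billion comparisons for the same result.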

Some steps are being taken in the right direction. Many universities are introducing courses and degrees in data science, incorporating statistics and computer science combined with science or business. For example, I recently launched an online course on data-driven astronomy, which aims to teach skills like data management and machine learning in the context of astronomy.

In schools the new Australian Curriculum in Digital Technologies makes coding and computational thinking part of the syllabus from Year 2. This will develop vital skills, but the next step is to integrate modern approaches directly into science classrooms.

Computation has been an important part of science for more than half a century, and the data explosion is making it even more central. By teaching computational thinking as part of science, we can ensure our students are prepared to make the next round of great discoveries.

Tara Murphy, Associate Professor and ARC Future Fellow, University of Sydney


Does playing chess make you smarter? A look at the evidence

Giovanni Sala, University of Liverpool and Fernand Gobet, University of Liverpool

The stereotype of the chess player is someone who is smart, logical and good at maths. This is why so many parents around the world are keen to get their children playing chess – in the hope that the game might help to boost their son or daughter’s intelligence levels and help them succeed in a wide variety of subjects.

But apart from chess being a great game, its history rooted in the military culture of eastern India, is there actually any evidence to show that playing chess can make you smarter?

In a previous article, we showed that chess players exhibit, on average, superior cognitive ability compared to non-chess players. And the skills needed to play chess have also been shown to correlate with several measures of intelligence – such as fluid reasoning, memory, and processing speed.

But while the existence of a relationship between general cognitive ability and chess-skill is clear, is this simply because intelligent people are more likely to engage in the game of chess, or does engaging in chess make people smarter?

Brain game

The notion that playing chess makes you smarter goes something like this: chess requires concentration and intelligence, and as mathematics and literacy require the same general skills, then practising chess must also improve one’s academic achievement.

With this idea in mind, the Institute of Education conducted a large investigation to test the effects of chess instruction on the academic skills of nearly 4,000 British children.

School chess club.

The recently released results were disappointing – it seemed chess had no effect on children’s attainment levels in mathematics, literacy, or science.

Promptly, the chess community questioned the reliability of the results, particularly given that other studies offer a more optimistic picture about the academic benefits of chess instruction.

Assessing the evidence

The chess community is probably right in criticising the recent study, as it suffers from several methodological shortcomings that probably invalidate the results.

Before the results were published, we carried out a review of all the studies in the field. Our results showed some moderate effects of chess instruction on cognitive ability and academic achievement – especially mathematics.

Does chess need intelligence?

And yet, we still need to be cautious in interpreting these results as a positive indication of the power of chess on cognitive or academic skills. This is because most of the reviewed studies compared the effect of chess with groups doing no alternative activities.

This is a problem because research has shown that the excitement and fun induced by novel activities can cause a temporary positive effect on test scores – a placebo effect.

Crucially, when compared to an alternative activity – such as checkers or sports – chess did not show any significant effect on children’s skills. So, it could well just be that the observed positive effects of chess instruction are merely due to placebo effects.

Chess notes

What all this shows is that it is unlikely chess has a significant impact on overall cognitive ability. So while it might sound like a quick win – that a game of chess can improve a broad range of skills – unfortunately this is not the case.

This failure of a trained skill to generalise, in fact, occurs in many other areas beyond chess – such as music training, which has been shown to have no effect on non-music cognitive or academic abilities. The same applies to video game training, brain training, and working memory training, among others.

Ancient intelligence or just a good game?

The fact that skills learned by training do not transfer across different domains seems to be a universal in human cognition. In other words, you get better, at best, at what you train in – which may sound just like good old-fashioned common sense.

But although expecting chess to enhance children’s cognitive ability and overall academic achievement is just wishful thinking, this doesn’t mean it can’t still add value to a child’s education.

Clearly, playing chess involves some level of arithmetical and geometrical skill, and designing mathematical games or exercises with chess material can still be a simple and fun way to help children to learn.

Giovanni Sala, PhD Candidate – Cognitive Psychology, University of Liverpool and Fernand Gobet, Professor of Decision Making and Expertise, University of Liverpool

This article was originally published on The Conversation. Read the original article.

People don’t trust scientific research when companies are involved

John C. Besley, Michigan State University; Aaron M. McCright, Michigan State University; Joseph D. Martin, University of Leeds; Kevin Elliott, Michigan State University, and Nagwan Zahry, Michigan State University

A soda company sponsoring nutrition research. An oil conglomerate helping fund a climate-related research meeting. Does the public care who’s paying for science?

In a word, yes. When industry funds science, credibility suffers. And this does not bode well for the types of public-private research partnerships that appear to be becoming more prevalent as government funding for research and development lags.

The recurring topic of conflict of interest has made headlines in recent weeks. The National Academies of Sciences, Engineering, and Medicine has revised its conflict of interest guidelines following questions about whether members of a recent expert panel on GMOs had industry ties or other financial conflicts that were not disclosed in the panel’s final report.

Our own recent research speaks to how hard it may be for the public to see research as useful when produced with an industry partner, even when that company is just one of several collaborators.

What people think of funding sources

We asked our study volunteers what they thought about a proposed research partnership to study the potential risks related to either genetically modified foods or trans fats.

We randomly assigned participants to each evaluate one of 15 different research partnership arrangements – various combinations of scientists from a university, a government agency, a nongovernmental organization and a large food company.

For example, 1/15th of participants were asked to consider a research collaboration that included only university researchers. Another 1/15th of participants considered a research partnership that included both university and government scientists, and so on. In total we presented four conditions where there was a single type of researcher, another six collaborations with two partners, four with three partners and one with all four partners.

When a research team included an industry partner, our participants were generally less likely to think the scientists would consider a full range of evidence and listen to different voices. An industry partner also reduced how much participants believed any resulting data would provide meaningful guidance for making decisions.

At the outset of our work, we thought including a diverse array of partners in a research collaboration might mitigate the negative perceptions that come with industry involvement. But, while including scientists from a nonindustry organization (particularly a nongovernmental organization) made some difference, the effect was small. Adding a government partner provided no substantive additional benefit.

When we asked participants to describe what they thought about the research partnership in their own words, they were skeptical whether an industry partner could ever be trusted to release information that might hurt its profits.

Our results may be even more troubling because we chose a company with a good reputation. We used pretests to select particular examples – of a corporation, as well as a university, government agency and nongovernmental organization – that had relatively high positive ratings and relatively low negative ratings in a test sample.

Can industry do valid science?

You don’t have to look far for real-life examples of poorly conducted or intentionally misleading industry research. The pharmaceutical, chemical, nutrition and petroleum industries have all weathered criticism of their research integrity, and for good reason. These ethically questionable episodes no doubt fuel public skepticism of industry research. Stories of pharmaceutical companies conducting less than rigorous clinical trials for the benefit of their marketing departments, or the tobacco industry steadfastly denying the connection between smoking and cancer in the face of mounting evidence, help explain public concern about industry-funded science.

But industry generally has a long and impressive history of supporting scientific research and technical development. Industry-supported research has generated widely adopted technologies, driven the evolution of entire economic sectors, improved processes that were harmful to public health and the environment and won Nobel Prizes. And as scientists not currently affiliated with industry scramble to fund their research in an era of tight budgets, big companies have money to underwrite science.

Does it matter within what kind of institution a researcher hangs her lab coat?
Vivien Rolfe, CC BY-SA

Can this lack of trust be overcome? Moving forward, it will be essential to address incentives such as short-term profit or individual recognition that can encourage poor research – in any institutional context. By showing how quickly people may judge industry-funded research, our work indicates that it’s critical to think about how the results of that research can be communicated effectively.

Our results should worry those who want research to be evaluated largely on its scientific merits, rather than based upon the affiliations of those involved.

Although relatively little previous scholarship has investigated this topic, we expected to find that including multiple, nonindustry organizations in a scientific partnership might, at least partly, assuage participants’ concerns about industry involvement. This reflects our initial tentative belief that, given the resources and expertise within industry, there must be some way to create public-private partnerships that produce high-quality research which is perceived widely as such.

Our interdisciplinary team – a risk communication scholar, a sociologist, a philosopher of science, a historian of science and a toxicologist – is also examining philosophical arguments and historical precedents for guidance on these issues.

Philosophy can tell us a great deal about how the values of investigators can influence their results. And history shows that not so long ago, up until a few decades after World War II, many considered industry support a way to uphold research integrity by protecting it from government secrecy regimes.

Looking forward, we are planning additional social scientific experiments to examine how specific procedures that research partnerships sometimes use may affect public views about collaborations with industry partners. For example, perhaps open-data policies, transparency initiatives or external reviewer processes may alleviate bias concerns.

Given the central role that industry plays in scientific research and development, it is important to explore strategies for designing multi-sector research collaborations that can generate legitimate, high-quality results while being perceived as legitimate by the public.

John C. Besley, Associate Professor of Advertising and Public Relations, Michigan State University; Aaron M. McCright, Associate Professor of Sociology, Michigan State University; Joseph D. Martin, Fellow-in-Residence at the Consortium for History of Science, Technology, and Medicine and Visiting Research Fellow at the Centre for History and Philosophy of Science, University of Leeds; Kevin Elliott, Associate Professor of Fisheries & Wildlife and Philosophy, Michigan State University, and Nagwan Zahry, PhD Student in Media and Information Studies, Michigan State University

This article was originally published on The Conversation. Read the original article.

There’s a mathematical formula for choosing the fastest queue

Enrico Scalas, University of Sussex and Nicos Georgiou, University of Sussex

It seems obvious. You arrive at the checkouts and see one queue is much longer than the other, so you join the shorter one. But, before long, the people in the bigger line zoom past you and you’ve barely moved towards the exit.

When it comes to queuing, the intuitive choice is often not the fastest one. Why do queues feel like they slow down as soon as you join them? And is there a way to decide beforehand which line is really the best one to join? Mathematicians have been studying these questions for years. So can they help us spend less time waiting in line?

The intuitive strategy seems to be to join the shortest queue. After all, a short queue could indicate it has an efficient server, and a long queue could imply it has an inexperienced server or customers who need a lot of time. But generally this isn’t true.

Without the right information, it could even be disadvantageous to join the shortest queue. For example, if the short queue at the supermarket has two very full trolleys and the long queue has four relatively empty baskets, many people would actually join the longer queue. If the servers are equally efficient, the important quantity here is the number of total items in the queue, not the number of customers. But if the trolleys weren’t very full but the hand baskets were, it wouldn’t be so easy to estimate and the choice wouldn’t be so clear.

This simple example introduces the concept of service time distribution. This is a random variable that measures how long it will take a customer to be served. It contains information about the average (mean) service time and about the standard deviation from the mean, which represents how the service time fluctuates depending on how long different customers need.

The other important variable is how often customers join the queue (the arrival rate). This depends on the average amount of time that passes between two consecutive customers entering the shop. The more people that arrive to use a service at a specific time, the longer the queues will be.

Never mind the queue, I picked the wrong shop.

Depending on what these variables are, the shortest queue might be the best one to join – or it might not. For example, in a fish and chip shop you might have two servers both taking orders and accepting money. Then it is most often better to join the shortest queue since the time the servers’ tasks take doesn’t vary much.

Unfortunately, in practice, it’s hard to know exactly what the relevant variables are when you enter a shop. So you can still only guess what the fastest queue to join will be, or rely on tricks of human psychology, such as joining the leftmost queue because most right-handed people automatically turn right.

Did you get it right?

Once you’re in the queue, you’ll want to know whether you made the right choice. For example, is your server the fastest? It is easy to observe the actual queue length and you can try to compare it to the average. This is directly related to the mean and standard deviation of the service time via something called the Pollaczek-Khinchine formula, first established in 1930. This also uses the mean inter-arrival time between customers.
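The formula itself applies to a single server with randomly arriving (Poisson) customers, and ties together exactly the quantities discussed above: utilisation of the server and the variability of service times. Here is a small sketch in Python – the function name and the example numbers are ours, not from any real shop:

```python
def pk_mean_queue_length(arrival_rate, mean_service, service_std):
    """Pollaczek-Khinchine formula for a single-server queue with Poisson
    arrivals: the expected number of customers in the system."""
    rho = arrival_rate * mean_service            # server utilisation (must be < 1)
    cs2 = (service_std / mean_service) ** 2      # squared coefficient of variation
    return rho + rho * rho * (1 + cs2) / (2 * (1 - rho))

# One customer arriving every 2 minutes on average, 1.5-minute mean service:
steady = pk_mean_queue_length(0.5, 1.5, 0.5)   # modest variability: ~2 customers
bursty = pk_mean_queue_length(0.5, 1.5, 3.0)   # highly variable service: ~6.4
```

Note that the average arrival rate and mean service time are identical in both cases – only the spread of service times differs, yet the more variable queue is more than three times as long on average.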

Unfortunately, if you try to measure the time the first person in the queue takes to get served, you’ll likely end up feeling like you chose the wrong line. This is known as Feller’s paradox or the inspection paradox. Technically, this isn’t an actual logical paradox but it does go against our intuition. If you start measuring the time between customers when you join a queue, it is more likely that the first customer you see will take longer than average to be served. This will make you feel like you were unlucky and chose the wrong queue.

The inspection paradox works like this: suppose a bank offers two services. One service takes either zero or five minutes, with equal probability. The other service takes either ten or 20 minutes, again with equal probability. It is equally likely for a customer to choose either service and so the bank’s average service time is 8.75 minutes.

If you join the queue when a customer is in the middle of being served then their service can’t take zero minutes. They must be using either the five, ten or 20 minute service. This pushes the time that customer will take to be served to more than 11 minutes on average – more than the bank’s true average of 8.75 minutes. In fact, two out of three times you encounter this situation, the customer will want either the ten or 20 minute service. This will make it seem like the line is moving more slowly than it should, all because a customer is already there and you have extra information.
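The arithmetic in the bank example is easy to check with a few lines of Python:

```python
services = [0, 5, 10, 20]   # the four equally likely service times (minutes)

# The bank's true average service time across all customers:
overall_mean = sum(services) / len(services)          # 8.75 minutes

# A customer you walk in on cannot be using the zero-minute service, so in
# this example they are equally likely among the other three options:
mid_service = [s for s in services if s > 0]
observed_mean = sum(mid_service) / len(mid_service)   # ~11.67 minutes
```

The customer you happen to observe looks slower than average simply because zero-length services are invisible to you – nobody is ever "mid-way" through one.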

So while you can use maths to try to determine the fastest queue, in the absence of accurate data – and for your own peace of mind – you’re often better just taking a gamble and not looking at the other options once you’ve made your mind up.

Enrico Scalas, Professor of Statistics and Probability, University of Sussex and Nicos Georgiou, Lecturer in Mathematics, Probability and Statistics, University of Sussex

This article was originally published on The Conversation. Read the original article.

Paradoxes of probability and other statistical strangeness

Stephen Woodcock, University of Technology Sydney

Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.

You don’t have to wait long to see a headline proclaiming that some food or behaviour is associated with either an increased or a decreased health risk, or often both. How can it be that seemingly rigorous scientific studies can produce opposite conclusions?

Nowadays, researchers can access a wealth of software packages that can readily analyse data and output the results of complex statistical tests. While these are powerful resources, they also open the door to people without a full statistical understanding to misunderstand some of the subtleties within a dataset and to draw wildly incorrect conclusions.

Here are a few common statistical fallacies and paradoxes and how they can lead to results that are counterintuitive and, in many cases, simply wrong.

Simpson’s paradox

What is it?

This is where trends that appear within different groups disappear when data for those groups are combined. When this happens, the overall trend might even appear to be the opposite of the trends in each group.

One example of this paradox is where a treatment can be detrimental in all groups of patients, yet can appear beneficial overall once the groups are combined.

How does it happen?

This can happen when the sizes of the groups are uneven. A trial with careless (or unscrupulous) selection of the numbers of patients could conclude that a harmful treatment appears beneficial.


Consider the following double blind trial of a proposed medical treatment. A group of 120 patients (split into subgroups of sizes 10, 20, 30 and 60) receive the treatment, and 120 patients (split into subgroups of corresponding sizes 60, 30, 20 and 10) receive no treatment.

The overall results make it look like the treatment was beneficial to patients, with a higher recovery rate for patients with the treatment than for those without it.

The Conversation, CC BY-ND

However, when you drill down into the various groups that made up the cohort in the study, you see that in every group of patients the recovery rate was 50% higher for patients who had no treatment.

The Conversation, CC BY-ND

But note that the size and age distribution of each group is different between those who took the treatment and those who didn’t. This is what distorts the numbers. In this case, the treatment group is disproportionately stacked with children, whose recovery rates are typically higher, with or without treatment.
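A short Python sketch makes the reversal concrete. The recovery rates below are invented for illustration (the article’s tables give the real figures), but they follow the same pattern: the untreated rate is 50% higher in every subgroup, yet the group sizes flip the overall comparison.

```python
# Subgroup sizes from the trial described above:
treated_sizes   = [10, 20, 30, 60]
untreated_sizes = [60, 30, 20, 10]

# Hypothetical recovery rates: untreated is 50% higher in every subgroup.
treated_rates   = [0.2, 0.3, 0.4, 0.6]
untreated_rates = [r * 1.5 for r in treated_rates]

def overall(sizes, rates):
    """Size-weighted overall recovery rate across the subgroups."""
    return sum(n * r for n, r in zip(sizes, rates)) / sum(sizes)

treated_overall   = overall(treated_sizes, treated_rates)       # ~46.7%
untreated_overall = overall(untreated_sizes, untreated_rates)   # ~43.75%
```

Because the treated cohort is weighted towards the subgroups that recover most readily anyway, its overall rate comes out ahead even though the treatment loses in every single subgroup.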

Base rate fallacy

What is it?

This fallacy occurs when we disregard important information when making a judgement on how likely something is.

If, for example, we hear that someone loves music, we might think it’s more likely they’re a professional musician than an accountant. However, there are many more accountants than there are professional musicians. Here we have neglected that the base rate for the number of accountants is far higher than the number of musicians, so we were unduly swayed by the information that the person likes music.

How does it happen?

The base rate fallacy occurs when the base rate for one option is substantially higher than for another.


Consider testing for a rare medical condition, such as one that affects only 4% (1 in 25) of a population.

Let’s say there is a test for the condition, but it’s not perfect. If someone has the condition, the test will correctly identify them as being ill around 92% of the time. If someone doesn’t have the condition, the test will correctly identify them as being healthy 75% of the time.

So if we test a group of people, and find that over a quarter of them are diagnosed as being ill, we might expect that most of these people really do have the condition. But we’d be wrong.

In a typical sample of 300 patients, for every 11 people correctly identified as unwell, a further 72 are incorrectly identified as unwell.
The Conversation, CC BY-ND

According to our numbers above, of the 4% of patients who are ill, almost 92% will be correctly diagnosed as ill (that is, about 3.67% of the overall population). But of the 96% of patients who are not ill, 25% will be incorrectly diagnosed as ill (that’s 24% of the overall population).

What this means is that of the approximately 27.67% of the population who are diagnosed as ill, only around 3.67% actually are. So of the people who were diagnosed as ill, only around 13% (that is, 3.67%/27.67%) actually are unwell.
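The whole calculation is a direct application of Bayes’ rule, and takes only a few lines of Python to reproduce:

```python
prevalence  = 0.04   # 1 in 25 people have the condition
sensitivity = 0.92   # P(test says ill | actually ill)
specificity = 0.75   # P(test says healthy | actually healthy)

p_true_positive  = prevalence * sensitivity              # ~3.7% of everyone
p_false_positive = (1 - prevalence) * (1 - specificity)  # 24% of everyone
p_positive = p_true_positive + p_false_positive          # ~27.7% test positive

# Bayes' rule: of those who test positive, how many are really ill?
p_ill_given_positive = p_true_positive / p_positive      # ~13%
```

The false positives swamp the true positives simply because the healthy population is 24 times larger than the ill one – the base rate doing its quiet work.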

Worryingly, when a famous study asked general practitioners to perform a similar calculation to inform patients of the correct risks associated with mammogram results, just 15% of them did so correctly.

Will Rogers paradox

What is it?

This occurs when moving something from one group to another raises the average of both groups, even though no values actually increase.

The name comes from the American comedian Will Rogers, who joked that “when the Okies left Oklahoma and moved to California, they raised the average intelligence in both states”.

Former New Zealand Prime Minister Rob Muldoon provided a local variant on the joke in the 1980s, regarding migration from his nation into Australia.

How does it happen?

When a datapoint is reclassified from one group to another, if the point is below the average of the group it is leaving, but above the average of the one it is joining, both groups’ averages will increase.


Consider the case of six patients whose life expectancies (in years) have been assessed as being 40, 50, 60, 70, 80 and 90.

The patients who have life expectancies of 40 and 50 have been diagnosed with a medical condition; the other four have not. This gives an average life expectancy within diagnosed patients of 45 years and within non-diagnosed patients of 75 years.

If an improved diagnostic tool is developed that detects the condition in the patient with the 60-year life expectancy, then the average within both groups rises by 5 years.

The Conversation, CC BY-ND
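The example can be verified in a few lines of Python – moving the 60-year patient raises both averages at once:

```python
diagnosed     = [40, 50]            # life expectancies (years)
not_diagnosed = [60, 70, 80, 90]

def mean(xs):
    return sum(xs) / len(xs)

before = (mean(diagnosed), mean(not_diagnosed))   # (45.0, 75.0)

# The improved test moves the 60-year patient into the diagnosed group:
diagnosed.append(not_diagnosed.pop(0))

after = (mean(diagnosed), mean(not_diagnosed))    # (50.0, 80.0)
```

No patient’s life expectancy changed, yet both group averages rose by five years – which is why apparent survival gains after a diagnostic improvement need careful scrutiny.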

Berkson’s paradox

What is it?

Berkson’s paradox can make it look like there’s an association between two independent variables when there isn’t one.

How does it happen?

This happens when we have a set with two independent variables, which means they should be entirely unrelated. But if we only look at a subset of the whole population, it can look like there is a negative trend between the two variables.

This can occur when the subset is not an unbiased sample of the whole population. It has been frequently cited in medical statistics. For example, if patients only present at a clinic with disease A, disease B or both, then even if the two diseases are independent, a negative association between them may be observed.


Consider the case of a school that recruits students based on both academic and sporting ability. Assume that these two skills are totally independent of each other. That is, in the whole population, an excellent sportsperson is just as likely to be strong or weak academically as is someone who’s poor at sport.

If the school admits only students who are excellent academically, excellent at sport or excellent at both, then within this group it would appear that sporting ability is negatively correlated with academic ability.

To illustrate, assume that every potential student is ranked on both academic and sporting ability from 1 to 10. There are an equal proportion of people in each band for each skill. Knowing a person’s band in either skill does not tell you anything about their likely band in the other.

Assume now that the school only admits students who are at band 9 or 10 in at least one of the skills.

If we look at the whole population, the average academic rank of the weakest sportsperson and of the best sportsperson is the same (5.5).

However, within the set of admitted students, the average academic rank of the elite sportsperson is still that of the whole population (5.5), but the average academic rank of the weakest sportsperson is 9.5, wrongly implying a negative correlation between the two abilities.

The Conversation, CC BY-ND

Multiple comparisons fallacy

What is it?

This is where unexpected trends can occur through random chance alone in a data set with a large number of variables.

How does it happen?

When looking at many variables and mining for trends, it is easy to overlook how many possible trends you are testing. For example, with 1,000 variables, there are almost half a million (1,000×999/2) potential pairs of variables that might appear correlated by pure chance alone.

While each pair is extremely unlikely to look dependent, the chances are that from the half million pairs, quite a few will look dependent.


The Birthday paradox is a classic example of the multiple comparisons fallacy.

In a group of 23 people (assuming each of their birthdays is an independently chosen day of the year with all days equally likely), it is more likely than not that at least two of the group have the same birthday.

People often disbelieve this, recalling that it is rare that they meet someone who shares their own birthday. If you just pick two people, the chance they share a birthday is, of course, low (roughly 1 in 365, which is less than 0.3%).

However, with 23 people there are 253 (23×22/2) pairs of people who might have a common birthday. So by looking across the whole group you are testing to see if any one of these 253 pairings, each of which independently has a 0.3% chance of coinciding, does indeed match. These many possibilities of a pair actually make it statistically very likely for coincidental matches to arise.

For a group of as few as 40 people, it is almost nine times as likely that there is a shared birthday than not.

The probability of no shared birthdays drops as the number of people in a group increases.
The Conversation, CC BY-ND
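The birthday probabilities are easiest to compute via the complement – the chance that all birthdays are distinct:

```python
def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming independent, uniformly distributed birthdays."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

p23 = p_shared_birthday(23)   # just over 50%
p40 = p_shared_birthday(40)   # roughly 89%
```

With 40 people the odds in favour of a shared birthday are about eight to one – the "almost nine times as likely" figure above.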

Stephen Woodcock, Senior Lecturer in Mathematics, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.