This is the first episode of a special Speaking With podcast series titled No Problem Too Big, where a panel of artists and researchers speculate on the end of the world as though it has already happened.
It’s not the world we grew up in. Not since artificial intelligence. The machines have taken control.
Three fearless researchers gather in the post-apocalyptic twilight: a computer scientist, a mechanical engineer and a sci-fi author.
Together, they consider the implications of military robots and autonomous everything, and discover that the most horrifying post-apocalyptic scenario might look something like unrequited robot love.
Joanne Anderton is an award-winning author of speculative fiction stories for anyone who likes their worlds a little different. More information about Joanne and her novels can be found here.
No Problem Too Big is created and hosted by Adam Hulbert, who lectures in media and sonic arts at the University of New South Wales. It is produced with the support of The Conversation and University of New South Wales.
Over the past five years, Africa’s contributions to the world’s research – that is, new knowledge – have varied from a low of 0.7% to the present and highest level of 1.1%.
There are many reasons for Africa’s small contribution to world research. One of them, sadly, is that at least some of this new knowledge is produced by African scientists working beyond their own countries and continent. Many have chosen to leave because they feel the facilities and funding opportunities abroad are better than those “at home”.
It’s also important to point out that the sum of knowledge generated each year, including Africa’s contribution to it, is measured using research articles published by scientists and scholars in scientifically recognised journals. This means some of the actual work that’s being done isn’t getting the attention or credit it deserves, yet. The journal system is not a perfect way of assessing scientific productivity. For now, though, it’s a means that can be applied fairly to document peer-reviewed research from around the world.
These concerns aside, there is, I’m happy to report, much to celebrate about research in Africa. For starters, the world’s largest collection of peer-reviewed, African-published journals is growing all the time. African Journals Online currently carries 521 titles across a range of subjects and disciplines.
Women researchers are also well represented, though there’s still work to be done: three out of 10 sub-Saharan researchers are women.
The continent’s researchers are working on challenges as varied as astrophysics, malaria, HIV/AIDS and agricultural productivity. They are making significant advances in these and many other critical areas. The projects I talk about here are just a few examples of the remarkable work Africa’s scientists are doing on and for the continent.
A range of research
Africa is establishing itself as a global player in astronomical research. The Southern African Large Telescope (SALT) is the largest single optical telescope of its kind in the Southern hemisphere. Work undertaken at this facility, in South Africa’s Northern Cape province, has resulted in the publication of close to 200 research papers.
The telescope has support from and working relationships with universities in 10 countries. Its recent work helped a team of South African and international collaborators to uncover a previously unknown major supercluster in the constellation Vela.
In a very different sphere, Professors Salim and Quarraisha Abdool Karim have won African and international awards for their groundbreaking and lifesaving work in the area of HIV/AIDS. Professor Glenda Gray, the CEO of South Africa’s Medical Research Council, has been honoured by Time magazine as one of the world’s 100 most influential people. She, too, is a pioneer in HIV/AIDS research.
In Kenya, dedicated research institutes are tackling agricultural challenges in areas like crop production and livestock health. This not only boosts Africa’s research output, but contributes greatly to rural development on the continent.
Elsewhere, Nigeria has established a number of research institutes that focus on a range of agricultural challenges. Research is also being undertaken in the important area of oceanography.
Although it operates from the University of Cape Town, the African Climate and Development Initiative has been working as a partner in Mozambique. There it’s addressing the critical – and interrelated – challenges of climate change and adaptation responses for horticulture, cassava and the red meat value chain. This is important work in one of Africa’s poorest countries, which is battling drought and hunger.
And then there’s also research “out of Africa”. This involves discoveries about the human past and the origins of Homo sapiens. Historically, this sort of research was often undertaken by people who didn’t come from Africa. More and more, though, African scholars have come to the fore. The scientists who discovered a new human ancestor and mapped a cave system that’s serving up amazing fossil evidence are following in giant footsteps: those of Robert Broom, Raymond Dart and Phillip Tobias.
Research that matters
What does all of this tell us about research in Africa? Perhaps three ideas are worth considering.
First, while Africa and its universities, institutes and scientists need to make far greater contributions to world knowledge, high quality and important research is happening. Its overall contribution might be small, but smart people are undertaking smart and important work.
Secondly, the range of research being undertaken is remarkable in view of the size of Africa’s overall contribution: from galaxies to viruses; from agriculture to malaria; and from drought to oceanography.
And thirdly it is notable, and of great significance, that irrespective of the disciplines involved, the research is tackling both international concerns and those specific to the African continent and its people’s needs.
Yes, 1.1% is a small figure. What’s actually happening, on the other hand, adds up to a pretty impressive score card.
The world of higher and professional education is changing rapidly. Digitally-enabled learning, in all its forms, is here to stay. Over the last five years, massive open online courses (MOOCs) have enabled universities to share their expertise with millions across the world. This shows how rapidly developing digital technologies can make learning accessible.
These new technologies are shaking up traditional classrooms, too. And as the nature of work changes, professionals are turning to high-level online courses to keep pace with new demands.
But much of this new technology is the preserve of private sector companies. This means that universities have to work with them. Yet partnerships with for-profit companies still don’t feel right for many in the higher education sphere. Knowledge has long been seen as a public good, and education as a basic right. Many of today’s universities were shaped by the principles of public funding.
This world was changing well before the disruptive impact of digital technologies, with tuition fees rising above the rate of inflation and the emergence of private universities as part of the higher education landscape. But there’s still unease about technology and its role. The reality, though, is that higher education institutions will have to get over their queasiness if they’re to survive in this brave new world.
Universities may not have the know-how or the money to match the innovations coming onto the market through private tech companies. The decision by Nasdaq-listed technology education (edtech) company 2U to acquire Cape Town-based startup GetSmarter for R1.4 billion ($103 million) is the largest price tag yet for a South African company working in digital education.
This is an indication of what it would cost a university to set up a full online division. Few institutions will have this money, or the ability to raise it. The alternative is to reconsider the advantages of public-private partnerships, taking care to retain authority over quality. For many universities this could be the only way of keeping pace with the changing world of education.
The story of a startup
The story of how GetSmarter got off the ground is a textbook case of how a simple idea, combined with guts and luck, can reap huge rewards.
GetSmarter was launched in 2008 with a tiny budget and offered just one online course, in wine evaluation. By 2016 its annual revenues had grown to about R227 million. The foundation for this expansion has been a wide range of courses developed and offered in partnership with the University of Cape Town and, more recently, the University of the Witwatersrand and Stellenbosch University.
GetSmarter’s key breakthrough into the international realm came with professional programmes in association with the Massachusetts Institute of Technology (MIT) and Cambridge University. GetSmarter’s first course with HarvardX will soon be presented.
After its acquisition was announced I talked to the company’s CEO, Sam Paddock, who co-founded it with his brother Rob. We discussed the lessons for other small digital companies – and for universities that are mulling the value of digital learning.
The Paddock brothers leveraged the cash flow from their father’s niche law firm to launch their first online course. They then used upfront payments for that course and the courses that followed to keep financing their next offerings. In the nine years that followed, edtech has become a crowded and complex field.
GetSmarter’s purchase price has garnered a lot of media attention: it’s high, in US dollar terms, and is a vote of confidence in the company. The price represents a valuation of a company’s assets, intellectual property and know-how, and strategic positioning for the future.
But what does it say about the kinds of investments and partnerships that conventional universities will have to make as they adapt to the full disruption from new digital technologies? The key aspect of GetSmarter’s success is how its partnership with universities has played out. As Paddock points out:
We are starting to realise the potential of public-private partnerships, where the credibility and resources of great universities can be combined with the skills of nimble private operators.
Good news for the digital economy
This acquisition is also good news for South Africa’s digital economy. Paddock says GetSmarter will employ more South African graduates and give them international experience and expertise.
And, he says, ecosystems often develop from one significant investment in an individual company. “This was how Silicon Valley started, as well as London’s ‘silicon roundabout’.” Cape Town, GetSmarter’s home city, has been trumpeted as South Africa’s own Silicon Valley: “Silicon Cape”.
Last year, cellphone giant Vodacom announced an investment of R600m to assist in developing South Africa’s digital skills.
GetSmarter’s big win is good news and proof – if universities needed it – that such initiatives can bolster higher education’s offering in a rapidly changing world. Universities in Africa know that they need to keep up with the relentless march of digitally enabled learning. GetSmarter’s journey from bootstrapped startup to a billion rand enterprise is a case study, worthy of attention.
Artificial intelligence is already helping determine your future – whether it’s your Netflix viewing preferences, your suitability for a mortgage or your compatibility with a prospective employer. But can we agree, at least for now, that having an AI determine your guilt or innocence in a court of law is a step too far?
Worryingly, it seems this may already be happening. When American Chief Justice John Roberts recently attended an event, he was asked whether he could foresee a day “when smart machines, driven with artificial intelligences, will assist with courtroom fact finding or, more controversially even, judicial decision making”. He responded: “It’s a day that’s here and it’s putting a significant strain on how the judiciary goes about doing things”.
Roberts might have been referring to the recent case of Eric Loomis, who was sentenced to six years in prison at least in part on the recommendation of a private company’s secret proprietary software. Loomis, who has a criminal history and was sentenced for having fled the police in a stolen car, now asserts that his right to due process was violated as neither he nor his representatives were able to scrutinise or challenge the algorithm behind the recommendation.
The report was produced by a software product called Compas, which is marketed and sold by Northpointe Inc to courts. The program is one incarnation of a new trend within AI research: tools designed to help judges make “better” – or at least more data-centric – decisions in court.
While specific details of Loomis’ report remain sealed, the document is likely to contain a number of charts and diagrams quantifying Loomis’ life, behaviour and likelihood of re-offending. It may also include his age, race, gender identity, browsing habits and, I don’t know … measurements of his skull. The point is we don’t know.
What we do know is that the prosecutor in the case told the judge that Loomis displayed “a high risk of violence, high risk of recidivism, high pretrial risk.” This is standard stuff when it comes to sentencing. The judge concurred and told Loomis that he was “identified, through the Compas assessment, as an individual who is a high risk to the community”.
The Wisconsin Supreme Court ruled against Loomis, adding that the Compas report brought valuable information to their decision, but qualified it by saying he would have received the same sentence without it. But how can we know that for sure? What sort of cognitive biases are involved when an all-powerful “smart” system like Compas suggests what a judge should do?
Now let’s be clear, there is nothing “illegal” about what the Wisconsin court did – it’s just a bad idea under the circumstances. Other courts are free to do the same.
Worryingly, we don’t actually know the extent to which AI and other algorithms are being used in sentencing. My own research indicates that several jurisdictions are “trialling” systems like Compas in closed trials, but that they cannot announce details of their partnerships or where and when they are being used. We also know that there are a number of AI startups that are competing to build similar systems.
However, the use of AI in law doesn’t start and end with sentencing; it starts at investigation. A system called VALCRI has already been developed to perform the labour-intensive aspects of a crime analyst’s job in mere seconds – wading through tonnes of data like texts, lab reports and police documents to highlight things that may warrant further investigation.
The UK’s West Midlands Police will be trialling VALCRI for the next three years using anonymised data – amounting to some 6.5m records. A similar trial is underway by the police in Antwerp, Belgium. However, past AI and deep learning projects involving massive data sets have been problematic.
Benefits for the few?
Technology has brought many benefits to the court room, ranging from photocopiers to DNA fingerprinting and sophisticated surveillance techniques. But that doesn’t mean any technology is an improvement.
While using AI in investigations and sentencing could potentially help save time and money, it raises some thorny issues. A report on Compas from ProPublica made clear that black defendants in Broward County, Florida “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism”. Recent work by Joanna Bryson, professor of computer science at the University of Bath, highlights that even the most “sophisticated” AIs can inherit the racial and gender biases of those who create them.
What’s more, what is the point of offloading decision making (at least in part) to an algorithm on matters that are uniquely human? Why do we go through the trouble of selecting juries composed of our peers? The standard in law has never been one of perfection, but rather the best that our abilities as mere humans allow us. We make mistakes but, over time, and with practice, we accumulate knowledge on how not to make them again – constantly refining the system.
What Compas, and systems like it, represent is the “black boxing” of the legal system. This must be resisted forcefully. Legal systems depend on continuity of information, transparency and ability to review. What we do not want as a society is a justice system that encourages a race to the bottom for AI startups to deliver products as quickly, cheaply and exclusively as possible. While some AI observers have seen this coming for years, it’s now here – and it’s a terrible idea.
An open source, reviewable version of Compas would be an improvement. However, we must ensure that we first raise standards in the justice system before we begin offloading responsibility to algorithms. AI should not just be an excuse not to invest.
While there is a lot of money to be made in AI, there is also a lot of genuine opportunity. It can change a lot for the better if we get it right, and ensure that its benefits accrue for all and don’t just entrench power at the top of the pyramid.
I have no perfect solutions for all these problems right now. But I do know that when it comes to the role of AI in law, we must ask in which contexts these systems are being used, for what purposes and with what meaningful oversight. Until those questions can be answered with certainty, be very, very sceptical. Or at the very least know some very good lawyers.
Football always divides opinion. As the latest English season draws to a close and the Football League playoffs take centre stage, there will be some who grumble about the format. They will say how “unfair” it is that a club can finish third in the league in the regular season, yet be denied promotion by a club that finished sixth after a late surge. Set that aside though, and you are left with the pure drama. It is win or bust, and prolongs the excitement of the regular season, giving more teams more to play for in a crescendo of late season fixtures.
The playoffs concept was borrowed from US team sports where this end-of-season competition is a regular feature, attracting huge media exposure and significant commercial interest. In England, for thirty years now, the playoffs have determined the final promotion spot within each division of the Football League. Four teams first try to get to the playoff final at Wembley stadium, then face a nerve-jangling 90 minutes or more to secure a step up the football pyramid.
The inspiration from US sports is important. Put aside the passion, excitement, disappointment and any sense of injustice for a moment. The playoffs can be of huge importance financially. A playoff victory can have the power to stabilise a club’s financial position, clear debts and allow significant investment in players. The pot of gold at the end of this rainbow has largely been filled with TV money. The most recent domestic deal was signed for £5.14 billion. Add in the international rights and this swells to £8.4 billion.
Lower down the leagues, the money on offer is not eye-watering. Our conservative estimates put the prize at around £500,000 for promotion from League Two to League One and around £7m for promotion from League One to the Championship. However, the prize on offer for promotion to the Premier League is staggering and has led to the Championship playoff final being labelled the “richest game in football” with a value of around £170m-£200m. Huddersfield, Reading, Fulham and Sheffield Wednesday are facing off for the jackpot this time around.
The often-quoted £200m figure is a little misleading as it takes into account so-called parachute payments which only kick in if a club is relegated the following season. Clubs will receive a minimum uplift of £120m though, which can be triple or quadruple their turnover. In fact, the chart below shows that when Bournemouth was promoted in 2015, the club saw a six-fold increase in revenue, essentially driven by additional broadcasting fees.
When the prize is so very shiny, straining to reach for it presents a strategic dilemma for clubs. The boost to revenue from promotion can stabilise a club financially, just like it did for Blackpool in 2010, helping it to (theoretically) secure a long-term future. In Blackpool’s case, however, on-field performance was destabilised and supporters became disenfranchised. Seven years later, Blackpool now hope to be promoted back to League One this season, via the playoffs.
Promotion can also increase the level of expectation and create pressure to retain a position in the world’s richest league. The club can get excited and the board can sanction acquisitions that fall outside a reasonable budget and seriously threaten the short and even long-term financial future of the club. This recalls the experience at Queens Park Rangers, which somehow accumulated £143m of losses despite generating about £250m in revenue during their stay in the Premier League. QPR managed to spend a startling £285m on wages and £114m on player purchases, while their level of debt surged to a peak of £194m.
Prepare to fail
The third option is to rein in your ambition, develop a strategic plan, grow incrementally and accept that you may become a yo-yo club like Burnley, or survive by the skin of your teeth like Stoke City.
Either way, the club builds a longer term future at the top table which benefits everyone. Survival through this approach means that a club receives at least another £120m so can build still further and become a stable Premiership club. But even failing and being relegated means a club will still have money to spend, receive a parachute payment (of another £45m or so) and spend a season in the Championship with turnover in excess of three times that of a standard team. This provides a significant competitive advantage over your rivals as Newcastle United showed this year – the Magpies spent big and gained promotion at the first attempt.
Ultimately, the direction of travel comes down to owner objectives, which can differ depending on their background and motivations. One thing is clear: spending beyond your means does not guarantee success.
The chart above allows us to examine a club’s transfer spending in the year following promotion. It is a confusing picture, but the red bars show those clubs which were relegated the following season, and demonstrate clearly that spending big is no guarantee of survival. The chart doesn’t show the starting point for each club in terms of player quality, but how the money is spent is plainly crucial, and the chart also shows that you can survive without throwing the kitchen sink at player acquisitions.
There is broader evidence that the most successful clubs, with the most money, do tend to outperform, but the trade-off between financial and sporting performance is hazardous. Many clubs now choose to chase multiple and escalating objectives: recall the devastating failure at Leeds United in 2003, when creditors were owed almost £100m after the club chased the dream of playing in the Champions League. You chase that dream at your peril is the warning; plan carefully, and spend wisely is the advice to your board. Relegation doesn’t have to be a trapdoor, but promotion can be a trap.
Forget looking through a telescope at the stars. An astronomer today is more likely to be online: digitally scheduling observations, running them remotely on a telescope in the desert, and downloading the results for analysis.
For many astronomers the first step in doing science is exploring this data computationally. It may sound like a buzzword, but data-driven science is part of a profound shift in fields like astronomy.
A 2015 report by the Australian Academy of Science found that among more than 500 professional astronomers in Australia, around one quarter of their research effort was now computational in nature. Yet many high school and university science, technology and engineering subjects still treat the necessary skills as second-class citizens.
Referring both to the modelling of the world through simulations and the exploration of observational data, computation is central not only to astronomy but to a range of sciences, including bioinformatics, computational linguistics and particle physics.
To prepare the next generation, we must develop new teaching methods that recognise data-driven and computational approaches as some of the primary tools of contemporary research.
The era of big data in science
The great empiricists of the 17th century believed that if we used our senses to collect as much data as possible, we would ultimately understand our world.
Although empirical science has a long history, there are some key differences between a traditional approach and the data-driven science we do today.
The change that has perhaps had the most impact is the sheer amount of data that computers can now collect. This has enabled a change in philosophy: data can be gathered to serve many projects rather than just one, and the way we explore and mine data allows us to “plan for serendipity”.
Take the search for new types of astronomical phenomena. Large data sets can yield unexpected results: some modern examples are the discovery of fast radio bursts by astronomer Duncan Lorimer and the discovery of plasma tubes in the Earth’s ionosphere by a former undergraduate student of mine, Cleo Loi. Both of these depended on mining of archival data sets that had been designed for a different purpose.
Many scientists now work collaboratively to design experiments that can serve many projects at once and test different hypotheses. For example, the book outlining the science case for the future Square Kilometre Array Telescope, to be built in South Africa and Australia, has 135 chapters contributed by 1,200 authors.
Our education system needs to change, too
Classic images of science include Albert Einstein writing down the equations of relativity, or Marie Curie discovering radium in her laboratory.
Our understanding of how science works is often formed in high school, where we learn about theory and experiment. We picture these twin pillars working together, with experimental scientists testing theories, and theorists developing new ways to explain empirical results.
Computation, however, is rarely mentioned, and so many key skills are left undeveloped.
To design unbiased experiments and select robust samples, for example, scientists need excellent statistical skills. But often this part of maths takes a back seat in university degrees. To ensure our data-driven experiments and explorations are rigorous, scientists need to know more than just high school statistics.
In fact, to solve problems in this era, scientists also need to develop computational thinking. It’s not just about coding, although that’s a good start. They need to think creatively about algorithms, and how to manage and mine data using sophisticated techniques such as machine learning.
Applying simple algorithms to massive data sets simply doesn’t work, even when you have the power of 10,000-core supercomputers. Switching to more sophisticated techniques from computer science, such as the kd-tree algorithm for matching astronomical objects, can speed up software by orders of magnitude.
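To give a feel for what this looks like in practice, here is a minimal sketch of a k-d tree cross-match between two small catalogues. The choice of tools (NumPy and SciPy’s cKDTree) and the toy coordinates are mine rather than anything specified above; the point is that the tree replaces a compare-everything-with-everything search with one that scales to millions of objects.

```python
# A minimal k-d tree cross-match sketch. Library choice (NumPy/SciPy) and
# the catalogue positions are illustrative assumptions, not from the article.
import numpy as np
from scipy.spatial import cKDTree

def radec_to_xyz(ra_deg, dec_deg):
    """Convert RA/Dec in degrees to unit vectors on the celestial sphere."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack((np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)))

# Two toy catalogues: (RA, Dec) positions in degrees.
cat1 = np.array([[10.0, -45.0], [10.2, -45.1], [150.5, 2.3]])
cat2 = np.array([[10.0001, -45.0002], [150.5002, 2.2999], [200.0, 30.0]])

tree = cKDTree(radec_to_xyz(cat2[:, 0], cat2[:, 1]))          # built once
dist, idx = tree.query(radec_to_xyz(cat1[:, 0], cat1[:, 1]))  # fast nearest-neighbour lookup

# Convert chord distance between unit vectors into an angular separation.
sep_arcsec = np.degrees(2 * np.arcsin(dist / 2)) * 3600
for i, (j, s) in enumerate(zip(idx, sep_arcsec)):
    print(f"object {i} in catalogue 1 -> object {j} in catalogue 2 ({s:.2f} arcsec)")
```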
Some steps are being taken in the right direction. Many universities are introducing courses and degrees in data science, incorporating statistics and computer science combined with science or business. For example, I recently launched an online course on data-driven astronomy, which aims to teach skills like data management and machine learning in the context of astronomy.
In schools the new Australian Curriculum in Digital Technologies makes coding and computational thinking part of the syllabus from Year 2. This will develop vital skills, but the next step is to integrate modern approaches directly into science classrooms.
Computation has been an important part of science for more than half a century, and the data explosion is making it even more central. By teaching computational thinking as part of science, we can ensure our students are prepared to make the next round of great discoveries.
The stereotype of the chess player is someone who is smart, logical and good at maths. This is why so many parents around the world are keen to get their children playing chess – in the hope that the game might help to boost their son or daughter’s intelligence levels and help them succeed in a wide variety of subjects.
In a previous article, we showed that chess players exhibit, on average, superior cognitive ability compared to non-chess players.
And the skills needed to play chess have also been shown to correlate with several measures of intelligence – such as fluid reasoning, memory, and processing speed.
But while the existence of a relationship between general cognitive ability and chess-skill is clear, is this simply because intelligent people are more likely to engage in the game of chess, or does engaging in chess make people smarter?
The notion that playing chess makes you smarter goes something like this: chess requires concentration and intelligence, and as mathematics and literacy require the same general skills, then practising chess must also improve one’s academic achievement.
With this idea in mind, the Institute of Education conducted a large investigation to test the effects of chess instruction on the academic skills of nearly 4,000 British children.
The recently released results were disappointing – it seemed chess had no effect on children’s attainment levels in mathematics, literacy, or science.
Promptly, the chess community questioned the reliability of the results, particularly given that other studies offer a more optimistic picture about the academic benefits of chess instruction.
Before the results were published, we carried out a review of all the studies in the field. Our results showed some moderate effects of chess instruction on cognitive ability and academic achievement – especially mathematics.
And yet, we still need to be cautious in interpreting these results as a positive indication of the power of chess on cognitive or academic skills. This is because most of the reviewed studies compared the effect of chess with groups doing no alternative activities.
This is a problem because research has shown that the excitement and fun induced by novel activities can cause a positive temporary effect on test scores – a placebo effect.
Crucially, when compared to an alternative activity – such as checkers or sports – chess did not show any significant effect on children’s skills. So, it could well just be that the observed positive effects of chess instruction are merely due to placebo effects.
What all this shows is that it is unlikely chess has a significant impact on overall cognitive ability. So while it might sound like a quick win – that a game of chess can improve a broad range of skills – unfortunately this is not the case.
The fact that skills learned by training do not transfer across different domains seems to be a universal in human cognition. In other words, you get better, at best, at what you train in – which may sound just like good old-fashioned common sense.
But although expecting chess to enhance children’s cognitive ability and overall academic achievement is just wishful thinking, this doesn’t mean it can’t still add value to a child’s education.
Clearly, playing chess involves some level of arithmetical and geometrical skill, and designing mathematical games or exercises with chess material can still be a simple and fun way to help children to learn.
Our own recent research speaks to how hard it may be for the public to see research as useful when produced with an industry partner, even when that company is just one of several collaborators.
What people think of funding sources
We asked our study volunteers what they thought about a proposed research partnership to study the potential risks related to either genetically modified foods or trans fats.
We randomly assigned participants to each evaluate one of 15 different research partnership arrangements – various combinations of scientists from a university, a government agency, a nongovernmental organization and a large food company.
For example, 1/15th of participants were asked to consider a research collaboration that included only university researchers. Another 1/15th of participants considered a research partnership that included both university and government scientists, and so on. In total we presented four conditions where there was a single type of researcher, another six collaborations with two partners, four with three partners and one with all four partners.
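Those 15 arrangements are simply every non-empty combination of the four partner types. A short sketch makes the count explicit; the labels are shorthand for the organisations described above.

```python
# Enumerate every non-empty combination of the four partner types.
# Labels are shorthand for the organisations described in the article.
from itertools import combinations

partners = ["university", "government agency", "NGO", "food company"]

arrangements = [combo
                for size in range(1, len(partners) + 1)
                for combo in combinations(partners, size)]

print(len(arrangements))        # 15: 4 singles, 6 pairs, 4 triples, 1 full set
for combo in arrangements:
    print(" + ".join(combo))
```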
When a research team included an industry partner, our participants were generally less likely to think the scientists would consider a full range of evidence and listen to different voices. An industry partner also reduced how much participants believed any resulting data would provide meaningful guidance for making decisions.
At the outset of our work, we thought including a diverse array of partners in a research collaboration might mitigate the negative perceptions that come with industry involvement. But, while including scientists from a nonindustry organization (particularly a nongovernmental organization) made some difference, the effect was small. Adding a government partner provided no substantive additional benefit.
When we asked participants to describe what they thought about the research partnership in their own words, they were skeptical whether an industry partner could ever be trusted to release information that might hurt its profits.
Our results may be even more troubling because we chose a company with a good reputation. We used pretests to select particular examples – of a corporation, as well as a university, government agency and nongovernmental organization – that had relatively high positive ratings and relatively low negative ratings in a test sample.
Can industry do valid science?
You don’t have to look far for real-life examples of poorly conducted or intentionally misleading industry research. The pharmaceutical, chemical, nutrition and petroleum industries have all weathered criticism of their research integrity, and for good reason. These ethically questionable episodes no doubt fuel public skepticism of industry research. Stories of pharmaceutical companies conducting less than rigorous clinical trials for the benefit of their marketing departments, or the tobacco industry steadfastly denying the connection between smoking and cancer in the face of mounting evidence, help explain public concern about industry-funded science.
Can this lack of trust be overcome? Moving forward, it will be essential to address incentives such as short-term profit or individual recognition that can encourage poor research – in any institutional context. By showing how quickly people may judge industry-funded research, our work indicates that it’s critical to think about how the results of that research can be communicated effectively.
Our results should worry those who want research to be evaluated largely on its scientific merits, rather than based upon the affiliations of those involved.
Although relatively little previous scholarship has investigated this topic, we expected to find that including multiple, nonindustry organizations in a scientific partnership might, at least partly, assuage participants’ concerns about industry involvement. This reflects our initial tentative belief that, given the resources and expertise within industry, there must be some way to create public-private partnerships that produce high-quality research which is perceived widely as such.
Our interdisciplinary team – a risk communication scholar, a sociologist, a philosopher of science, a historian of science and a toxicologist – is also examining philosophical arguments and historical precedents for guidance on these issues.
Looking forward, we are planning additional social scientific experiments to examine how specific procedures that research partnerships sometimes use may affect public views about collaborations with industry partners. For example, perhaps open-data policies, transparency initiatives or external reviewer processes may alleviate bias concerns.
Given the central role that industry plays in scientific research and development, it is important to explore strategies for designing multi-sector research collaborations that can generate legitimate, high-quality results while being perceived as legitimate by the public.
It seems obvious. You arrive at the checkouts and see one queue is much longer than the other, so you join the shorter one. But, before long, the people in the bigger line zoom past you and you’ve barely moved towards the exit.
When it comes to queuing, the intuitive choice is often not the fastest one. Why do queues feel like they slow down as soon as you join them? And is there a way to decide beforehand which line is really the best one to join? Mathematicians have been studying these questions for years. So can they help us spend less time waiting in line?
The intuitive strategy seems to be to join the shortest queue. After all, a short queue could indicate it has an efficient server, and a long queue could imply it has an inexperienced server or customers who need a lot of time. But generally this isn’t true.
Without the right information, it could even be disadvantageous to join the shortest queue. For example, if the short queue at the supermarket has two very full trolleys and the long queue has four relatively empty baskets, many people would actually join the longer queue. If the servers are equally efficient, the important quantity here is the number of total items in the queue, not the number of customers. But if the trolleys weren’t very full but the hand baskets were, it wouldn’t be so easy to estimate and the choice wouldn’t be so clear.
This simple example introduces the concept of service time distribution. This is a random variable that measures how long it will take a customer to be served. It contains information about the average (mean) service time and about the standard deviation from the mean, which represents how the service time fluctuates depending on how long different customers need.
The other important variable is how often customers join the queue (the arrival rate). This depends on the average amount of time that passes between two consecutive customers entering the shop. The more people that arrive to use a service at a specific time, the longer the queues will be.
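Returning to the supermarket example, a back-of-the-envelope sketch shows why the total work waiting in a queue, rather than the number of customers, is what matters. All of the numbers below are made up for illustration.

```python
# A toy comparison of the two supermarket queues described above.
# All numbers are assumptions chosen purely for illustration.
mean_seconds_per_item = 3.0     # assumed average time to scan one item
checkout_overhead = 30.0        # assumed fixed time per customer (paying, packing)

short_queue = [80, 95]          # two very full trolleys (items per customer)
long_queue = [10, 8, 12, 9]     # four relatively empty baskets

def work_ahead(items_per_customer):
    """Total expected service time for everyone already in the queue."""
    return sum(n * mean_seconds_per_item + checkout_overhead
               for n in items_per_customer)

print(f"short queue: ~{work_ahead(short_queue) / 60:.1f} minutes of work ahead")
print(f"long queue:  ~{work_ahead(long_queue) / 60:.1f} minutes of work ahead")
```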
Depending on what these variables are, the shortest queue might be the best one to join – or it might not. For example, in a fish and chip shop you might have two servers both taking orders and accepting money. Then it is most often better to join the shortest queue since the time the servers’ tasks take doesn’t vary much.
Unfortunately, in practice, it’s hard to know exactly what the relevant variables are when you enter a shop. So you can still only guess what the fastest queue to join will be, or rely on tricks of human psychology, such as joining the leftmost queue because most right-handed people automatically turn right.
Did you get it right?
Once you’re in the queue, you’ll want to know whether you made the right choice. For example, is your server the fastest? It is easy to observe the actual queue length and you can try to compare it to the average. This is directly related to the mean and standard deviation of the service time via something called the Pollaczek-Khinchine formula, first established in 1930. This also uses the mean inter-arrival time between customers.
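For the curious, here is a minimal sketch of the mean-value form of the Pollaczek-Khinchine result, under the standard textbook assumptions of a single server and random (Poisson) arrivals. The numbers are illustrative only, but they show how a more erratic server lengthens the average queue even when the mean service time stays the same.

```python
# Pollaczek-Khinchine mean queue length for a single-server queue with
# Poisson arrivals (M/G/1). Example numbers are illustrative assumptions.
def pk_mean_queue_length(arrival_rate, mean_service, std_service):
    """Average number of customers waiting (not counting the one being served)."""
    rho = arrival_rate * mean_service            # server utilisation, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable: arrivals outpace the server")
    cs2 = (std_service / mean_service) ** 2      # squared coefficient of variation
    return rho ** 2 * (1 + cs2) / (2 * (1 - rho))

# One arrival every 2 minutes on average, service averaging 1.5 minutes:
# a very steady server versus a highly variable one.
print(pk_mean_queue_length(arrival_rate=0.5, mean_service=1.5, std_service=0.1))  # ~1.13
print(pk_mean_queue_length(arrival_rate=0.5, mean_service=1.5, std_service=1.5))  # ~2.25
```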
Unfortunately, if you try to measure the time the first person in the queue takes to get served, you’ll likely end up feeling like you chose the wrong line. This is known as Feller’s paradox or the inspection paradox. Technically, this isn’t an actual logical paradox but it does go against our intuition. If you start measuring the time between customers when you join a queue, it is more likely that the first customer you see will take longer than average to be served. This will make you feel like you were unlucky and chose the wrong queue.
The inspection paradox works like this: suppose a bank offers two services. One service takes either zero or five minutes, with equal probability. The other service takes either ten or 20 minutes, again with equal probability. It is equally likely for a customer to choose either service and so the bank’s average service time is 8.75 minutes.
If you join the queue when a customer is in the middle of being served then their service can’t take zero minutes. They must be using either the five, ten or 20 minute service. This pushes the time that customer will take to be served to more than 11 minutes on average, more than the true average of 8.75 minutes. In fact, two out of three times you encounter this situation, the customer will want either the ten or 20 minute service. This will make it seem like the line is moving more slowly than it should, all because a customer is already there and you have extra information.
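The arithmetic of the bank example can be checked in a few lines; this is only a sketch, but the four equally likely service times are exactly the ones given above.

```python
# Numerical check of the bank example: four equally likely service times.
services = [0, 5, 10, 20]               # minutes, each with probability 1/4

true_average = sum(services) / len(services)
print(true_average)                     # 8.75 minutes

# Walking in mid-service rules out the zero-minute option, leaving
# 5, 10 or 20 minutes with equal chance.
in_progress = [t for t in services if t > 0]
conditional_average = sum(in_progress) / len(in_progress)
print(round(conditional_average, 2))    # 11.67 minutes; two of the three
                                        # possibilities are 10 or 20 minutes
```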
So while you can use maths to try to determine the fastest queue, in the absence of accurate data – and for your own peace of mind – you’re often better just taking a gamble and not looking at the other options once you’ve made your mind up.