The economics of self-service checkouts

Gary Mortimer, Queensland University of Technology and Paula Dootson, Queensland University of Technology

Self-checkouts in supermarkets are increasing as businesses battle to reduce costs and increase service efficiency. But looking at the numbers, it isn’t clear that self-service is an easy win for businesses.

Self-checkouts aren’t necessarily faster than other checkouts and don’t result in lower staff numbers, and they bring added costs such as theft, as well as reduced customer satisfaction and loyalty.

Worldwide, self-checkout terminals are projected to rise from 191,000 in 2013 to 325,000 by 2019. A survey of multiple countries found 90% of respondents had used self-checkouts, with Australia and Italy leading the way.

Employment in the Australian supermarket and grocery industry fell for the first time in 2015-16 and is projected to remain flat for a few years. But staff numbers are projected to rebound, in part because of the need to curtail growing theft at self-checkouts.

Social trends pushing self-checkout

There are a couple of intertwined trends that explain the rise of self-checkouts.

We now visit our supermarkets more frequently than ever before – two to three times per week, in fact. This means our baskets contain fewer items, and being able to wander up to a self-checkout with little to no wait time has become an expedient way to shop. Most shoppers consider self-checkouts both fast and easy to use, although this varies with age: 90% of shoppers aged 18-39 found self-service checkouts easy to use, while only 50% of those over 60 said the same.

Shoppers also gain value from taking control of the transaction – being able to ring up their own goods and pack them the way they want. A sense of control over their own shopping can lead to greater customer satisfaction and a stronger intention to use, and reuse, self-serve technology.

The numbers behind self-checkouts

Wages represent around 9.5% of supermarket revenue in Australia, and cutting this cost is one of the reasons proposed for the uptake of self-checkouts.

But from a business perspective, moving from “staffed” checkouts to self-serve machines isn’t cheap. A typical setup costs around US$125,000. On top of that come the costs of integrating the machines with the technology already in place – the software and other systems used to track inventory and sales – plus the ongoing costs of covering breakdowns and maintenance.

But the biggest direct cost to retailers of adopting self-service checkouts is theft. Retail crime in Australia costs the industry over A$4.5 billion each year.

There is reason to believe that rates of theft are higher at self-service machines than at regular checkouts. A study of 1 million transactions in the United Kingdom found that losses incurred through self-service technology payment systems totalled 3.97% of stock, compared with just 1.47% otherwise. Research shows that one of the drivers of this discrepancy is that everyday customers – those who would not normally steal by any other means – disproportionately steal at self-checkouts.
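To make the scale of that gap concrete, here is a minimal back-of-envelope sketch in Python. The A$10 million annual sales figure is purely hypothetical, chosen only to illustrate the arithmetic; the loss rates are those reported in the UK study above.

```python
# Back-of-envelope comparison of the shrinkage rates reported in the UK study.
# The annual sales figure is hypothetical and only illustrates the scale involved.

SELF_SERVICE_LOSS_RATE = 0.0397  # 3.97% of stock lost via self-service payment systems
STAFFED_LOSS_RATE = 0.0147       # 1.47% of stock lost otherwise

hypothetical_annual_sales_aud = 10_000_000  # assumed sales passing through the checkouts

self_service_loss = hypothetical_annual_sales_aud * SELF_SERVICE_LOSS_RATE
staffed_loss = hypothetical_annual_sales_aud * STAFFED_LOSS_RATE

print(f"Self-service losses:     A${self_service_loss:,.0f}")                      # A$397,000
print(f"Staffed-checkout losses: A${staffed_loss:,.0f}")                           # A$147,000
print(f"Extra shrinkage:         A${self_service_loss - staffed_loss:,.0f}")       # A$250,000
```

On these illustrative numbers, the self-service lanes would lose an extra quarter of a million dollars of stock a year.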

Studies also show that having a human presence around – in this case, employees in the self-checkout area – increases the perceived risk of being caught, which reduces “consumer deviance”. This is why retailers have been adding staff to monitor customers, absorbing the additional losses, or passing them on to customers as an “honesty tax”.

Making self-checkouts work

Preliminary work by researchers Kate Letheren and Paula Dootson suggests people are less likely to steal from a human employee than from an inanimate object – not only because they fear being caught, but because they feel bad about it.

On the other hand, consumers have plenty of justifications to excuse self-checkout theft, which is leading to its normalisation.

To counter this, researcher Paula Dootson is trying to use design to reduce deviance. One approach is extreme personalisation of service, to reduce customer anonymity – an undesirable by-product of removing employees and replacing them with technology.

Other ideas are to include moral reminders prior to the opportunity to lie or steal (such as simply reminding people to be honest), and to humanise the machines by encoding human characteristics to trigger empathy.

While self-service technologies will continue to be adopted by businesses broadly, and particularly within the retail sector, it will be important for retailers to take a holistic approach to implementation and loss prevention.

Self-service technology reduces front-line staffing costs and increases efficiency by redistributing displaced staff into other service-focused areas of the business, but it creates unintended costs. These costs can be direct, in the form of theft, but also indirect, such as reduced customer satisfaction and loyalty – something some supermarkets are now focusing on.

Gary Mortimer, Associate Professor, Queensland University of Technology and Paula Dootson, Research Fellow; PwC Chair in Digital Economy, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

From little Acorns – a brief history of computer games in the classroom

Rhys James Jones, Swansea University

Play has always been central to growing up – whether it’s in the street, on a playing field, or in the structured formality of teachers’ quizzes.

These days tablet computers are in nearly every pupil’s hands and children learn through computer games – both in the classroom and at home. Children’s coding initiatives and tiny computers such as the Raspberry Pi and the BBC’s micro:bit have also become big hits in the education world, helping to teach computer basics in playful ways.

But while it’s tempting to see the gamification of education as a new development, there is in fact a long history of children using computer games to help with their learning – which goes right back to the 1970s.

This was the decade during which computers first inched out of the research lab and into everyday life, making the idea of a home or personal computer somewhat closer to reality. In 1974, Ted Nelson, a US pioneer of information technology, wrote what is often considered “the first personal computer book” – Computer Lib/Dream Machines. It was in this book that, with uncanny foresight, Nelson suggested pupils of the future would use hyperlinked documents and touchscreens to widen their knowledge.

Away from Nelson’s speculation, the classroom reality was more mundane. Few schools could afford computers of their own. And for those that could, computer science meant punching instructions onto paper tape – a form of data storage consisting of a long strip of paper in which patterns of holes encode the data.

GCSE Computing – 1970s style.
The National Museum of Computing

But in the late 1970s something of a change happened – at least in the UK, where a worried government, concerned about Japanese innovation and the threat of automation, commissioned a report from the Manpower Services Commission (MSC) and the BBC to look into how to develop computer literacy initiatives.

Designed to raise computer awareness, these initiatives happily coincided with the rise of microprocessors, which were enabling the manufacture of cheaper, smaller machines. The BBC decided to invite UK companies to submit proposals for a microcomputer system built to a predefined specification. A system proposed by a young company in Cambridge was chosen, and Acorn’s BBC Microcomputer was born.

The BBC micro in all its glory.
Public Domain

Simpler and faster

The BBC Micro, along with some other machines, could be bought by schools at half price via government subsidies.

Their beige cases and red function keys became a familiar sight from primary through to university level. But they were still expensive: a discounted “Beeb” and monitor would cost a school more than £1,000 at today’s prices.

Learning to program was simpler and faster on the Beeb than in years past, with paper tape replaced by a monitor and a beginner’s coding language known as BASIC – which stands for “Beginner’s All-purpose Symbolic Instruction Code”. This meant many more students were able to engage with computer science.

The rise of the games

Then there were the games. Despite excellent arcade clones and some true originals, the cost of the unsubsidised Beeb made it difficult for it to gain a foothold as a home videogame system.

Perhaps its educational image didn’t help either – and maybe it never quite shook off what comedian Simon Munnery described as

The stench of school … most of the games would be … Isn’t Geography Nice?

The Beeb’s dominance in schools led to a torrent of educational software being released, of varying quality. Indeed, many of these early educational “games” aided nothing more than rote learning.

The stench of educational games.
Author provided

But educational initiatives helped push the boundaries, particularly in science and maths. The best-remembered games were imaginative, often programmed by teachers themselves, and learning happened by stealth. For example, the fondly recalled Granny’s Garden, while limited, took players on a puzzle-solving journey to avoid traps and witches – all rendered in teletext graphics.

Adventure was also central to L: A Mathemagical Journey, which used shades of Lewis Carroll to build players’ numeracy skills, while encouraging them to reflect on their experience.

Straddling home and school, Acorn’s software arm Acornsoft used surprisingly entertaining artificial intelligence techniques to test and extend subject-based learning. Younger, newly literate learners could encounter Podd, a floating head performing various actions at their typed command.

But in the 21st century, it’s not just learning, but the whole education system that has become a game – as schools, classes, teachers and students strive to top the board in league tables and PISA rankings. At the same time, teachers’ unions, children and parents all argue against excessive assessment and testing.

Maybe, then, we should all learn from the classroom videogame pioneers of the past few decades. Game-based learning clearly has a role to play in education, but there still needs to be a point to it – within a wider context.

And while educational games can be creative and innovative, they are at their best when they don’t smell too much of school.

Rhys James Jones, Senior Lecturer in Digital Media, Swansea University

This article was originally published on The Conversation. Read the original article.

A day in the life of a smart-city commuter – and why it’s not so far from reality

Marcin Budka, Bournemouth University

The alarm on your smartphone went off 10 minutes earlier than usual this morning. Parts of the city are closed off in preparation for a popular end-of-summer event, so congestion is expected to be worse than usual. You’ll need to catch an earlier bus to make it to work on time.

The alarm time is tailored to your morning routine, which is monitored every day by your smart watch. It takes into account the weather forecast (rain expected at 7am), the day of the week (it’s Monday, and traffic is always worse on a Monday), as well as the fact that you went to bed late last night (this morning, you’re likely to be slower than usual). The phone buzzes again – it’s time to leave, if you want to catch that bus.

While walking to the bus stop, your phone suggests a small detour – for some reason, the town square you usually stroll through is very crowded this morning. You pass your favourite coffee shop on your way, and although they have a 20% discount this morning, your phone doesn’t alert you – after all, you’re in a hurry.

After your morning walk, you feel fresh and energised. You check in at the Wi-Fi and Bluetooth-enabled bus stop, which updates the driver of the next bus. He now knows that there are 12 passengers waiting to be picked up, which means he should increase his speed slightly if possible, to give everyone time to board. The bus company is also notified, and is already deploying an extra bus to cope with the high demand along your route. While you wait, you notice a parent with two young children entertaining themselves with the touch-screen information system installed at the bus stop.

Bus stops of the future.
from www.shutterstock.com

Once the bus arrives, boarding goes smoothly: almost all passengers use tickets stored on their smartphones, so there is only one time-consuming cash payment. On the bus, you take out a tablet from your bag to catch up on some news and emails using the free on-board Wi-Fi service. You suddenly realise that you forgot to charge your phone, so you connect it to the USB charging point next to the seat. Although the traffic is really slow, you manage to get through most of your work emails, so the time on the bus is by no means wasted.

The moment the bus drops you off in front of your office, your boss informs you of an unplanned visit to a site, so you make a booking with a car-sharing scheme, such as Co-wheels. You secure a car for the journey, with a folding bike in the boot.

Your destination is in the middle of town, so when you arrive on the outskirts you park the shared car in a nearby parking bay (which is actually a member’s unused driveway) and take the bike for the rest of the journey to save time and avoid traffic. Your travel app gives you instructions via your Bluetooth headphones – it suggests how to adjust your speed on the bike, according to your fitness level. Because of your asthma, the app suggests a route that avoids a particularly polluted area.

Sick ride.
Mr.tinDC/Flickr, CC BY-NC-ND

After your meeting, you opt to get a cab back to the office, so that you can answer some emails on the way. With a tap on your smartphone, you order the cab, and in the two minutes it takes to arrive you fold up your bike so that you can return it to the boot of another shared vehicle near your office. You’re in a hurry, so no green reward points for walking today, I’m afraid – but at least you made it to the meeting on time, saving kilograms of CO2 on the way.

Get real

It may sound like fiction but, truth be told, most of the data required to make this day happen are already being collected in one form or another. Your smartphone is able to track your location, speed and even the type of activity you’re performing at any given time – whether you’re driving, walking or riding a bike.

Meanwhile, fitness trackers and smart watches can monitor your heart rate and physical activity. Your search history and behaviour on social media sites can reveal your interests, tastes and even intentions: for instance, the data created when you look at holiday offers online not only hints at where you want to go, but also when and how much you’re willing to pay for it.

Personal devices aside, the rise of the Internet of Things – with its distributed networks of all sorts of sensors, which can measure anything from air pollution to traffic intensity – is yet another source of data. Not to mention the constant feed of information available on social media about almost any topic you can think of.

With so much data available, it seems as though the picture of our environment is almost complete. But all of these datasets sit in separate systems that don’t interact, managed by different entities which don’t necessarily fancy sharing. So although the technology is already there, our data remains siloed with different organisations, and institutional obstacles stand in the way of attaining this level of service. Whether or not that’s a bad thing is up to you to decide.

Marcin Budka, Principal Academic in Data Science, Bournemouth University

This article was originally published on The Conversation. Read the original article.

The seven most extreme planets ever discovered

Christian Schroeder, University of Stirling

Scientists recently discovered the hottest planet ever found – with a surface temperature greater than some stars. As the hunt for planets outside our own solar system continues, we have discovered many other worlds with extreme features. And the ongoing exploration of our own solar system has revealed some pretty weird contenders, too. Here are seven of the most extreme.

The hottest

How hot a planet gets depends primarily on how close it is to its host star – and on how hot that star burns. In our own solar system, Mercury is the closest planet to the sun at a mean distance of 57,910,000km. Temperatures on its dayside reach about 430°C, while the sun itself has a surface temperature of 5,500°C.
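One common way to formalise that dependence is the equilibrium-temperature relation, here in its simplest form (ignoring atmospheric effects and assuming the planet absorbs all the light falling on it – a rough approximation, not a figure from the article):

\[
T_\mathrm{planet} \approx T_\ast \sqrt{\frac{R_\ast}{2a}},
\]

where \(T_\ast\) and \(R_\ast\) are the star’s surface temperature and radius and \(a\) is the planet’s orbital distance. Halving the distance to the star raises the planet’s temperature by a factor of about \(\sqrt{2}\).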

But stars more massive than the sun burn hotter. The star HD 195689 – also known as KELT-9 – is 2.5 times more massive than the sun and has a surface temperature of almost 10,000°C. Its planet, KELT-9b, is much closer to its host star than Mercury is to the sun.

Though we cannot measure the exact distance from afar, we know it circles its host star every 1.5 days (Mercury’s orbit takes 88 days). This proximity results in a whopping temperature of 4,300°C – hotter than many stars that are less massive than our sun. The rocky planet Mercury would be a molten droplet of lava at this temperature. KELT-9b, however, is a Jupiter-type gas giant. It is shrivelling away as the molecules in its atmosphere break down into their constituent atoms – and burn off.
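Even without a direct measurement, the 1.5-day period and the 2.5-solar-mass star quoted above allow a back-of-envelope Kepler’s-third-law estimate of the orbit (an approximation, not a value from the article):

\[
a = \left(\frac{G M_\ast P^2}{4\pi^2}\right)^{1/3} \approx \left(\frac{(6.7\times10^{-11})(5\times10^{30}\,\mathrm{kg})(1.3\times10^{5}\,\mathrm{s})^2}{4\pi^2}\right)^{1/3} \approx 5\times10^{9}\,\mathrm{m},
\]

or roughly 0.035 astronomical units – more than ten times closer to its star than Mercury is to the sun.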

The coldest

At a temperature of just 50 degrees above absolute zero (-223°C), OGLE-2005-BLG-390Lb snatches the title of the coldest planet. At about 5.5 times the Earth’s mass, it is likely to be a rocky planet too. It is not too distant from its host star, orbiting at a distance that would put it somewhere between Mars and Jupiter in our solar system, but its host star is a low-mass, cool star known as a red dwarf.

Freezing but Earth-like: OGLE-2005-BLG-390Lb.
ESO, CC BY-SA

The planet is popularly referred to as Hoth, in reference to an icy planet in the Star Wars franchise. Unlike its fictional counterpart, however, it won’t be able to sustain much of an atmosphere (nor life, for that matter). This is because most of its gases will be frozen solid – adding to the snow on the surface.

The biggest

If a planet can be as hot as a star, what then makes the difference between stars and planets? Stars are so much more massive than planets that they are ignited by fusion processes as a result of the huge gravitational forces in their cores. Common stars like our sun burn by fusing hydrogen into helium. But there is a type of object called a brown dwarf, which is big enough to start some fusion processes but not large enough to sustain them. The planet DENIS-P J082303.1-491201 b, with the equally unpronounceable alias 2MASS J08230313-4912012 b, has 28.5 times the mass of Jupiter – making it the most massive planet listed in NASA’s exoplanet archive. It is so massive that it is debated whether it is still a planet (it would be a Jupiter-class gas giant) or whether it should actually be classified as a brown dwarf star. Ironically, its host star is a confirmed brown dwarf itself.

The smallest

Just slightly larger than our moon and smaller than Mercury, Kepler-37b is the smallest exoplanet yet discovered. A rocky world, it is closer to its host star than Mercury is to the sun. That means the planet is too hot to support liquid water and hence life on its surface.

The oldest

PSR B1620-26 b, at 12.7 billion years old, is the oldest known planet. A gas giant 2.5 times the mass of Jupiter, it has seemingly been around forever. Our universe, at 13.8 billion years, is only a billion years older.

Artist’s impression of the biggest planet known.
NASA and G. Bacon (STScI)

PSR B1620-26 b has two host stars rotating around each other – and it has outlived them both. These are a neutron star and a white dwarf, the compact remnants left behind when stars have burned all their fuel (the neutron star’s progenitor ending its life in a supernova explosion). However, as the planet formed so early in the universe’s history, it probably doesn’t have enough of the heavy elements such as carbon and oxygen (which formed later) needed for life to evolve.

The youngest

The planetary system V830 Tauri is only two million years old. The host star has the same mass as our sun but twice the radius, which means it has not fully contracted into its final shape yet. The planet – a gas giant with three-quarters the mass of Jupiter – is likewise probably still growing. That means it is acquiring more mass by frequently colliding with other planetary bodies, like asteroids, in its path – making it an unsafe place to be.

The worst weather

Because exoplanets are too far away for us to observe any weather patterns, we have to turn our eyes back to our solar system. If you have seen the giant swirling hurricanes photographed by the Juno spacecraft flying over Jupiter’s poles, the largest planet in our solar system is certainly a good contender. However, the title goes to Venus. A planet the same size as Earth, it is shrouded in clouds of sulfuric acid.

The atmosphere moves around the planet much faster than the planet rotates, with winds reaching hurricane speeds of 360km/h. Double-eyed cyclones are sustained above each pole. Its atmosphere is almost 100 times denser than Earth’s and made up of over 95% carbon dioxide. The resulting greenhouse effect creates hellish temperatures of at least 462°C on the surface, which is actually hotter than Mercury. Though the planet is bone-dry and hostile to life, the heat may explain why Venus has fewer volcanoes than Earth.

Christian Schroeder, Lecturer in Environmental Science and Planetary Exploration, University of Stirling

This article was originally published on The Conversation. Read the original article.

Australia’s digital strategy needs major readjustment

Ron Johnston, University of Sydney

Australia ranks 15th out of 63 nations when it comes to digital competitiveness, according to a new report from the International Institute for Management Development (IMD). While we’re in the top 20, the result highlights serious structural flaws in our economy that will impact our future performance and living standards.

According to the IMD, Australia has also fallen four places to 21st in the world in economic competitiveness. On both scores, lead performers like Hong Kong, Switzerland and Singapore are very different from Australia, not just in their size or geography but because of a deep commitment to growing their competitiveness and technological capabilities.

Being 15th in digital competitiveness is worrisome. On most measures included in the score, Australia is steadily falling behind, and changing this trajectory will take time and commitment.

Are we digitally competitive?

IMD’s analysis of digital competitiveness is based on three (somewhat opaque) performance characteristics:

  • Knowledge: the capacity to understand and learn new technologies, which includes talent, training and education, and scientific performance,
  • The technology environment: encompassing regulatory and technological frameworks, and capital, and
  • Future readiness: based on adaptive attitudes, business agility and IT integration.

According to a summary of the IMD report by the Committee for Economic Development of Australia (CEDA, the official Australian partner for the yearbook), we have some areas of high comparative performance. These include the net flow of international students (in which we lead the world), e-participation and e-government (in which we rank second in each), and ease of starting a business (where we place fifth).

But by many other measures we are at the bottom of the pack. Australia rates 45th when it comes to digital and technological skills. There’s hardly been silence on this issue: the Australian Computer Society, among many others, has long emphasised the growing demand for IT skills in the labour market, and the need to enhance training.

Singapore is surpassing Australia when it comes to digital competitiveness.
Etienne Valois/Flickr, CC BY-NC-ND

In education, Australia has a global ranking of 51st, down 20 places since 2013. In my view, this is substantially due to two factors. The first is the telling ranking of 52nd for the pupil-teacher ratio in tertiary education, which raises questions about the adequacy of university funding.

The second is a very low level of employee training, where we rank 43rd. The National Centre for Vocational Education and Research (NCVER) has argued that the growth of casual employment, together with outsourcing, has had a significant impact on vocational education and training (VET) in the workplace. As it states,

There has been a shift in the balance of responsibility for VET in Australia. Employers using labour hire or outsourcing have tried to shift the burden of training onto the labour-hire firm or the outsourced service provider. However, these organisations are in turn trying to minimise any investment in training. At the same time the government’s role in direct provision of generalist and comprehensive trade and vocational training has declined in favour of support for a training market and user choice.

Given regular reports of the failings of Australia’s slow internet and broadband rollout, it comes as no surprise that Australia ranks 40th for internet bandwidth speed and 54th in communications technology. What chance for a “smart country” when we cannot invest in the necessary infrastructure?

Finally, despite the almost daily reports of cyber insecurity, and announcements of investment by government, our current ranking on cybersecurity is an alarming 40th. We clearly lag well behind most other countries in preparing for this new threat.

So what might be done?

Specific policies focused on these failings are not the answer. Australia’s innovation policy has suffered for years from fragmentation, short-term measures, changes of emphasis and an almost indecent desire to “clean the slate”.

Rather, as has been emphasised by the Academy of Technology and Engineering (ATSE), each of these elements needs to be seen as interconnected, and afforded support over many years. The ATSE has said,

Australia needs a suite of complementary measures to incentivise innovation which are delivered at sufficient scale, with sufficient funding, and with the long-term support and stability necessary to be effective.

The difference between Australia and Singapore is all too evident.

Its government-affiliated Committee on the Future Economy released a commendable seven-point national economic strategy in 2017. The group suggested, among other points, substantial measures to boost trade and investment through a “Global Innovation Alliance”, a requirement for companies to play a stronger role in developing their workers, and further building of digital capabilities.

In contrast, we have little problem taking on five-to-10-year projects to expand the housing supply and build roads, airports and dams, but seem to baulk at investment in what has become the biggest driver of economic competitiveness – the generation and application of knowledge.

The root cause of Australia’s continuing decline in competitiveness may well be what Ross Garnaut and others have labelled the country’s “great complacency” – the “she’ll be right” attitude that assumes because we have prospered in the past, it must inevitably continue.

Such critics will be proven correct if we continue to imagine our future wealth is a matter of providence, as opposed to welcoming major reform and investment in education.

Ron Johnston, Executive Director, Australian Centre for Innovation, University of Sydney

This article was originally published on The Conversation. Read the original article.

Playing a science-based video game? It might be all wrong

Alex Leith, Michigan State University

You look down from the sky, manipulating the world and seeing how it responds to your changes. You are able to alter vegetation and climate while watching their effects on the surrounding organisms. In this way, and many others, digital games provide excellent opportunities for players to learn about complicated subjects, including the concept of evolution through natural selection. Even games designed for fun and not specifically for education can provide rich, concise, dynamic representations of complex science, technology, engineering and math topics.

Since I was young, digital games have successfully supplemented the educational process in a range of topics, including math, science and biology. Research shows that if these games are going to actually teach those concepts, they must represent them accurately. Games that include incorrect depictions teach the wrong lessons.

Since Charles Darwin and Alfred Russel Wallace, evolution has been understood as a process based on genetic differences between individual organisms of the same species. There are three key principles, illustrated in the simulation sketch after this list:

  1. Organisms with genetic advantages for their environment are more likely to survive and reproduce, while organisms whose genes make their bodies less suited will die more quickly and reproduce less effectively.
  2. Because these characteristics are genetic, they are passed on to offspring.
  3. Organisms with genes that improve their survival will have more successful offspring, which will in turn pass on their advantages to the next generation.
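To make those three principles concrete, here is a minimal, hypothetical simulation sketch in Python. It is not based on any of the games discussed below – just a bare-bones illustration in which better-suited genes improve survival odds, survivors pass their genes on with small mutations, and the advantage accumulates across generations.

```python
import random

# Minimal natural-selection sketch: each organism's "gene" is a single number in [0, 1],
# and fitness is how close that number is to an environmental optimum. This is a
# bare-bones illustration of the three principles above, not a model of any real game.

ENV_OPTIMUM = 0.8       # the trait value best suited to this environment
POPULATION_SIZE = 100
GENERATIONS = 20

def survives(gene: float) -> bool:
    """Principle 1: genes better suited to the environment mean better odds of survival."""
    fitness = 1.0 - abs(gene - ENV_OPTIMUM)
    return random.random() < fitness

def offspring(gene: float) -> float:
    """Principle 2: offspring inherit the parent's gene, with a small random mutation."""
    return min(1.0, max(0.0, gene + random.gauss(0, 0.05)))

population = [random.random() for _ in range(POPULATION_SIZE)]

for generation in range(1, GENERATIONS + 1):
    survivors = [g for g in population if survives(g)] or population
    # Principle 3: survivors reproduce, so their advantages spread through the next generation.
    population = [offspring(random.choice(survivors)) for _ in range(POPULATION_SIZE)]
    mean_trait = sum(population) / len(population)
    print(f"Generation {generation}: mean trait = {mean_trait:.2f}")
```

Letting a player rescue organisms that fail the survival test – as several of the games discussed below effectively do – would break the first principle, which is exactly the kind of inaccuracy described in what follows.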

Some colleagues and I looked into how well current games could serve as educational tools, specifically about evolution. We examined how Darwinian evolution was represented in 22 games, which we located either through game databases like GameSpot or IGN, or through Google searches. Most games got evolution at least partly wrong. Only five accurately represented all three key principles of evolution.

The five that got it right

A screen-capture of a Norn interacting with its environment in ‘Creatures 2,’ developed by Creatures Labs and published by Mindscape.
Wikipedia

“Creatures” provides a rare example of the three principles. In that game, players create cartoon-like creatures called “norns,” through a process that allows norns to be altered not just in terms of appearance, but at the genetic level. For the most accurate representation of evolution, the game offers a play mode called “wolfling run.” In that mode, players cannot directly affect their norns, but can observe their relative fitness for a particular in-game scenario. The potential variations in both norn creation and the environment they must survive in provide for an astonishing number of evolutionary possibilities.

Maxis, best known for creating the “SimCity” game series, and its spinoff “The Sims” collection, also made a set of games called “SimEarth” and “SimLife.” Like “SimCity,” both give players top-down control of a world. “SimEarth” was designed for players to make major changes to the weather, landscape and animals to create an environment. Players were then able to see how the animals would fare in this created environment. “SimLife” was more specific: it has players engage with the animals (rather than merely creating them) to learn about the biology surrounding their survival.

A screen-capture of ‘Who Wants to Live a Million Years,’ playable on the Science Channel website.
from sciencechannel.com

We also found two academically oriented games that loosely presented the three mechanics of evolution: “Selection Game” and “Who Wants to Live a Million Years” (which was later renamed “Charles Darwin’s Game of Survival”). The two games were designed to be simple tools that could be played quickly in places like museums. Despite the limited mechanics present in such games, they still clearly show each element of the evolution process.

Market success doesn’t mean accuracy

The most commercially popular game we found didn’t quite get evolution right. “Spore” left out something many other games did, too: organisms’ genetic differences didn’t affect their survival rates. Organisms whose genes were unfit for the environment would not necessarily die more often, as evolutionary principles would dictate. Rather, players could intervene and increase an organism’s likelihood of success by, say, helping it move more intelligently and strategically, beyond the scope of its genetically predisposed movements.

Nevertheless, “Spore” does a reasonable job presenting the broader concept of evolution to players, and is the best such game made this century. (“Creatures,” “SimEarth,” and “SimLife” are all from the 1990s.) “Spore” is also still available for purchase, so it is the only game readily usable by the average educator or student.

But other findings were disappointing. Most games inaccurately portrayed evolution, usually in the same way “Spore” did – allowing player intervention to save organisms that were unfit for survival.

For these other games, evolution becomes more akin to mutation during a single organism’s life than to a process that occurs across generations. In “E.V.O.: Search for Eden” and “L.O.L.: Lack of Love,” players earn points they can spend to modify their organisms. In “Eco,” at the end of each level, the player arbitrarily changes an attribute, though not necessarily one that affects an organism’s survival prospects. In each of these cases, what the game calls “evolution” is actually external genetic manipulation, rather than the inheritance of particular traits.

These inaccuracies may confuse those unsure of what evolution actually is. If other scientific subjects are similarly poorly depicted in video games, the potential educational benefits of these games could be lost. However, as game designers become more adept at modeling scientific themes, it could herald an educational revolution.

Alex Leith, Doctoral Candidate in Media and Information Studies, Michigan State University

This article was originally published on The Conversation. Read the original article.

Mission to the sun will protect us from devastating solar storms and help us travel deeper into space

David Jess, Queen’s University Belfast

From prayer and sacrifice to sunbathing, humans have worshipped the sun since time immemorial. And it’s no wonder. At around 150 million km away, it is close enough to provide the light, heat and energy to sustain the entire human race. But despite the fact that our parent star has been studied extensively with modern telescopes – both from the ground and in space – there’s a lot we don’t know about it.

This is why NASA has recently announced plans to launch a revolutionary probe, set to lift off in 2018, that will literally touch it. Initially dubbed the Solar Probe Plus mission, the spacecraft has now been renamed the Parker Solar Probe, in honour of physicist Eugene Parker, who carried out important work on the solar wind – a stream of charged particles from the sun.

Helios 2 mission.
NASA

There have been many missions to investigate the sun. In 1976, the Helios 2 spacecraft came as close as 43 million km to the sun. But the $1.5 billion Parker probe will travel to just 6 million km above the solar surface – some nine times closer than any spacecraft has ever gone before. This will open a new era of understanding as, for the first time, sensors will be able to detect and analyse phenomena as they occur at the sun.

While the cruising altitude of the mission may sound like a safe distance at millions of kilometres, the sun’s immense energy will relentlessly bombard the payload with heat. An 11.5cm thick carbon composite shroud, similar to what modern Formula 1 race cars employ in their high-performance braking systems, will shield the sensitive equipment. This will be crucial as temperatures will soar beyond 1,400°C.

At these extreme temperatures, the solar arrays that power the spacecraft will retract. This manoeuvre will allow the instruments and power sources to remain close to room temperature in the shadow of the carbon composite shield. Just as well, as the spacecraft will experience radiation 475 times more intense than in Earth orbit.
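That figure squares with a simple inverse-square estimate. Taking the probe’s distance from the sun’s centre to be the 6 million km altitude quoted above plus the sun’s roughly 0.7 million km radius (the radius is an added assumption, not a figure from the article), the intensity relative to Earth’s orbit at about 150 million km is

\[
\frac{I_\mathrm{probe}}{I_\mathrm{Earth}} = \left(\frac{d_\mathrm{Earth}}{d_\mathrm{probe}}\right)^2 \approx \left(\frac{1.5\times10^{8}\,\mathrm{km}}{6.7\times10^{6}\,\mathrm{km}}\right)^2 \approx 500,
\]

which is in the same ballpark as the 475 times quoted above.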

Any errors in the planned spacecraft trajectories could result in the probe sinking deeper into the sun’s atmosphere, which is several million degrees hot. This could ultimately destroy the spacecraft.

Solar science

So what can we learn from this risky mission? The dynamic activity brought about by supercharged particles and radiation being released from the sun – encountering the Earth as they pass through the inner solar system – is called space weather. The consequences of space weather can be catastrophic, including the loss of satellite communications, changes to the orbits of spacecraft around Earth and damaging surges throughout global power grids. Most important is the risk to astronauts exposed to the powerful ionising radiation.

The devastating cost of such fierce electromagnetic storms has been estimated at $2 trillion, resulting in space weather being formally listed in the UK’s National Risk Register.

Parker probe.
NASA

The new solar probe will revolutionise our understanding of what conditions in the sun’s atmosphere are necessary to generate severe bouts of space weather, by making direct measurements of the magnetic fields, plasma densities and atmospheric temperatures for the first time. In a similar way to how an elastic band can snap after excessive stretching, it is believed that the continual twisting and churning of the magnetic field lines that permeate the solar atmosphere may give rise to particle acceleration and radiation bombardment. Once the magnetic fields break, we can experience severe space weather.

Unfortunately, we presently have no direct method of sampling the sun’s magnetic fields. Scientists are attempting to uncover new techniques that will allow the twists, strengths and directions of the sun’s powerful fields to be determined, but so far they can’t provide an accurate enough understanding. This is where the Parker probe will provide a new age of understanding, since it will be able to sample the sun’s powerful magnetic fields while there.

Round-the-clock observations and direct measurements of the atmospheric conditions responsible for increased levels of space weather are paramount in order to provide crucial warning of imminent solar threats. An instrument suite on-board the probe, the FIELDS suite, will provide such unprecedented information. Scientists can then feed this into intensive computer models, ultimately allowing space, aviation, power and telecommunication authorities to be alerted when potentially devastating space weather is imminent.

Of course, understanding the origins of space weather also has implications for other important areas of astrophysical research. It will allow space agencies to better protect astronauts during future manned missions to Mars, where the thinner Martian atmosphere offers little protection to incoming solar radiation.

Also, by being able to accurately model the effects of the streaming solar wind, future spacecraft will be able to effectively use solar sails to help them reach further into the depths of the solar system, perhaps eventually opening up the possibility of truly interstellar travel.

David Jess, Lecturer and STFC Ernest Rutherford Fellow, Queen’s University Belfast

This article was originally published on The Conversation. Read the original article.

Research transparency: 5 questions about open science answered

Elizabeth Gilbert, The Medical University of South Carolina and Katie Corker, Grand Valley State University

What is “open science”?

Open science is a set of practices designed to make scientific processes and results more transparent and accessible to people outside the research team. It includes making complete research materials, data and lab procedures freely available online to anyone. Many scientists are also proponents of open access, a parallel movement involving making research articles available to read without a subscription or access fee.

Why are researchers interested in open science? What problems does it aim to address?

Recent research finds that many published scientific findings might not be reliable. For example, researchers have reported being able to replicate only 40 percent or less of cancer biology results, and a large-scale attempt to replicate 100 recent psychology studies successfully reproduced fewer than half of the original results.

This has come to be called a “reproducibility crisis.” It’s pushed many scientists to look for ways to improve their research practices and increase study reliability. Practicing open science is one way to do so. When scientists share their underlying materials and data, other scientists can more easily evaluate those studies and attempt to replicate them.

Also, open science can help speed scientific discovery. When scientists share their materials and data, others can use and analyze them in new ways, potentially leading to new discoveries. Some journals are specifically dedicated to publishing data sets for reuse (Scientific Data; Journal of Open Psychology Data). A paper in the latter has already been cited 17 times in under three years – nearly all these citations represent new discoveries, sometimes on topics unrelated to the original research.

Wait – open science sounds just like the way I learned in school that science works. How can this be new?

Under the status quo, science is shared through a single vehicle: researchers publish journal articles summarizing their studies’ methods and results. The key word here is summary; to write a clear and succinct article, authors may omit important details. Journal articles are vetted via the peer review process, in which an editor and a few experts assess them for quality before publication. But – perhaps surprisingly – the primary data and materials underlying the article are almost never reviewed.

Historically, this made some sense because journal pages were limited, and storing and sharing materials and data were difficult. But with computers and the internet, it’s much easier to practice open science. It’s now feasible to store large quantities of information on personal computers, and online repositories to share study materials and data are becoming more common. Recently, some journals have even begun to require or reward open science practices like publicly posting materials and data.

Open science makes sharing data the default.
Bacho via Shutterstock.com

There are still some difficulties sharing extremely large data sets and physical materials (such as the specific liquid solutions a chemist might use), and some scientists might have good reasons to keep some information private (for instance, trade secrets or study participants’ personal information). But as time passes, more and more scientists will likely practice open science. And, in turn, science will improve.

Some do view the open science movement as a return to science’s core values. Most researchers over time have valued transparency as a key ingredient in evaluating the truth of a claim. Now with technology’s help it is much easier to share everything.

Why isn’t open science the default? What incentives work against open science practices?

Two major forces work against adoption of open science practices: habits and reward structures. First, most established researchers have been practicing closed science for years, even decades, and changing these old habits requires some upfront time and effort. Technology is helping speed this process of adopting open habits, but behavioral change is hard.

Second, scientists, like other humans, tend to repeat behaviors that are rewarded and avoid those that are punished. Journal editors have tended to favor publishing papers that tell a tidy story with perfectly clear results. This has led researchers to craft their papers to be free from blemish, omitting “failed” studies that don’t clearly support their theories. But real data are often messy, so being fully transparent can open up researchers to critique.

Additionally, some researchers are afraid of being “scooped” – they worry someone will steal their idea and publish first. Or they fear that others will unfairly benefit from using shared data or materials without putting in as much effort.

Taken together, these pressures mean some researchers worry they will be punished for their openness, and remain skeptical that the perceived extra workload of adopting open science habits is needed and worthwhile. We believe scientists must continue to develop systems to allay fears and reward openness.

I’m not a scientist; why should I care?

Open access is the cousin to open science – the idea is that research should be freely available to all, not hidden behind paywalls.
h_pampel, CC BY-SA

Science benefits everyone. If you’re reading this article now on a computer, or have ever benefited from an antibiotic, or kicked a bad habit following a psychologist’s advice, then you are a consumer of science. Open science (and its cousin, open access) means that anyone – including teachers, policymakers, journalists and other nonscientists – can access and evaluate study information.

Considering automatic enrollment in a 401k at work or whether to have that elective screening procedure at the doctor? Want to ensure your tax dollars are spent on policies and programs that actually work? Access to high-quality research evidence matters to you. Open materials and open data facilitate reuse of scientific products, increasing the value of every tax dollar invested. Improving science’s reliability and speed benefits us all.

Elizabeth Gilbert, Postdoctoral Research Fellow in Psychiatry and Behavioral Sciences, The Medical University of South Carolina and Katie Corker, Assistant Professor of Psychology, Grand Valley State University

This article was originally published on The Conversation. Read the original article.

Are esports the next major league sport?

Galen Clavio, Indiana University

In late 2016, a sports championship event was held in Chicago, drawing 43 million viewers during the series finals. That was 12 million more people than watched the 2016 NBA Finals.

It wasn’t soccer, or football, or even the World Series of Poker. Instead, it was the “League of Legends” World Finals, an esports competition.

Video games have been popular for more than 30 years, but competitive gaming, or esports, has recently emerged as a spectator activity that can draw thousands of attendees and viewers. Major sports networks such as ESPN, Fox Sports, MLB Advanced Media and the Big Ten Network have started broadcasting esports competitions, often partnering with major gaming companies like EA Sports, Riot and Blizzard. What is driving this phenomenon, and where is it taking us next?

Wide popularity

At first glance, the idea seems crazy, particularly to older consumers. Why would anyone want to watch other people playing video games? As a researcher focused on user experiences with social media, I have been watching the esports phenomenon develop over the last few years. My current work, with Matthew Zimmerman from Mississippi State University, looks at why users watch esports. Our preliminary findings suggest that esports spectators often play the games themselves, using the viewing process as a way to learn more about the games in question and improve their own skills as players.

In addition, many spectators take genuine pleasure in watching others play, finding the competitive culture immersive and experiencing watching esports very similarly to how they watch traditional sports.

Esports viewing has increased markedly over the past few years: The global market grew to US$696 million in 2016, and may exceed $1 billion by 2019. Media payments for rights to cover the events total nearly $100 million of that; consumers are paying $64 million for event tickets and merchandise. Most of the rest comes from advertising and sponsorship spending. The combined markets of China and North America account for more than half of global esports revenues.

A key attraction of esports is that regular people can play the very same games as the esports stars, often in real-time multiplayer tournaments. Millions of people play “Overwatch,” “League of Legends” and “Dota 2” in their own homes, and many of them participate in collaborative games and battles on communal video game servers or networks such as Steam. Familiar with the games, eager to learn new techniques and excited to celebrate expertise, these at-home players are very interested in watching top-level players in action.

Game 1 of the Grand Finals for the 2016 League of Legends World Championships.

Sean Morrison, a digital media associate for ESPN who specializes in esports coverage, told me he isn’t surprised by the surge in esports attention.

“I think the growth of esports is a generational shift more so than people suddenly becoming interested in video games,” says Morrison. “This generation of teens grew up on YouTube, watching streams, communing on internet forums – you name it. And esports is big business, too; it’s natural that people would wonder what the big deal is. All the hype kind of fuels itself, and that, combined with how many people have now grown up with this as a form of normal entertainment, has made it so big.”

Michael Sherman, college esports lead for Riot Games, the makers of “League of Legends” and other games, agrees.

“Watching video games is a very social behavior. Now you as a spectator have an opportunity to see the best people play. Aspirationally, you watch and say ‘I want to do that,’” Sherman said to me. “It’s different from traditional sports like the NFL. I don’t watch football and go outside and throw the ball around. In esports, a lot of people watch and then they go play.”

An easy daily fix

While large sports media properties such as ESPN and the Big Ten Network have staked out territory in the esports world, many spectators get their daily fix from Twitch.tv, a personal streaming service that specializes in video game streams. Twitch allows users to broadcast their own gameplay, while also hosting esports competitions and other video game shows. The service, which was purchased by Amazon for almost $1 billion in 2014, has helped esports to grow by allowing gamers and viewers to connect directly with each other.

Twitch capitalizes on the very familiar practice of communal game watching. Over time, many video gamers have gotten used to watching others playing games while waiting for their turn with the controller. Twitch globalizes that experience, and – just as friends together in front of a TV can comment on each other’s play – lets viewers and the player interact directly online.

This is a boost beyond what many games allow. It’s quite common for games to have online components where players can take on opponents from anywhere in the world. But only on Twitch and similar esports platforms can nonplayers watch the action. Twitch’s elite gamers generated $60 million in subscriptions and advertising revenue in 2015 alone, per a CNBC report.

League and game growth

The interest and money have been encouraging the adaptation of games into leagues and sanctioned sports for years. Worldwide leagues exist for “Call of Duty,” “FIFA,” “Overwatch” and “Halo.”

The final match of the 2016 FIFA Interactive World Cup.

Lately, college teams have been getting in on the action. “The biggest development has been universities adopting ‘League of Legends’ as a sport,” says Riot’s Sherman. “In 2014, Robert Morris University was the first school to launch a varsity program. Now we’re up to about 25 schools.”

University-based teams allow several important elements of sport organization to coalesce in the esports marketplace. These teams bring together young, enthusiastic gamers who are good enough to be competitive internationally and institutions of higher learning that are keen to use the marketing potential of a rapidly developing sport to spread their brand. Esports have existed for many years outside of the official university environment, but official sanctioning by universities could help to boost the visibility of esports, as well as the games played in competitions.

“We announced in August that we had 100 million active monthly players globally for ‘League of Legends,’” Sherman says. “That was up from 64 million two years prior.”

While “League of Legends” continues to expand on the collegiate level, “Overwatch” has an eye on further changing the esports marketplace.

The Overwatch League, likely to launch in 2018, looks to have existing sports franchises in major cities across the globe own esports teams as well. The game designer, Blizzard, wants to create fan interest based on geographical and cultural relevance. The Overwatch League would also include regular broadcasts of matches on both TV and internet-based channels, as well as player contracts.

ESPN’s Morrison expects the Overwatch League model to help spur on esports spectator base growth. “‘Overwatch’ is going to blow up in the next couple of years,” he said. “Between the Overwatch League, which is going to be more like traditional sports than any league before it, and the number of competitive series popping up within it, ‘Overwatch’ will likely become the number one esports title before long. Multiplayer online battle arena (MOBA) games like ‘League of Legends’ have long been the center of the esports universe, but games like ‘Overwatch’ that combine MOBA elements in hero choice with faster-paced gameplay are becoming a mainstay.”

The sports media landscape continues to change, and esports seem to be a natural evolution of that process. Competitive video gaming was hard to conceive of 20 years ago, and even harder to conceive of as a spectator sport. But broadband internet, online video, social media and shared gaming experiences have taken esports to the brink of worldwide acceptance as a legitimate form of consumer entertainment. The next five years promise to be fascinating to watch – or to play.

Galen Clavio, Associate Professor of Sports Media; Director of the National Sports Journalism Center, Indiana University

This article was originally published on The Conversation. Read the original article.

Could cold spot in the sky be a bruise from a collision with a parallel universe?

Ivan Baldry, Liverpool John Moores University

Scientists have long tried to explain the origin of a mysterious, large and anomalously cold region of the sky. In 2015, they came close to figuring it out as a study showed it to be a “supervoid” in which the density of galaxies is much lower than it is in the rest of the universe. However, other studies haven’t managed to replicate the result.

Now new research led by Durham University, submitted for publication in the Monthly Notices of the Royal Astronomical Society, suggests the supervoid theory doesn’t hold up. Intriguingly, that leaves open a pretty wild possibility – the cold spot might be the evidence of a collision with a parallel universe. But before you get too excited, let’s look at how likely that would actually be.

The cold spot can be seen in maps of the “cosmic microwave background” (CMB), which is the radiation left over from the birth of the universe. The CMB is like a photograph of what the universe looked like when it was 380,000 years old and had a temperature of 3,000 kelvin. What we find is that it is very smooth, with temperature deviations of less than one part in 10,000. These deviations can be explained pretty well by our models of how the hot universe evolved up to an age of 380,000 years.

CMB as observed by Planck.
ESA and the Planck Collaboration, CC BY-SA

However, the cold spot is harder to work out. It is an area of the sky about five degrees across that is colder by one part in 18,000. A deviation of that size would be readily expected for areas covering about one degree – but not five. The CMB should look much smoother on such large scales.
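To put “one part in 18,000” into more familiar units: the CMB’s present-day temperature is about 2.7 kelvin (a standard value, not quoted above), so the cold spot’s deficit amounts to roughly

\[
\Delta T \approx \frac{2.7\,\mathrm{K}}{18{,}000} \approx 1.5\times10^{-4}\,\mathrm{K} \approx 150\,\mu\mathrm{K},
\]

a shortfall of about 150 millionths of a degree.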

The power of galaxy data

So what caused it? There are two main possibilities. One is that it could be caused by a supervoid that the light has travelled through. But it could also be a genuine cold region from the early universe. The authors of the new research tried to find out by comparing new data on galaxies around the cold spot with data from a different region of the sky. The new data was obtained by the Anglo-Australian Telescope, the other by the GAMA survey.

The GAMA survey, like other surveys of its kind, takes the “spectra” of thousands of galaxies. Spectra are images of light captured from a galaxy and spread out according to its wavelengths. This provides a pattern of lines emitted by the different elements in the galaxy. The further away the galaxy is, the more the expansion of the universe shifts these lines to appear at longer wavelengths than they would on Earth. The size of this so-called “redshift” therefore gives the distance to the galaxy. Spectra coupled with positions on the sky can give us 3D maps of galaxy distributions.
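In equation form, the redshift and the (small-redshift) distance estimate work out as follows; the numbers plugged in are purely illustrative, not taken from the surveys discussed here:

\[
z = \frac{\lambda_\mathrm{observed} - \lambda_\mathrm{emitted}}{\lambda_\mathrm{emitted}}, \qquad d \approx \frac{cz}{H_0} \quad (z \ll 1).
\]

So a galaxy whose lines are shifted by 2.3% (z = 0.023) would, taking a Hubble constant of about 70 km/s per megaparsec, lie at roughly (300,000 km/s × 0.023)/70 ≈ 100 megaparsecs – a little over 300 million light years away.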

But the researchers concluded that there simply isn’t a large enough void of galaxies to explain the cold spot – there was nothing too special about the galaxy distribution in front of the cold spot compared to elsewhere.

So if the cold spot is not caused by a supervoid, it must be that there was a genuinely large cold region that the CMB light came from. But what could that be? One of the more exotic explanations is that there was a collision between universes in a very early phase.

Controversial interpretation

The idea that we live in a “multiverse” made up of an infinite number of parallel universes has long been considered a possibility. But physicists still disagree about whether it could represent a physical reality or whether it’s just a mathematical quirk. It is a consequence of important theories like quantum mechanics, string theory and inflation.

Quantum mechanics oddly states that any particle can exist in “superposition” – which means it can be in many different states simultaneously (such as locations). This sounds bizarre, but it has been observed in laboratories. For example, electrons can travel through two slits at the same time – when we are not watching. But the minute we observe each slit to catch this behaviour, the particle chooses just one. That is why, in the famous “Schrödinger’s cat” thought experiment, an animal can be alive and dead at the same time.

But how can we live with such strange implications? One way to interpret it is to choose to accept that all possibilities are true, but that they exist in different universes.

Miaowtiverse.
Robert Couse-Baker/Flickr, CC BY-SA

So, if there is mathematical backing for the existence of parallel universes, is it so crazy to think that the cold spot is an imprint of a colliding universe? Actually, it is extremely unlikely.

There is no particular reason why we should just now be seeing the imprint of a colliding universe. From what we know about how the universe formed so far, it seems likely that it is much larger than what we can observe. So even if there are parallel universes and we had collided with one of them – unlikely in itself – the chances that we’d be able to see it in the part of the universe that we happen to be able to observe on the sky are staggeringly small.

The paper also notes that a cold region of this size could occur by chance within our standard model of cosmology – with a 1%-2% likelihood. While that does make it unlikely too, it is based on a model that has been well tested, so we cannot rule it out just yet. Another potential explanation lies in the natural fluctuations in mass density which give rise to the CMB temperature fluctuations. We know these exist on all scales, but they tend to get smaller toward large scales, which means they may not be able to create a cold region as big as the cold spot. But this may simply mean that we have to rethink how such fluctuations are created.

It seems that the cold spot in the sky will continue to be a mystery for some time. Although many of the explanations out there seem unlikely, we don’t necessarily have to dismiss them as pure fantasy. And even if it takes time to find out, we should still revel in how far cosmology has come in the last 20 years. There’s now a detailed theory explaining, for the most part, the glorious temperature maps of the CMB and the cosmic web of galaxies that spans billions of light years.

Ivan Baldry, Professor of Astrophysics, Liverpool John Moores University

This article was originally published on The Conversation. Read the original article.