Monthly Archives: June 2017

Explainer: how law enforcement decodes your photos

Richard Matthews, University of Adelaide

For as long as humans have been making images, we have also been manipulating them.

Complex darkroom techniques were once required to modify images but now anyone with a smartphone can apply hundreds of changes using freely available tools.

While this may be convenient for your Instagram feed, it presents a unique challenge for law enforcement. Images cannot always be trusted as an accurate depiction of what occurred.

For example, I recently analysed several photos for the RSPCA showing a duck with a knife embedded in its head to determine if they were photoshopped.

Authorities are increasingly asking for images to be verified by forensic experts. But how is this done, and where is the field headed?

The image pipeline

Analysts currently rely on knowledge of the “image pipeline” to inspect and validate images.

This pipeline is often broken down into six key areas:

  1. Physics: shadows, lighting and reflections
  2. Geometry: vanishing points, distances within the image and 3D models
  3. Optical: lens distortion or aberrations
  4. Image Sensor: fixed pattern noise and colour filter defects
  5. File format: metadata, file compression, thumbnails and markers
  6. Pixel: scaling, cropping, cloning or resaving
The image pipeline with each section numerically allocated based on features that investigators are likely to analyse.
Richard Matthews, Author provided

It is often the unseen that begins our investigation rather than the seen. Here we’ll be focusing on the metadata captured in images (level 5 in the schema above).

File format forensics: metadata

When an image is saved, the file typically contains data about the image, known as metadata.

There are more than 460 metadata tags within the exchangeable image file format for digital still cameras (EXIF 2.3). This specification helps cameras use formats that can be exchanged between devices – for example, ensuring an iPhone photo appears correctly on a Samsung device.

Tags can include image size, location data, a smaller thumbnail of the image and even the make and model of the camera.

Determining which camera took what photo

In a recent investigation, we were able to validate a group of images known as the Byethorne duck.

The images supplied by the RSPCA to The Advertiser showed a duck with a knife impaled into its head. Accusations soon emerged that the image was photoshopped.

We inspected the images using Phil Harvey’s ExifTool and were able to determine that four of the images (left above) were taken by one camera, with the remainder taken by another.

This was verified using sensor pattern noise and statistical methods. We extracted a unique fingerprint from each image using signal processing filters and compared how similar they were to one another.

A high value indicates they are very similar and probably correlated, while a low value indicates that they are dissimilar and unlikely to be correlated.

When we compared four of the five image fingerprints, we obtained values well above 2,000, indicating the fingerprints are correlated and the images likely came from the same camera.

When we tested the fifth image, the similarity value we obtained was close to zero.
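The fingerprint comparison can be sketched in Python. This is not the tool used in the investigation: real analyses use purpose-built PRNU (photo-response non-uniformity) denoising filters, whereas this sketch stands in a crude mean filter and a plain normalised correlation, and all function names are my own.

```python
import numpy as np

def noise_residual(img, k=3):
    """Estimate sensor noise by subtracting a locally smoothed copy of the
    image (a crude stand-in for the denoising filters used in real PRNU
    analysis); what survives is dominated by high-frequency sensor noise."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    smooth = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smooth /= k * k
    return img - smooth

def similarity(a, b):
    """Normalised correlation between two noise residuals: values near 1
    suggest the same sensor, values near 0 suggest different sensors."""
    ra, rb = noise_residual(a).ravel(), noise_residual(b).ravel()
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb) + 1e-12))
```

On synthetic images sharing the same fixed-pattern noise, `similarity` returns a clearly higher score than for images with independent noise, mirroring the high-versus-near-zero split described above (the 2,000-plus figures in the investigation come from a different, unnormalised statistic).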

Metadata of one of the images from the left above and the image from the right. These show two separate unique image identifiers which were correlated to a phone camera firmware.
Richard Matthews, Author provided

The unique image ID field also contained the camera firmware number. By cross referencing with image and sensor size also contained in the metadata, we suggested that either a Samsung Galaxy S7 or S7 Edge was used to capture the first four images and a Samsung Galaxy S5 was used to capture the fifth.

The metadata also recorded when each image was taken, allowing a timeline to emerge of when the images were captured and by whom.

Time the photos were taken and by which camera.
Richard Matthews, Author provided

Since the photos were taken by two different cameras across the span of around one hour, it is highly unlikely the images were fake.

An RSPCA spokesperson confirmed it received images of the duck from two separate people, which aligns with these findings. To date, there has been insufficient evidence to determine the identity of a perpetrator.

Finding a person’s location from an image

The camera model isn’t the only thing that can be obtained from metadata.

A selection of books on the author’s desk.
Richard Matthews, Author provided

We can see where my office is located by analysing this image of books taken at my desk.

Metadata viewed in ExifTool for the bookshelf image.
Richard Matthews, Author provided

The GPS coordinates are embedded directly in the image metadata. By placing these coordinates into Google Maps, the exact location of my office is displayed.
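EXIF stores each GPS coordinate as degrees, minutes and seconds plus a hemisphere reference ("N"/"S" or "E"/"W"), so a small conversion to signed decimal degrees is needed before pasting the values into Google Maps. A minimal sketch; the function name is illustrative, not from any particular library:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style (degrees, minutes, seconds) GPS value plus its
    hemisphere reference into signed decimal degrees, the form that map
    services such as Google Maps accept."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value
```

For example, an EXIF latitude of 34° 55' 12" S converts to -34.92.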

The precise location of Richard’s office at the University of Adelaide is discovered using the metadata contained within the bookshelf photo.
Richard Matthews, Author provided

This obvious privacy concern is why Facebook, for example, typically removes metadata from uploaded images.

According to a Facebook spokesperson, information including GPS data is automatically removed from photos uploaded onto the platform to protect people “from accidentally sharing private information, such as their location”.

The future of image forensics

Metadata is never used in isolation.

Authenticating an image to ensure it hasn’t been modified and upholding the chain of custody – the paper trail or provenance documentation that goes along with a piece of evidence – is increasingly important to police.

In the future, tools to assist police with this could include audit logs built directly into the camera, or the insertion of a watermark.

I am currently expanding on previous research that suggests each image sensor (the electronic device that actually takes the image) has a unique fingerprint due to the way it reacts non-uniformly to light.

Next time you take a photo, just think about the story it could tell.

But what happened to the duck? A spokesperson at the RSPCA said:

We believe the knife may have dislodged shortly after the photos were taken. A duck believed to be the same duck in the photograph has been viewed swimming and behaving normally in the days after, giving us the belief that the knife did not penetrate deeply enough to cause significant injury.

Richard Matthews, PhD Candidate, University of Adelaide

This article was originally published on The Conversation. Read the original article.

New evidence that volcanic eruptions triggered the dawn of the dinosaurs

Tamsin Mather, University of Oxford and Lawrence Percival, University of Oxford

The dinosaurs may have volcanoes to thank for their domination of the planet, at least according to one theory. Most scientists think that a severe bout of volcanic activity 200m years ago may have led to the mass extinction that cleared the way for the dinosaurs’ rise. Now we – with a team of colleagues – have discovered new evidence that strengthens this idea: a global geological “fingerprint” indicating volcanic gases were affecting the whole world at the time of the extinction.

Geologists have previously discovered that the Earth’s crust hosts massive amounts of volcanic rock from the end of the Triassic period, 200m years ago. We know from the fossil record that, at about the same time, a very large proportion of Earth’s species died out, which made space for the remaining dinosaurs (and other species) to flourish. As volcanoes can produce large amounts of carbon dioxide (CO2), it’s possible that the volcanic activity that left these massive lava flows behind also provoked global climate change that led to this mass extinction.

What was missing was evidence that the volcanic activity really had such a worldwide impact. By examining geological records from all over the world, we discovered that large amounts of mercury were released into the atmosphere at around the same time as the extinction. As mercury is also released by volcanoes, this suggests the volcanic eruptions really were severe enough to affect the whole world and potentially cause the mass extinction.

The Central Atlantic Magmatic Province (CAMP).
Williamborg/Wikimedia, CC BY-SA

The volcanic rocks cover a huge area, across four present-day continents. They are the remains of a huge episode of heightened volcanic activity, known as the Central Atlantic Magmatic Province (CAMP), that lasted about a million years.

Previous studies have shown that this volcanism might have occurred in pulses. But we didn’t know how the timing and frequency of these emissions compared to the timing of the extinction event and the subsequent recovery of life. Or whether the volcanoes had a worldwide effect. So we decided to look for a “fingerprint” of the eruptions in the same kind of sediments that record the mass extinction.

Mercury marker

Modern volcanoes emit large quantities of gases, most famously sulphur dioxide and CO2, but also trace amounts of the metal mercury. This mercury can stay in the atmosphere for between six months and two years, meaning it can be distributed around the world before eventually being deposited in sediments at the bottom of lakes, rivers and seas.

These same sediments record evidence of bouts of climate change and mass extinction. So, if a sediment layer that records a mass extinction also features unusually high mercury concentrations, we can deduce that volcanic activity likely coincided with (and maybe caused) that extinction.

Working with colleagues from the universities of Exeter and Southampton, we investigated six sedimentary records of the end-Triassic extinction for mercury concentrations. These records were from the UK, Austria, Argentina, Greenland, Canada and Morocco. This spread over four continents and both hemispheres gave us global insight into the impact of volcanic gas emissions during the end-Triassic mass extinction.

Emissions culprit.

Volcanic link

We found that five of the six records showed a large increase in mercury content beginning at the end of the Triassic period, with a distinct spike in mercury at the layer corresponding to the extinction itself. The extinction layer in the Morocco sample also overlaps with the volcanic rocks from the CAMP. This meant we could tie this large emission of mercury into the global atmosphere to a specific volcanic event, even though the eruption was around 200m years ago.

What’s more, this evidence reinforces the conclusion that mercury spikes found elsewhere in the geological record were caused by volcanic activity. We found other mercury peaks between the extinction layer and the layer that marked the start of the Jurassic period, which occurred approximately 100,000 to 200,000 years later. This suggests that multiple episodes of tremendous volcanic activity took place during and immediately after the end-Triassic extinction.

More importantly, we were able to show the elevated mercury emissions matched previously established increases in the amount of CO2 in the atmosphere. This strongly supports the theory that the CO2 emissions thought to cause the end-Triassic extinction came from volcanoes.

This link between increased atmospheric mercury and CO2 at the same time as the end-Triassic extinction offers fundamental insights into some of the factors governing the evolution of life on our planet. And, from a geological point of view, it highlights the potential of mercury to help explain other extinction events in Earth’s history.

Tamsin Mather, Professor of Earth Sciences, University of Oxford and Lawrence Percival, PhD Candidate, Department of Earth Sciences, University of Oxford

This article was originally published on The Conversation. Read the original article.

Elon Musk releases details of plan to colonise Mars – here’s what a planetary expert thinks

Andrew Coates, UCL

Elon Musk, the founder of SpaceX and Tesla, has released new details of his vision to colonise parts of the solar system, including Mars, Jupiter’s moon Europa and Saturn’s moon Enceladus. His gung ho plans – designed to make humans a multi-planetary species in case civilisation collapses – include launching flights to Mars as early as 2023.

The details, just published in the journal New Space, are certainly ambitious. But are they realistic? As someone who works on solar system exploration, and the European Space Agency’s new Mars rover in particular, I find them incredible in several ways.

First of all, let’s not dismiss Musk as a Silicon Valley daydreamer. He has had tremendous success with rocket launches to space already. His paper proposes several interesting ways of trying to get to Mars and beyond – and he aims to build a “self-sustaining city” on the red planet.

Musk outlining initial plans in 2016.

The idea depends on getting cheaper access to space – the paper says the cost of trips to Mars must be lowered by “five million percent”. An important part of this will be reusable space technology. This is an excellent idea that Musk is already putting into practice with impressive landings of rocket stages back on Earth – undoubtedly a huge technological step.

He also proposes making fuel on Mars and at stations beyond it, to keep the costs feasible. Experiments towards this are underway, demonstrating that choosing the right propellant is key. The MOXIE experiment on NASA’s Mars 2020 rover will investigate whether we can produce oxygen from atmospheric CO2 on Mars. This may be possible. But Musk would like to make methane as well – it would be cheaper and more reusable. This is a tricky reaction that requires a lot of energy.

Yet, so far, it’s all fairly doable. But the plans then get more and more incredible. Musk wants to launch enormous spaceships into orbit around Earth where they will be refuelled several times using boosters launched from the ground while waiting to head to Mars. Each will be designed to take 100 people and Musk wants to launch 1,000 such ships in the space of 40 to 100 years, enabling a million people to leave Earth.

There would also be interplanetary fuel-filling stations on bodies such as Enceladus, Europa and even Saturn’s moon Titan, where there may have been, or may still be, life. Fuel would be produced and stored on these moons. The aim of these would be to enable us to travel deeper into space to places such as the Kuiper belt and the Oort cloud.

The “Red Dragon” capsule is proposed as a potential lander on such missions, using propulsion in combination with other technology rather than parachutes as most Mars missions do. Musk plans to test such a landing on Mars in 2020 with an unmanned mission. But it’s unclear whether it’s doable and the fuel requirements are huge.

Pie in the sky?

There are three hugely important things that Musk misses or dismisses in the paper. Missions such as the ExoMars 2020 rover – and plans to return samples to Earth – will search for signs of life on Mars. And we must await the results before potentially contaminating Mars with humans and their waste. Planetary bodies are covered by “planetary protection” rules to avoid contamination and it’s important for science that all future missions follow them.

Musk inspecting a heat shield at the SpaceX factory.
Steve Jurvetson/Flickr, CC BY

Another problem is that Musk dismisses one of the main technical challenges of being on the Martian surface: the temperature. In just two sentences he concludes:

It is a little cold, but we can warm it up. It has a very helpful atmosphere, which, being primarily CO2 with some nitrogen and argon and a few other trace elements, means that we can grow plants on Mars just by compressing the atmosphere.

In reality, the temperature on Mars drops from about 0°C during the day to nearly -120°C at night. Operating in such low temperatures is already extremely difficult for small landers and rovers. In fact, it is an issue that has been solved with heaters in the design for the 300kg ExoMars 2020 rover – but the amount of power required would likely be a show-stopper for a “self-sustaining city”.

Musk doesn’t give any details for how to warm the planet up or compress the atmosphere – each an enormous engineering challenge. Science fiction writers have previously suggested “terraforming”, possibly involving melting the icecaps. This would not only change the environment forever, but would also be difficult because Mars has no magnetic field to help retain the new atmosphere that such manipulation would create. Mars has been losing its atmosphere gradually for 3.8 billion years – which means it would be hard to keep any such warmed-up atmosphere from escaping into space.

The final major problem is that there is no mention of radiation beyond Earth’s magnetic cocoon. The journey to, and life on, Mars would be vulnerable to potentially fatal cosmic rays from our galaxy and from solar flares – and forecasting of solar flares is in its infancy. With current shielding technology, even a round-trip manned mission to Mars would expose astronauts to up to four times their advised career radiation limits. Radiation could also harm unmanned spacecraft. Work is underway on predicting space weather and developing better shielding, which would mitigate some of these problems – but we are not there yet.


For missions further afield, there are also questions about temperature and radiation in using Europa and Enceladus as filling stations – with no proper engineering studies assessing them. These moons are bathed in the strongest radiation belts in the solar system. What’s more, I’d question whether it is helpful to see these exciting scientific targets, arguably even more likely than Mars to host current life, as “propellant depots”.

The plans for going further with humans, to the Kuiper belt and the Oort cloud, are firmly in the science fiction arena – it is simply too far and we have no infrastructure. In fact, if Musk really wants to create a new home for humans, the moon may be his best bet – it’s closer, after all, which would make it much cheaper.

That said, aiming high usually means we will achieve something – and Musk’s latest plans may help pave the way for later exploration.

Andrew Coates, Professor of Physics, Deputy Director (Solar System) at the Mullard Space Science Laboratory, UCL

This article was originally published on The Conversation. Read the original article.

Could asteroids bombard the Earth to cause a mass extinction in 10 million years?

Sanna Alwmark, Lund University and Matthias Meier, Swiss Federal Institute of Technology Zurich

Scientists have spent decades debating whether asteroids and comets hit the Earth at regular intervals. At the same time, a few studies have found evidence that the large extinction events on Earth – such as the one that wiped out the dinosaurs 66m years ago – repeat themselves every 26m to 30m years. Given that there’s good evidence that an asteroid triggered the dinosaur extinction, it makes sense to ask whether showers of asteroids could be to blame for regular extinction events.

The question is extremely important – if we could prove that this is the case, then we might be able to predict and even prevent asteroids causing mass extinctions in the future. We have tried to find out the answer.

Today, there are approximately 190 impact craters from asteroids and comets on Earth. They range in size from only a few meters to more than 100km across. And they formed anywhere between a few years ago and more than two billion years ago. Only a few, like the famous “Meteor crater” in Arizona, are visible to the untrained eye, but scientists have learned to recognise impact craters even if they are covered by lakes, the ocean or thick layers of sediment.

Meteor crater, Arizona.
Kevin Walsh/wikipedia, CC BY-SA

But have these craters formed as a result of regular asteroid collisions? And if so, why? There have been many suggestions, but most prominently, some scientists have suggested that the sun has a companion star (called “Nemesis”) on a very wide orbit, which approaches the solar system every 26m to 30m years and thereby triggers showers of comets.

Nemesis would be a red/brown dwarf star – a faint type of star – orbiting the sun at a distance of about 1.5 light years. This is not an impossible idea, since the majority of stars actually belong to systems with more than one star. However, despite searching for it for decades, astronomers have failed to observe it, and think they can now exclude its existence.

Difficult dating

Yet, the idea of periodic impacts persists. There are other suggestions. One idea is based on the observation that the sun moves up and down slightly as it orbits the galaxy, crossing the galactic disk every 30m years or so. Some have suggested that this could somehow trigger comet showers.

But is there any evidence that asteroid impacts occur at regular intervals? Most research so far has failed to show this. But that doesn’t mean it isn’t the case – it’s tricky getting the statistics right. There are a lot of variables involved: craters disappear as they age, and some are never found in the first place as they are on the ocean floor. Rocks from some periods are easier to find than from others. And determining the ages of the craters is difficult.

A recent study claimed to have found evidence of periodicity. However, the crater age data it used included many craters with poorly known, or even incorrect and outdated ages. The methods used to determine age – based on radioactive decay or looking at microscopic fossils with known ages – are continuously improved by scientists. Therefore, today, the age of an impact event can be improved significantly from an initial analysis made, say, ten or 20 years ago.

Another problem involves impacts that have near-identical ages with exactly the same uncertainty: known as “clustered ages”. The age of one impact crater may be, for example, 65.5 ± 0.5m years while another is 66.1 ± 0.5m years. In this case, both craters might have the same true age of 65.8m years. Such craters have in some instances been produced by impacts of asteroids accompanied by small moons, or by asteroids that broke up in the Earth’s atmosphere.
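The clustering test amounts to asking whether two dated ages agree within their combined uncertainties, as in the 65.5 ± 0.5 and 66.1 ± 0.5 example. A minimal sketch in Python (the function name is my own):

```python
def ages_overlap(age1, err1, age2, err2):
    """Two crater ages are 'clustered' if their uncertainty intervals
    overlap, i.e. the same true age is consistent with both dates."""
    return abs(age1 - age2) <= err1 + err2
```

With the example above, 65.5 ± 0.5 and 66.1 ± 0.5 overlap (both allow a true age of 65.8m years), but the same pair with ± 0.1 uncertainties would not.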

The Manicouagan crater in Canada seen from the International Space Station/
NASA/Chris Hadfield

The double impact craters they produce can make it look as though there was a time with lots of asteroid impacts, when actually the craters were formed in the same event. In some cases, though, clustered impact craters are spaced too far apart to be explained as double impacts. So how could we explain them? The occasional collision of asteroids in the asteroid belt between Mars and Jupiter might trigger short-lived “showers” of asteroids impacting the Earth. Only a few of these showers are necessary to give the false impression of periodicity.

Fresh approach

In contrast to previous studies, we restricted our statistical analysis to 22 impact craters with very well defined ages from the past 260m years. In fact, these all have age uncertainties of less than 0.8%. We also accounted for impacts with clustered ages.

Our article, recently published in Monthly Notices of the Royal Astronomical Society, shows that, to the best of our current knowledge, asteroid impacts do not happen at regular intervals – they seem to occur randomly.

Of course, we can’t be sure that there isn’t any periodicity. But the good news is that, as more impact craters are dated with robust ages, the statistical analysis we did can be repeated over and over again – if there is such a pattern, it should become visible at some point.

That means that there is presently no way to predict when a large asteroid collision may once again threaten life on Earth. But then when it comes to facing the apocalypse, maybe not knowing is not so bad after all …

Sanna Alwmark, Doctoral Candidate of Lithosphere and Biosphere Science, Lund University and Matthias Meier, Swiss National Science Foundation Ambizione Fellow in Geochemistry, Astrophysics, Swiss Federal Institute of Technology Zurich

This article was originally published on The Conversation. Read the original article.

What is Operations Research (OR)?

This post was originally posted on a University of Nottingham blog.

What is Operations Research (OR)?

The terms Operations Research (American term) and Operational Research (European term) are used interchangeably. The discipline is also referred to as:

  • Management Science (most often used in a Business Management sense)
  • Decision Science (less frequently used, most often when statistics are involved)
  • Analytics (a relatively new term that is increasingly used)

Operations Research has close links with Mathematics and Computer Science. It draws on many areas to solve the various problems that it is presented with. Included in these are

  • Optimization (drawing on mathematical programming and areas such as Linear Programming)
  • Modelling
  • Simulation
  • Heuristics
  • Meta-heuristics
  • Hyper-heuristics
  • Evolutionary Computation
  • Game Theory
  • Statistics


A Traveling Salesman Problem solution for USA (Figure credit: David Applegate, Robert Bixby, Vasek Chvatal and William Cook)

The essence of Operations Research is to provide (ideally) optimal or near-optimal solutions to complex decision problems. Probably the best-known problem (at least in the scientific arena) is the Traveling Salesman Problem (TSP), which can be described as follows:

A salesman has to visit a number of cities. He can choose which one he starts at, but he must complete his tour at the same city. He must visit every other city exactly once. The aim is to minimize the distance traveled.

Whilst very easy to describe, the TSP is very difficult to solve (at least in polynomial time) because the number of possible tours grows factorially with the number of cities: for n cities there are (n-1)!/2 distinct tours (we fix the starting city, and divide by two because a tour in one direction is the same as the tour in the opposite direction).
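A quick sketch makes the combinatorial explosion concrete: counting the distinct tours, and brute-forcing a tiny instance by checking every permutation. The function names are illustrative, and the brute force is only feasible for a handful of cities:

```python
from itertools import permutations
from math import factorial

def num_tours(n):
    """Distinct closed tours on n >= 3 cities: fix the start city, then
    halve to merge each tour with its reversal, giving (n - 1)! / 2."""
    return factorial(n - 1) // 2

def brute_force_tsp(dist):
    """Exhaustively check every tour over cities 1..n-1 starting and
    ending at city 0, returning the shortest length and that tour."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour
```

At 4 cities there are only 3 distinct tours; at 20 cities there are already about 6 × 10^16, which is why exhaustive search is hopeless beyond toy instances.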

Historical Details

Like many things, especially in Computer Science, many of OR’s origins can be traced back to the second world war (necessity being the mother of invention), although some would argue that its roots stretch back beyond this point. Given the subject, you’d expect that many people would have documented its history and, indeed, this is the case. I have provided below some sources which the interested reader might want to follow.

  • [1] Gass S.I. and Assad A.A. An Annotated Timeline of Operations Research: An Informal History, Springer. ISBN-10: 1402081162, ISBN-13: 978-1402081163
  • [2] Historical Origins of Operations Research, last accessed 2nd Mar 2013
  • [3] Gass, S. I., A.A. Assad. History of operations research. J. Geunes, ed. INFORMS TutORials in Operations Research, Vol. 8. INFORMS, Hanover, MD, pp. 1–14

Why is OR so hard?

The type of combinatorial explosion we see in problems such as the TSP often underpins the problems that we face in OR. In fact, problems where it is easy to verify (i.e. in polynomial time) whether a solution is correct, but where finding the optimal solution cannot (we suspect) be done in polynomial time, are often at the heart of the problems we are trying to solve in OR.

These problems are NP-complete (in fact NP-hard in the way we are presenting the TSP, as it is an optimization problem – but we can easily convert it to an NP-complete problem by framing it as a decision problem, e.g. “is there a tour shorter than a given length?”). That is, we can easily verify that a solution is correct (given a TSP solution, we can simply add up the distances to verify the tour length we have been given) but we do not know of a polynomial time algorithm that is guaranteed to return an optimal solution. Indeed, proving P=NP (or not) is one of the Millennium Prize Problems and, if you are able to do it, you will receive a prize of US$1m.
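The easy-to-verify half of that asymmetry can be shown directly: checking a proposed tour against a length bound takes a single pass over the tour. A sketch of such a polynomial-time verifier for the decision version of the TSP (function name is illustrative):

```python
def verify_tsp_solution(dist, tour, bound):
    """Polynomial-time check for the TSP decision problem: does `tour`
    visit every city exactly once, return to its start, and have total
    length at most `bound`?"""
    n = len(dist)
    # A closed tour over n cities lists n + 1 entries, first == last.
    if len(tour) != n + 1 or tour[0] != tour[-1]:
        return False
    # Every city must appear exactly once (excluding the repeated start).
    if sorted(tour[:-1]) != list(range(n)):
        return False
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
    return length <= bound
```

Verification is linear in the number of cities; it is *finding* a tour that beats the bound which, as far as we know, cannot be done in polynomial time.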

There are some common problems that you will often come across in OR. We have already mentioned the TSP.

The Vehicle Routing Problem!

The Vehicle Routing Problem (VRP) is another classic OR problem. As the name suggests, this problem is about scheduling deliveries for vehicles. The classic version is the Capacitated Vehicle Routing Problem (where we minimize total distance traveled, but have to respect vehicle capacities) but there are many variants, such as VRPTW (Vehicle Routing with Time Windows), where deliveries have to be made at certain times. In fact, VRP and TSP are very closely related.

Another classic problem is graph coloring. That is, given a graph with various connections between the nodes, you have to try to color the nodes, using as few colors as possible, such that no two connected nodes have the same color. This problem has an obvious application in coloring maps, but you might be surprised to know that it underpins many (many, many) other problems. As an example, university examination timetabling (i.e. scheduling the exams for our students) can be modeled (and solved) as a graph coloring problem. A huge number of other problems can be modeled as graph coloring problems.
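A standard illustration is the greedy coloring heuristic: fast and simple, though not guaranteed to use the minimum number of colors. A minimal sketch, with the graph given as an adjacency dict (names are my own):

```python
def greedy_coloring(adj):
    """Color vertices in iteration order, giving each vertex the smallest
    color index not already used by a colored neighbor. Fast, but may use
    more colors than the optimum."""
    colors = {}
    for v in adj:
        taken = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors
```

On a triangle this correctly needs three colors; on a path it needs only two. For exam timetabling, nodes would be exams, edges would join exams sharing a student, and colors would be timeslots.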

Second only to the TSP with respect to the number of papers written (and this is debatable – it might be first) is the machine/job shop scheduling problem. In its simplest form, this problem looks at scheduling factories.

Given a number of machines, and a number of processes that have to be gone through to produce a product, what is the best way to utilize the machine(s) to maximize the throughput?

Graph Colouring Problem

Like the graph coloring problem, Job Shop Scheduling (JSP) and Flow Shop Scheduling (FSP) can be used to represent many other problems that are about as far away from the factory floor as you can imagine (how about scheduling the use of space telescopes among competing scientists?).


If we could prove that P=NP (which most people think unlikely) then we would be able to find the optimal solution to many of the important problems in OR. That is, we would have a polynomial time algorithm that would give us an optimal solution in a reasonable time. Of course, it might still take a long time, but this is better than an exponential time algorithm that might take millions of years to return the optimal solution, even on the fastest computers. In fact, for many problems (or many problems of sufficient size), we would have considered only a small fraction of the possible solutions even if we had started the algorithm when the dinosaurs were roaming the earth.

However, there are sophisticated algorithms (such as linear programming) that are increasingly able to solve moderately sized problems to optimality.

When these fail (or we find it difficult to model the problem in sufficient detail to use a mathematical programming approach) we tend to use either heuristics, meta-heuristics, hyper-heuristics or evolutionary computation.

The definition of these is not formal (in that, we could argue where they blur at the edges) but:

  • Heuristics tend to be one pass algorithms and are quite quick.
  • Meta-heuristics are often based on phenomena seen in the real world: things like tabu search (based on memory) and simulated annealing (based on the way we cool metal).
  • Hyper-heuristics are a development of meta-heuristics (although their roots, strangely, can be traced back to before the term meta-heuristics was coined). They are based on the idea of exploring the heuristic space, rather than searching the solution space directly.
  • Evolutionary Computation are algorithms that are based on Darwin’s principles of natural evolution (survival of the fittest) where we have a population of solutions which compete against each other for survival. Common algorithms in this domain include genetic algorithms, genetic programming, honey-bee mating algorithms and particle swam optimisation.
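
To make the meta-heuristic idea concrete, here is a minimal simulated annealing sketch: it sometimes accepts worse solutions, and the chance of doing so shrinks as the "temperature" cools. The toy cost function and parameter values are invented for the example, not taken from any real scheduling problem:

```python
import math
import random

def simulated_annealing(cost, neighbour, start, temp=10.0, cooling=0.95, steps=500):
    """Minimise cost(), occasionally accepting worse moves; the chance of
    doing so falls as the temperature cools, mimicking annealing metal."""
    current = best = start
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as temp falls.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current
        temp *= cooling
    return best

# Toy problem: walk the integers looking for the value closest to 42.
random.seed(1)
answer = simulated_annealing(
    cost=lambda x: abs(x - 42),
    neighbour=lambda x: x + random.choice([-1, 1]),
    start=0,
)
```

The early willingness to go uphill is what lets the search escape local optima that a one-pass heuristic would get stuck in.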


Where do we publish?

If you are looking for journals in which you might want to publish, the Thomson Reuters Web of Knowledge Journal Citation Reports has a specific category for Operations Research & Management Science. For the 2011 journal rankings, this category contained 77 journals. Of course, not all of them will be suitable for a given piece of research, but these 77 journals cover most (if not all) areas of Operations Research.


Want to know more?

There are too many resources to list here, and a search on a bibliographic search engine such as Science Direct is likely to throw up more references than you would imagine.

But YouTube has a good set of videos where you can Learn About OR.

A couple of videos that caught my eye are OR in Sport and OR in Transport.


About the author

Graham Kendall is a Professor of Computer Science who works in the Automated Scheduling, Optimisation and Planning Research Group (ASAP). He is a Fellow of the OR Society, as well as an Associate Editor of the Journal of the Operational Research Society (in addition to several other journals). He has published widely in Operations Research, as well as in other areas. His publications can be seen here.

He has over 30 years’ experience in OR, both in industry and academia.

Graham is currently based at the University of Nottingham’s Malaysia Campus (UNMC), where he is the Vice-Provost of Research and Knowledge Transfer.

Contact details:



[1] Gass, S.I. and Assad, A.A., An Annotated Timeline of Operations Research: An Informal History, Springer. ISBN-10: 1402081162, ISBN-13: 978-1402081163

[2] Historical Origins of Operations Research, last accessed 2nd Mar 2013

[3] Gass, S.I. and Assad, A.A., History of Operations Research. In J. Geunes, ed., INFORMS TutORials in Operations Research, Vol. 8. INFORMS, Hanover, MD, pp. 1–14


How to back up life on Earth ahead of any doomsday event

Jonathan Roberts, Queensland University of Technology

NASA said this month that ten asteroids have been classified as “potentially hazardous” based on their size and their orbits in our Solar system.

NASA has now identified 693 near-Earth objects thanks to the Wide-field Infrared Survey Explorer spacecraft that’s been looking for potential threats to Earth since 2013.

The organisation doesn’t specify what kind of hazard these ten asteroids pose. But Earth has been hit by objects in the past, with devastating effects. Scientists largely agree that it was an asteroid or comet impact that started the chain of events that wiped out the dinosaurs around 66 million years ago.

This animation shows asteroids and comets observed by the Near-Earth Object Wide-field Survey Explorer (NEOWISE) mission.

Every year several previously unseen asteroids whizz past Earth, sometimes with only a few days’ warning. This year two of these asteroids came very close to Earth, with one in May sailing past only 15,000km away. On cosmic scales, that was a very close shave.

But impacts from objects in space are just one of several ways that humanity and most of life on Earth could suddenly disappear.

Extinctions are already happening at an unprecedented rate. In 2014 it was estimated that species are now going extinct 1,000 times faster than before humans were on the Earth, with estimates ranging from 200 to 2,000 extinctions per year.

From all of this very worrying data, it would not be a stretch to say that we are currently within a doomsday scenario. Of course, the “day” is longer than 24 hours; it may instead be of the order of a century or two.

So what can we do about this prospect of impending doom? We can try to avoid some of the likely scenarios: we should act to tackle climate change, and we can develop new asteroid-tracking systems and put in place a means to deflect an asteroid on a collision course with Earth.

But the threats we face are so unpredictable that we need a backup plan. We need to plan for the time after our doomsday, and think about how a post-apocalyptic Earth may recover and how humanity can flourish again.

A backup plan

Some efforts to back up life on our planet have already started. Since the 1970s, scientists around the world have been storing seeds of potentially endangered plants. There are now dozens of seed banks or vaults scattered around the world.

The most famous is the Svalbard Global Seed Vault, located on a remote Norwegian island about 1,300km from the North Pole. The location was deliberately chosen to afford the project safe and secure long-term storage in cold and dry rock vaults.

A risk of thawing at the Svalbard Global Seed Vault.
Flickr/Landbruks og matdepartementet, CC BY-ND

But there were reports earlier this year that the vault had suffered issues with water from the surrounding melting permafrost (caused by global warming) gaining entry to parts of the structure.

Less common are vaults for storing biological material from animals. There are a handful of so-called frozen zoos around the world. They store embryos, eggs, sperm and more recently DNA of endangered animals. So far, sperm, eggs and embryos that have been frozen for roughly 20 years have been shown to be viable.

All of the storage methods that involve freezing have the same problem that the material is at risk of thawing out if the freezing methods fail. Storing frozen biological material for centuries or even millennia on Earth is not realistic.

Humans can now sequence a whole genome of a living organism and the cost has reduced to the point where it costs less than US$1,000 to sequence the human genome. This process effectively turns the information from any organism’s cells into data.

If future scientists can create living DNA from the genome data, and can then create living organisms from that DNA, then having the data alone may be sufficient to back up the Earth’s living organisms.

Where to store the backups?

But where should humanity store the backups? As French president Emmanuel Macron said recently, “there is no plan B because there is no planet B”, echoing 2014 comments from Ban Ki-moon when he was secretary general of the United Nations.

Backing up on Earth seems a high-risk strategy, equivalent to having a computer backup on an external hard drive that sits right next to your computer.

So given that the motivation for backing up Earth’s organisms is the likelihood of Earth itself suffering a catastrophe, it follows that our planet is not the best location for the backups. The partial flooding of the Svalbard Global Seed Vault illustrates that perfectly.

Perhaps the obvious place to locate the backups is in space.

Seeds have already been taken to space for short periods (six months) to test their viability back on Earth. These experiments so far have been motivated by the desire to eventually grow plants in space itself, on space stations, or on Mars.

Space is a harsh environment for biological material, where cells are exposed to potentially very high doses of radiation that will damage DNA. Storage of seeds in low Earth orbit is desirable as Earth’s magnetic field provides some protection from space radiation. Storage outside of this zone and in deep space would require other methods of radiation protection.

The other question is how you would get seeds and other biological material safely back to Earth after a global disaster. Now we get to the robotics that can help, as autonomous re-entry of biological material from orbit is totally feasible.

The tricky part is for our orbiting bio-backup to know when its cargo is required and where to send it. Perhaps we need a limited robot crew – such as David in the recent Alien films – that would wake up the orbiter when it is needed.

‘Hello, I’m David.’

Alternatively, it could be staffed by a rotating crew of wardens similar to the International Space Station. These people could carry out other important scientific work too.

Other locations in space for storing biological material or data include the Moon, the moons of our Solar system’s gas planets, asteroids, or deep space itself on free-flying spacecraft. Such projects have been proposed, and groups around the world have begun planning such ventures.

So it seems that some people have already accepted the fate of humanity version 1.0, and that it will end sometime in the relatively near term. The movement to create our backup, ready for humanity version 2.0, has already begun.

Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

The new space race: why we need a human mission to Mars

Malcolm Walter, UNSW

If we want to know whether there is life beyond Earth then the quickest way to answer that question is to explore Mars. That exploration is currently being done by remote space probes sent from Earth.

The race is on though to send human explorers to Mars and a number of Earth-bound projects are trying to learn what life would be like on the red planet.

But the notion of any one-way human mission to Mars is nonsensical, as is the thought that we should colonise Mars simply because we are making a mess of Earth.

The first suggestion is pointless and unethical – we would be sending astronauts to their certain death – while the second would be a licence for us to continue polluting our home planet.

I believe we should go to Mars because of what we can learn from the red planet, and from developing the technologies to get people there safely.

The SpaceX entrepreneur Elon Musk last September outlined his vision for a mission to send people to Mars by 2022. But first he is planning to send people around the Moon.

I think Musk will send two space tourists around the Moon and back to Earth, not in 2018 as he has predicted, but probably within a decade. He has not yet experimented with having passengers aboard a rocket.

Our journey into space

It’s worth looking at how we got to where we are now in terms of humans in space and space exploration.

More than a billion people watched Apollo 11’s Neil Armstrong take humankind’s first step on another world.

The first footprint on another world was made by US astronaut Neil Armstrong on July 20, 1969 (US time) when he left the Eagle lunar lander and stepped onto the Moon.

One small step…

The Moon is as far as humans have explored in space but we’ve sent probes to explore the other planets in our Solar system, including Mars.

Several failed attempts were made to send a probe to Mars but the US Mariner 4 was the first to successfully photograph another planet from space when it made a flyby of Mars in July 1965.

The red planet Mars.

The USSR’s Mars 2 orbited Mars for three months in 1971 but its lander module crashed onto the planet. The lander of the Mars 3 mission also failed.

NASA’s Viking 1 performed the first successful landing on Mars, on July 20, 1976, followed by Viking 2 on September 3, 1976.

The dunes of Mars as seen by Viking 1.

The Viking missions were the first to search for life on that planet. Since then, others such as the Spirit and Opportunity rovers, which landed days apart in January 2004, have looked to see if Mars could have had life in the past.

No evidence of life has been found so far, but the techniques available now are far more advanced and we know much more about the planet. We do have abundant evidence of water on Mars.

The benefits of space exploration

Apart from looking for life, why bother with a mission to send humans to Mars? Many aspects of our modern lives would not be possible if it were not for our interest in space.

We rely on satellites for communication, timing and positioning. Satellites help to keep us safe from severe weather, especially in Australia.

The Apollo and other NASA missions led to developments in micro-electronics that later made it into household devices such as calculators and home computers.

NASA has detailed many of the spinoffs it says stem from its research for exploration of space, which even include the dustbuster.

The modern household dustbuster has its origins in the Apollo Moon missions.
Shutterstock/Sergey Mironov

Intangible, but critical nonetheless, is the inspiration we derive from space exploration. It can be very significant in attracting young people to science and engineering, something needed more and more as our economies continue to transition to an ever higher-tech future.

In the US there was a large spike in tertiary enrolments in science and engineering during the Apollo missions to the Moon.

A new space race

We are using more and more sophisticated craft to explore Mars. It is a broadly international venture involving NASA, the European Space Agency (22 member nations), the Russian Federal Space Agency, the Indian Space Research Organisation, the China National Space Administration, and the Japan Aerospace Exploration Agency.

We are witnessing not only collaboration but competition. Which nation (or company?) will first return to the Moon and then land astronauts on Mars? It is beginning to look like a new space race.

Why focus on Mars? We already know that early in its history, more than three billion years ago, Mars had a surface environment much like that of Earth at the same time, featuring volcanoes, lakes, hot springs, and perhaps even an ocean in the northern hemisphere.

This animation shows how the surface of Mars might have appeared billions of years ago.

Life on Earth then was microbial, the evidence for which is preserved in 3.5 billion year old rocks in the Pilbara region of Western Australia.

So we are searching for microbes on Mars. Despite being microscopic, bacteria and their cousins the Archaea are complex organisms. Methane already discovered in the atmosphere of Mars hints at the presence of such life but is not definitive.

If there ever was life on Mars it may still be there, underground where it will be protected from cosmic and ultraviolet radiation. From time to time it might emerge on the surface in some of the gullies that seem to result from the breaching of underground aquifers.

It might not seem exciting to discover former or living microbes, but if we can demonstrate that they represent an independent origin of life the consequences will be profound.

We will be able to predict confidently that there will be life all over the universe. Somewhere out there will be intelligent beings. What might happen then currently lies in the realm of science fiction.

The future lies in more missions to Mars. So far all missions have been one-way and robotic, but plans are underway for a mission to return samples from Mars, and sometime this century there will be astronauts on Mars, not in “colonies” but in research bases like those in Antarctica. It is inevitable.

Malcolm Walter, Professor of Astrobiology, UNSW

This article was originally published on The Conversation. Read the original article.

Who feels the pain of science research budget cuts?

Bruce Weinberg, The Ohio State University

Science funding is intended to support the production of new knowledge and ideas that develop new technologies, improve medical treatments and strengthen the economy. The idea goes back to influential engineer Vannevar Bush, who headed the U.S. Office of Scientific Research and Development during World War II. And the evidence is that science funding does have these effects.

But, at a practical level, science funding from all sources supports research projects, the people who work on them and the businesses that provide the equipment, materials and services used to carry them out. Given current proposed cuts to federal science funding – the Trump administration has, for instance, proposed a 20 percent reduction for the National Institutes of Health – it’s important to know what types of people and businesses are touched by sponsored research projects. This information provides a window into the likely effects of funding cuts.

Most existing research into the effects of science funding tries to quantify research artifacts, such as publications and patents, rather than tracking people. I’ve helped to start an emerging project called the UMETRICS initiative which takes a novel approach to thinking about innovation and science. At its core, UMETRICS views people as key to understanding science and innovation – people conduct research, people are the vectors by which ideas move around and, ultimately, people are one of the primary “products” of the research enterprise.

UMETRICS identifies people employed on scientific projects at universities and the purchases made to carry out those projects. It then tracks people to the businesses and universities that hire them, and purchases to the vendors from which they come. Since UMETRICS relies entirely on administrative data provided by member universities (now around 50), the U.S. Census Bureau and other naturally occurring data, there are no reporting errors, sample coverage concerns or burden for people. It covers essentially all federal research funding as well as some funding from private foundations.

Who does research funding support?

Our administrative data allow us to identify everyone employed on research projects, not just those who appear as authors on research articles. This is valuable because we’re able to identify students and staff, who may be less likely to author papers than faculty and postdocs but who turn out to be an important part of the workforce on funded research projects. It’s like taking into account everyone who works in a particular store, not just the manager and owner.

We compared the distribution of people supported on research projects at some of the largest National Science Foundation (NSF) Divisions and National Institutes of Health (NIH) Institutes and Centers. Together, the NSF and NIH support close to 70 percent of federally funded academic R&D.

The striking thing is that the majority of people employed on research projects are somewhere in the training pipeline, whether undergraduates; graduate students, who are particularly prevalent at NSF; or postdocs, who are more prevalent at NIH. Staff frequently constitute 40 percent of the NIH-supported workforce, but faculty are a relatively small portion of the workforce at all NIH Institutes and NSF Divisions.

Based on these results, it seems likely that changes in federal research funding will have substantial effects on trainees, which would naturally have implications for the future STEM workforce.

What happens to STEM doctoral recipients?

Given the importance of trainees in the research workforce, we have focused much of our research on graduate students.

We mapped the universities in our sample and the share of the graduate students in each state one year after graduation. Our data show that many grad students contribute to local economies – 12.7 percent are within 50 miles of the universities where they trained. For six of our eight universities, more people stayed in state than went to any other single state. At the same time, graduate students fan out nationally, with both coasts, Illinois and Texas all being common destinations.

The doctoral recipients in our sample are also more likely to take jobs at establishments that are engines of the knowledge economy. They are heavily overrepresented in industries such as electronics, semiconductors, computers and pharmaceuticals, and underrepresented in industries such as restaurants, grocery stores and hotels. Doctoral degree recipients are almost four times as likely as the average U.S. worker to be employed by an R&D-performing firm (44 percent versus 12.6 percent). And, the establishments where the doctoral degree recipients work have a median payroll of over US$90,000 per worker compared to $33,000 for all U.S. establishments and $61,000 for establishments owned by R&D performing firms.

We also studied initial earnings by field and find that earnings of doctoral degree recipients are highest in engineering; math and computer science; and physics. Among the STEM fields, the lowest earnings are in biology and health, but our data also suggest that many of the people in these fields take postdoc positions that have low earnings, which may improve long-run earnings prospects. Interestingly, we find that women have substantially lower earnings than men, but these differences are entirely accounted for by field of study, marital status and presence of children.

Taken as a whole, our research indicates that the workers trained on research projects play a critical role in the industries and at companies critical for our new, knowledge economy.

What purchases do research projects drive?

Researchers need to buy the equipment they use to do their science.
Michael Pereckas, CC BY-SA

Another way in which sponsored research projects affect the economy in the short run is through purchases of equipment, supplies and services. Economist Paula Stephan writes eloquently of these transactions, which range from purchasing computers and software, to reagents, medical imaging equipment or telescopes, even to lab mice and rats.

Still-unpublished work studying the vendors who sell to sponsored research projects at universities shows that these firms are frequently high-tech and often local. Moreover, firms that are vendors to university research projects are more likely to open new establishments near their campus customers. Thus, there is some evidence that research projects directly stimulate local economies.

So while the goal of sponsored research projects is to develop new knowledge, they also support the training of highly skilled STEM workers and support activity at businesses. The UMETRICS initiative allows us to see just which people and businesses are touched by sponsored research projects, providing a window into the short-run effects of research funding as well as hinting at its long-run value.

Bruce Weinberg, Professor of Economics, The Ohio State University

This article was originally published on The Conversation. Read the original article.

The maths of life and death: our secret weapon in the fight against disease

Christian Yates, University of Bath

Maths is the language of science. It crops up everywhere from physics to engineering and chemistry – aiding us in understanding the origins of the universe and building bridges that won’t collapse in the wind. Perhaps a little more surprisingly, maths is also increasingly integral to biology.

For hundreds of years mathematics has been used, to great effect, to model relatively simple physical systems. Newton’s universal law of gravitation is a fine example. Relatively simple observations led to a rule which, with great accuracy, describes the motion of celestial bodies billions of miles away. Traditionally, biology has been viewed as too complicated to submit to such mathematical treatment.

Biological systems are often classified as “complex”. Complexity in this sense means that, due to the complicated interaction of many sub-components, biological systems can exhibit what we call emergent behaviour – the system as a whole demonstrates properties which the individual components acting alone cannot. This biocomplexity has often been mistaken for vitalism, the misconception that biological processes are dependent on a force or principle distinct from the laws of physics and chemistry. Consequently, it has been assumed that complex biological systems are not amenable to mathematical treatment.

How did the leopard get his spots?
VarnaK /

There were some early dissenters. Famous computer scientist and World War II code-breaker Alan Turing was one of the first to suggest that biological phenomena could be studied and understood mathematically. In 1952 he proposed a pair of beautiful mathematical equations which provide an explanation for how pigmentation patterns might form on animals’ coats.

Examples of different types of Turing patterns.
© Kit Yates
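
The core of Turing's argument can be checked with a few lines of arithmetic: linearise a two-chemical system and ask whether a spatial wave of some wavenumber k grows, even though the well-mixed (k = 0) system is stable. The reaction coefficients below are illustrative numbers chosen for the sketch, not Turing's own:

```python
# Linear stability check for a two-chemical reaction-diffusion system.
# The reaction Jacobian [[a, b], [c, d]] and diffusion rates d1, d2 are
# made-up activator-inhibitor values, with the inhibitor diffusing faster.
def growth_rate(a, b, c, d, d1, d2, k):
    """Largest real part of the eigenvalues at spatial wavenumber k."""
    tr = (a - d1 * k**2) + (d - d2 * k**2)
    det = (a - d1 * k**2) * (d - d2 * k**2) - b * c
    disc = tr**2 - 4 * det
    if disc >= 0:
        return (tr + disc**0.5) / 2
    return tr / 2  # complex eigenvalue pair: the real part is tr / 2

a, b, c, d = 1.0, -1.0, 2.0, -1.5   # activator-inhibitor reaction terms
d1, d2 = 1.0, 10.0                  # the inhibitor diffuses 10x faster

stable_when_mixed = growth_rate(a, b, c, d, d1, d2, k=0.0) < 0
pattern_forms = max(growth_rate(a, b, c, d, d1, d2, k / 10)
                    for k in range(1, 100)) > 0
```

That combination (stable when mixed, unstable at some finite wavelength) is the diffusion-driven instability behind the spot and stripe patterns, and it only appears because the two chemicals diffuse at different rates.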

Not only was his work beautiful, it was also counter-intuitive – the sort of work that only a brilliant mind like Turing’s could ever have dreamed up. Even more of a pity, then, that he was so poorly treated under the draconian anti-homosexuality laws of the time. After a course of “corrective” hormone treatment, he killed himself just two years later.

An emerging field

Since then, the field of mathematical biology has exploded. In recent years, increasingly detailed experimental procedures have led to a huge influx in the biological data available to scientists. This data is being used to generate hypotheses about the complexity of previously abstruse biological systems. In order to test these hypotheses, they must be written down in the form of a model which can be interrogated to determine whether it correctly mimics the biological observations. Mathematics is the natural language in which to do this.

In addition, the advent of, and subsequent increase in, computational ability over the last 60 years has enabled us to suggest and then interrogate complex mathematical models of biological systems. The realisation that biological systems can be treated mathematically, coupled with the computational ability to build and investigate detailed biological models, has led to the dramatic increase in the popularity of mathematical biology.

Maths has become a vital weapon in the scientific armoury we have to tackle some of the most pressing questions in medical, biological and ecological science in the 21st century. By describing biological systems mathematically and then using the resulting models, we can gain insights that are impossible to access through experiments and verbal reasoning alone. Mathematical biology is incredibly important if we want to change biology from a descriptive into a predictive science – giving us power, for example, to avert pandemics or to alter the effects of debilitating diseases.

A new weapon

Over the last 50 years, for example, mathematical biologists have built increasingly complex computational representations of the heart’s physiology. Today, these highly sophisticated models are being used in an attempt to understand better the complicated functioning of the human heart. Computer simulations of heart function allow us to make predictions about how the heart will interact with candidate drugs, designed to improve its function, without having to undertake expensive and potentially risky clinical trials.

We use mathematical biology to study disease as well. On an individual scale, researchers have elucidated the mechanisms by which our immune systems battle viruses through mathematical immunology, and suggested potential interventions for tipping the scales in our favour. On a wider scale, mathematical biologists have proposed mechanisms that can be used to control the spread of deadly epidemics like Ebola, and to ensure the finite resources dedicated to this purpose are employed in the most efficient way possible.
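
The workhorse of epidemic modelling is the SIR model, which tracks the susceptible, infected and recovered fractions of a population. A minimal sketch (the rates below are invented for illustration, not fitted to Ebola or any real outbreak):

```python
# Minimal SIR epidemic model: susceptible -> infected -> recovered,
# stepped forward in time with Euler's method.
def sir(beta, gamma, s0, i0, days, dt=0.1):
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # contacts that transmit
        new_recoveries = gamma * i * dt      # infected people recovering
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# beta/gamma = 3 here: each early case infects about three others, so
# the outbreak burns through most of the population. Interventions work
# by pushing that ratio below 1.
s_end, i_end, r_end = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160)
```

Models like this are what let epidemiologists compare interventions (vaccination, quarantine, contact reduction) before committing scarce resources to one of them.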

Mathematical biology is even being used to inform policy. There has been research done on fisheries for example, using mathematical modelling to set realistic quotas in order to ensure we do not overfish our seas and that we protect some of our most important species.

A day’s haul.

The increased comprehension gleaned by taking a mathematical approach can lead to better understanding of biology at a range of different scales. At the Centre for Mathematical Biology in Bath, for example, we study a number of pressing biological problems. At one end of the spectrum, we try to develop strategies for averting the devastating effects of locust plagues comprising up to a billion individuals. At the other end, we try to elucidate the mechanisms that give rise to the correct development of the embryo.

Although mathematical biology has traditionally been the domain of applied mathematicians, it is clear that mathematicians who self-classify as pure have a role to play in the mathematical biology revolution. The pure discipline of topology is being used to understand the knotty problem of DNA packing and algebraic geometry is being used to select the most appropriate model of biochemical interaction networks.

As the profile of mathematical biology continues to rise, emerging and established scientists from disciplines across the scientific spectrum will be drawn to tackle the rich range of important and novel problems that biology has to offer.

Turing’s revolutionary idea, although not fully appreciated in his time, demonstrated that there was no need to appeal to vitalism – the god in the machine – to understand biological processes. Chemical and physical laws encoded in mathematics, or “mathematical biology” as we now call it, could do just fine.

Christian Yates, Senior Lecturer in Mathematical Biology, University of Bath

This article was originally published on The Conversation. Read the original article.

The economics of self-service checkouts

Gary Mortimer, Queensland University of Technology and Paula Dootson, Queensland University of Technology

Self-checkouts in supermarkets are increasing as businesses battle to reduce costs and increase service efficiency. But looking at the numbers, it isn’t clear that self-service is an easy win for businesses.

Self-checkouts aren’t necessarily faster than other checkouts, don’t result in lower staff numbers, and there are indirect costs such as theft, reduced customer satisfaction and loyalty.

Worldwide, self-checkout terminals are projected to rise from 191,000 in 2013 to 325,000 by 2019. A survey of multiple countries found 90% of respondents had used self-checkouts, with Australia and Italy leading the way.

Employment in the Australian supermarket and grocery industry went down for the first time in 2015-16 and is projected to remain flat for a few years. But staff numbers are projected to rebound again, in part due to the need to curtail growing theft in self-checkouts.

Social trends pushing self-checkout

There are a couple of intertwining trends that explain the rise of self checkouts.

We now visit our supermarkets more frequently than ever before – two to three times per week, in fact. This means our baskets contain fewer items, and being able to wander up to a self-checkout with little to no wait time has been an expedient way to shop. Most shoppers consider self-checkouts both fast and easy to use, although this varies with age: 90% of shoppers aged 18-39 found self-service checkouts easy to use, while only 50% of those over 60 said the same.

Shoppers also gain value from taking control of the transaction – being able to ring up their own goods and pack them the way they want. This is because a sense of control over their own shopping can lead to greater customer satisfaction and intent to use and reuse self-serve technology.

The numbers behind self-checkouts

Wages represent around 9.5% of supermarket revenue in Australia, and reducing wages is one of the reasons proposed for the uptake of self-checkout.

But from a business perspective, moving from “staffed” checkouts to self-serve machines isn’t cheap. A typical setup costs around US$125,000. On top of that there are the costs of integrating the machines into the technology already in place – the software and other systems used to track inventory and sales, and the ongoing costs – to cover breakdowns and maintenance.

But the biggest direct cost to retailers of adopting self-service checkouts is theft. Retail crime in Australia costs the industry over A$4.5 billion each year.

There is reason to believe that rates of theft are higher at self-service machines than at regular checkouts. A study of 1 million transactions in the United Kingdom found that losses incurred through self-service technology payment systems totalled 3.97% of stock, compared with just 1.47% otherwise. Research shows that one of the drivers of this discrepancy is that everyday customers – those who would not normally steal by any other means – disproportionately steal at self-checkouts.
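To put those loss rates in perspective, here is a back-of-envelope sketch (the A$1 million stock figure is a hypothetical illustration, not from the study) of the extra shrinkage the UK numbers imply:

```python
# Loss rates from the UK study of 1 million transactions.
SELF_SERVICE_LOSS = 0.0397  # 3.97% of stock lost via self-service payments
REGULAR_LOSS = 0.0147       # 1.47% of stock lost otherwise

def extra_shrinkage(stock_value):
    """Additional loss attributable to self-service, for a given stock value."""
    return stock_value * (SELF_SERVICE_LOSS - REGULAR_LOSS)

# Hypothetical store moving A$1 million of stock through self-checkouts:
print(f"A${extra_shrinkage(1_000_000):,.0f}")  # roughly A$25,000 extra loss
```

On these figures, self-service more than doubles shrinkage – an extra 2.5 cents lost on every dollar of stock sold through the machines.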

Studies also show that having a human presence around – in this case, employees in the self-checkout area – increases the perceived risk of being caught, which reduces “consumer deviance”. This is why retailers have been adding staff to monitor customers, absorbing the additional losses, or passing them on to customers as an “honesty tax”.

Making self-checkouts work

As you can see in this graph, preliminary work by researchers Kate Letheren and Paula Dootson suggests people are less likely to steal from a human employee than from an inanimate object – not only because they fear getting caught, but because they feel bad about it.

On the other hand, consumers have plenty of justifications to excuse self-checkout theft, which is leading to its normalisation.

To combat this, researcher Paula Dootson is trying to use design to discourage deviance. One approach is extreme personalisation of service, to reduce customer anonymity – an undesirable by-product of removing employees and replacing them with technology.

Other ideas include presenting moral reminders before the opportunity to lie or steal arises (such as simply reminding people to be honest), and humanising the machines by encoding human characteristics that trigger empathy.

While self-service technologies will continue to be adopted by businesses broadly, and particularly within the retail sector, it will be important for retailers to take a holistic approach to implementation and loss prevention.

Self-service technology reduces front-line staffing costs and increases efficiency by redistributing displaced staff into other service-dominant areas of the business, but it creates unintended costs. These business costs can be direct, in the form of theft, but also indirect, such as reduced customer satisfaction and loyalty – something some supermarkets are focusing on today.

Gary Mortimer, Associate Professor, Queensland University of Technology and Paula Dootson, Research Fellow; PwC Chair in Digital Economy, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.