
Could asteroids bombard the Earth to cause a mass extinction in 10 million years?

Sanna Alwmark, Lund University and Matthias Meier, Swiss Federal Institute of Technology Zurich

Scientists have spent decades debating whether asteroids and comets hit the Earth at regular intervals. At the same time, a few studies have found evidence that the large extinction events on Earth – such as the one that wiped out the dinosaurs 66m years ago – repeat themselves every 26m to 30m years. Given that there’s good evidence that an asteroid triggered the dinosaur extinction, it makes sense to ask whether showers of asteroids could be to blame for regular extinction events.

The question is extremely important – if we could prove that this is the case, then we might be able to predict and even prevent asteroids causing mass extinctions in the future. We have tried to find out the answer.

Today, there are approximately 190 known impact craters from asteroids and comets on Earth. They range in size from only a few metres to more than 100km across. And they formed anywhere between a few years ago and more than two billion years ago. Only a few, like the famous “Meteor Crater” in Arizona, are visible to the untrained eye, but scientists have learned to recognise impact craters even if they are covered by lakes, the ocean or thick layers of sediment.

Meteor crater, Arizona.
Kevin Walsh/wikipedia, CC BY-SA

But have these craters formed as a result of regular asteroid collisions? And if so, why? There have been many suggestions, but most prominently, some scientists have suggested that the sun has a companion star (called “Nemesis”) on a very wide orbit, which approaches the solar system every 26m to 30m years and thereby triggers showers of comets.

Nemesis would be a red/brown dwarf star – a faint type of star – orbiting the sun at a distance of about 1.5 light years. This is not an impossible idea, since the majority of stars actually belong to systems with more than one star. However, despite searching for it for decades, astronomers have failed to observe it, and think they can now exclude its existence.

Difficult dating

Yet, the idea of periodic impacts persists. There are other suggestions. One idea is based on the observation that the sun moves up and down slightly as it orbits the galaxy, crossing the galactic disk every 30m years or so. Some have suggested that this could somehow trigger comet showers.

But is there any evidence that asteroid impacts occur at regular intervals? Most research so far has failed to show this. But that doesn’t mean it isn’t the case – it’s tricky getting the statistics right. There are a lot of variables involved: craters disappear as they age, and some are never found in the first place as they are on the ocean floor. Rocks from some periods are easier to find than from others. And determining the ages of the craters is difficult.

A recent study claimed to have found evidence of periodicity. However, the crater age data it used included many craters with poorly known, or even incorrect and outdated, ages. The methods used to determine ages – based on radioactive decay or on microscopic fossils with known ages – are continuously being improved by scientists. Today, therefore, the age estimate for an impact event can often be improved significantly over an initial analysis made, say, ten or 20 years ago.

Another problem involves impacts that have near-identical ages with overlapping uncertainties: known as “clustered ages”. The age of one impact crater may be, for example, 65.5 ± 0.5m years while another is 66.1 ± 0.5m years. In this case, both craters might have the same true age of 65.8m years. Such craters have in some instances been produced by impacts of asteroids accompanied by small moons, or by asteroids that broke up in the Earth’s atmosphere.

The Manicouagan crater in Canada seen from the International Space Station.
NASA/Chris Hadfield

The double impact craters they produce can make it look as if there was a time with lots of separate asteroid impacts, when actually the craters formed in the same event. In some cases, though, clustered impact craters are spaced too far apart to be explained as double impacts. So how could we explain them? The occasional collision of asteroids in the asteroid belt between Mars and Jupiter might trigger short-lived “showers” of asteroids impacting the Earth. Only a few such showers are needed to create the false impression of periodicity.

Fresh approach

In contrast to previous studies, we restricted our statistical analysis to 22 impact craters with very well defined ages from the past 260m years. In fact, these all have age uncertainties of less than 0.8%. We also accounted for impacts with clustered ages.

Our article, recently published in Monthly Notices of the Royal Astronomical Society, shows that, to the best of our current knowledge, asteroid impacts do not happen at regular intervals – they seem to occur randomly.

Of course, we can’t be sure that there isn’t any periodicity. But the good news is that, as more impact craters are dated with robust ages, the statistical analysis we did can be repeated over and over again – if there is such a pattern, it should become visible at some point.
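For readers who want a feel for how such a test can work, here is a toy sketch (the crater ages below are invented for illustration, and this is not our published method, which is more sophisticated). It scores how strongly a set of ages lines up with a candidate period using a circular, Rayleigh-style statistic, then asks how often purely random ages would score as highly:

```python
import numpy as np

rng = np.random.default_rng(42)

def periodicity_score(ages, periods):
    # Rayleigh-style statistic: map each age onto a phase for each candidate
    # period; perfectly periodic ages score 1, random ages score near 0.
    phases = 2 * np.pi * np.outer(1.0 / periods, ages)  # one row per period
    return np.abs(np.exp(1j * phases).mean(axis=1)).max()

periods = np.linspace(20, 40, 201)    # candidate periods, 20-40m years
ages = rng.uniform(0, 260, size=22)   # invented stand-in for crater ages

observed = periodicity_score(ages, periods)
null = [periodicity_score(rng.uniform(0, 260, size=22), periods)
        for _ in range(2000)]
p_value = np.mean([score >= observed for score in null])
print(f"score = {observed:.3f}, p = {p_value:.3f}")  # large p: looks random
```

With real, well-dated ages in place of the invented ones, a persistently large p-value is what “no periodicity” looks like.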

That means that there is presently no way to predict when a large asteroid collision may once again threaten life on Earth. But then when it comes to facing the apocalypse, maybe not knowing is not so bad after all …

Sanna Alwmark, Doctoral Candidate of Lithosphere and Biosphere Science, Lund University and Matthias Meier, Swiss National Science Foundation Ambizione Fellow in Geochemistry, Astrophysics, Swiss Federal Institute of Technology Zurich

This article was originally published on The Conversation. Read the original article.

What is Operations Research (OR)?

This post was originally posted on a University of Nottingham blog.



The terms Operations Research (American term) and Operational Research (European term) are used interchangeably. The discipline is also referred to as:

  • Management Science (most often used in a Business Management sense)
  • Decision Science (less frequently used, most often when statistics are involved)
  • Analytics (a relatively new term that is increasingly used)

Operations Research has close links with Mathematics and Computer Science. It draws on many areas to solve the various problems it is presented with, including:

  • Optimization (drawing on mathematical programming and areas such as Linear Programming)
  • Modelling
  • Simulation
  • Heuristics
  • Meta-heuristics
  • Hyper-heuristics
  • Evolutionary Computation
  • Game Theory
  • Statistics


A Traveling Salesman Problem solution for USA (Figure credit: David Applegate, Robert Bixby, Vasek Chvatal and William Cook)

The essence of Operations Research is to provide (ideally) optimal, or near-optimal, solutions to complex decision problems. Probably the best-known problem (at least in the scientific arena) is the Traveling Salesman Problem (TSP), which can be described as follows:

A salesman has to visit a number of cities. He can choose which one he starts at, but he must complete his tour at the same city. He must visit every other city exactly once. The aim is to minimize the distance traveled.

Whilst very easy to describe, the TSP is very difficult to solve (at least in polynomial time) because the number of possible tours grows factorially with the number of cities: for n cities there are (n−1)!/2 distinct tours (we divide n! by n because each tour can start at any of its cities, and by two because a tour in one direction is the same as the tour in the opposite direction).
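To get a feel for that growth, here is a short illustrative sketch (Python; not from the original post) that counts distinct tours and brute-forces a tiny instance. Beyond roughly 15 cities, enumeration is already hopeless:

```python
import itertools
import math
import random

def num_tours(n):
    # Distinct closed tours on n cities: (n - 1)!/2
    return math.factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    print(f"{n} cities: {num_tours(n):,} distinct tours")

# Brute force is only feasible for very small instances.
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(8)]

def tour_length(order):
    return sum(math.dist(pts[a], pts[b])
               for a, b in zip(order, order[1:] + order[:1]))

# Fix city 0 as the start and try every ordering of the rest.
best = min(itertools.permutations(range(1, len(pts))),
           key=lambda rest: tour_length((0,) + rest))
print("best tour:", (0,) + best, "length:", round(tour_length((0,) + best), 3))
```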

Historical Details

Like many things, especially in Computer Science, OR’s origins can largely be traced back to the Second World War – necessity being the mother of invention – although some would argue that its roots go back beyond this point. Given the subject, you’d expect that many people would have documented the history of the discipline and, indeed, this is the case. I have provided below some sources which the interested reader might want to follow.

  • [1] Gass S.I. and Assad A.A. An Annotated Timeline of Operations Research: An Informal History, Springer. ISBN-10: 1402081162, ISBN-13: 978-1402081163
  • [2] Historical Origins of Operations Research, http://en.wikipedia.org/wiki/Operations_research#Historical_origins, last accessed 2nd Mar 2013
  • [3] Gass, S. I., A.A. Assad. History of operations research. J. Geunes, ed. INFORMS TutORials in Operations Research, Vol. 8. INFORMS, Hanover, MD, pp. 1–14

Why is OR so hard?

The type of combinatorial explosion we see in problems such as the TSP often underpins the problems that we face in OR. In fact, problems where it is easy to verify (i.e. in polynomial time) that a solution is correct, but where finding the optimal solution cannot (we suspect) be done in polynomial time, are often at the heart of what we are trying to solve in OR.

These problems are NP-Complete (in fact NP-Hard in the way we are presenting the TSP, as it is an optimization problem – but we can easily convert it to an NP-Complete problem by framing it as a decision problem, e.g. “is there a tour of length less than k?”). That is, we can easily verify that a solution is correct (given a TSP solution, we can simply add up the distances to check it) but we do not know of a polynomial time algorithm that is guaranteed to return an optimal solution. Indeed, proving P=NP (or not) is one of the Millennium Prize Problems and if you are able to do it, you will receive a prize of US$1m.
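The asymmetry is easy to see in code. Checking a claimed solution is a single linear pass (a sketch, with hypothetical names):

```python
import math

def verify_tour(points, order, claimed_length, tol=1e-9):
    # Polynomial-time verification: is `order` a valid tour visiting
    # every city exactly once, with the length that was claimed?
    n = len(points)
    if sorted(order) != list(range(n)):
        return False
    length = sum(math.dist(points[order[i]], points[order[(i + 1) % n]])
                 for i in range(n))
    return abs(length - claimed_length) <= tol
```

Finding the best tour, by contrast, has no known polynomial-time algorithm – that gap is exactly what the P versus NP question asks about.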

There are some common problems that you will often come across in OR. We have already mentioned the TSP.

The Vehicle Routing Problem!

The Vehicle Routing Problem (VRP) is another classic OR problem. As the name suggests, this problem is about scheduling deliveries for vehicles. The classic version is the Capacitated Vehicle Routing Problem (where we minimize total distance traveled, but have to respect vehicle capacities) but there are many variants, such as VRPTW (Vehicle Routing with Time Windows), where deliveries have to be made at certain times. In fact, VRP and TSP are very closely related.

Another classic problem is graph coloring. That is, given a graph with various connections between the nodes, you have to try to color the nodes, using as few colors as possible, such that no two connected nodes have the same color. This problem has an obvious application in coloring maps, but you might be surprised to know that it underpins many (many, many) other problems. As an example, university examination timetabling (i.e. scheduling the exams for our students) can be modeled (and solved) as a graph coloring problem – and a vast number of other problems can be modeled the same way.
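As a flavour of how simple a coloring heuristic can be, here is a minimal greedy sketch (illustrative only – it always produces a valid coloring, but not necessarily one with the fewest colors):

```python
def greedy_coloring(adjacency):
    # Give each node the smallest color not already used by a neighbor.
    # `adjacency` maps each node to an iterable of its neighbors.
    colors = {}
    for node in adjacency:              # visiting order affects quality
        used = {colors[nbr] for nbr in adjacency[node] if nbr in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

# Exams as nodes; an edge means two exams share a student,
# so colors correspond to timeslots.
conflicts = {"maths": ["physics", "chemistry"],
             "physics": ["maths"],
             "chemistry": ["maths"],
             "history": []}
print(greedy_coloring(conflicts))
```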

Second to the TSP with respect to the number of papers written (and this is debatable – it might be first) is the machine/job shop scheduling problem. This problem, in its simplest form, looks at scheduling work in factories.

Given a number of machines, and a number of processes that have to be gone through to produce a product, what is the best way to utilize the machine(s) to maximize the throughput?

Graph Colouring Problem

Like the graph coloring problem, Job Shop Scheduling (JSP) and Flow Shop Scheduling (FSP) can be used to represent many other problems that are about as far away from the factory floor as you can imagine (how about having telescopes in space and trying to schedule their usage for various scientists?).

Methodologies

If we could prove that P=NP (which most people think unlikely) then we would be able to find the optimal solution to many of the important problems in OR. That is, we would have a polynomial time algorithm that would give us an optimal solution in a reasonable time. Of course, it might still take a long time, but this is better than an exponential time algorithm that might take millions of years to return the optimal solution, even on the fastest computers. In fact, there are many problems (or many problems of sufficient size) where we would have considered only a small fraction of the possible solutions even if we had started the algorithm when the dinosaurs were roaming the earth.

However, there are sophisticated exact algorithms (such as those based on mathematical programming) that are increasingly able to solve moderately sized problems to optimality.

When these fail (or we find it difficult to model the problem in sufficient detail to use a mathematical programming approach) we tend to use either heuristics, meta-heuristics, hyper-heuristics or evolutionary computation.

The definitions of these are not formal (we could argue about where they blur at the edges), but:

  • Heuristics tend to be one-pass algorithms and are quite quick.
  • Meta-heuristics are based on phenomena seen in the real world – things like tabu search (based on memory) and simulated annealing (based on the way we cool metal; see the sketch after this list).
  • Hyper-heuristics are a development of meta-heuristics (although their roots, strangely, can be traced back to before the term meta-heuristics was coined). They are based on the idea of exploring the heuristic space, rather than searching the solution space directly.
  • Evolutionary Computation covers algorithms based on Darwin’s principles of natural evolution (survival of the fittest), where a population of solutions competes for survival. Common algorithms in this domain include genetic algorithms, genetic programming, honey-bee mating algorithms and particle swarm optimisation.
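To make the meta-heuristic idea concrete, below is a minimal simulated annealing sketch for the TSP (Python; the parameter values are arbitrary choices for illustration, not recommendations):

```python
import math
import random

def tour_length(points, order):
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def simulated_annealing(points, temp=1.0, cooling=0.9995, steps=100_000):
    order = list(range(len(points)))
    random.shuffle(order)
    cur_len = tour_length(points, order)
    best, best_len = order[:], cur_len
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(order)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # reverse a segment
        cand_len = tour_length(points, cand)
        # Always accept improvements; accept worse tours with a probability
        # that shrinks as the "temperature" cools -- the annealing analogy.
        if cand_len < cur_len or random.random() < math.exp((cur_len - cand_len) / temp):
            order, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = order[:], cur_len
        temp *= cooling
    return best, best_len

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
print(round(simulated_annealing(pts)[1], 3))
```

The key design choice is occasionally accepting a worse tour, which lets the search escape local optima that a pure improvement heuristic would get stuck in.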


Where do we publish?

If you are looking for journals in which you might want to publish, then Thomson Reuters’ Web of Knowledge Journal Citation Reports has a specific category for Operations Research & Management Science. For the 2011 journal rankings, this category contained 77 journals. Of course, not all of them will be suitable for a given piece of research, but these 77 journals cover most (if not all) areas of Operations Research.


Want to know more?

There are too many resources to list here, and a search on a bibliographic search engine such as Science Direct is likely to throw up more references than you would imagine.

But YouTube has a good set of videos where you can Learn About OR.

A couple of videos that caught my eye are OR in Sport and OR in Transport.


About the author

Graham Kendall is a Professor of Computer Science who works in the Automated Scheduling, Optimisation and Planning Research Group (ASAP). He is a Fellow of the OR Society, as well as an Associate Editor of the Journal of the Operational Research Society (in addition to several other journals). He has published widely in Operations Research, as well as other areas. His publications can be seen here.

He has over 30 years experience in OR, both in industry and academia.

Graham is currently based at the University of Nottingham’s Malaysia Campus (UNMC), where he is the Vice-Provost of Research and Knowledge Transfer.



How to backup life on Earth ahead of any doomsday event

Jonathan Roberts, Queensland University of Technology

NASA said this month that ten asteroids have been classified as “potentially hazardous”, based on their size and their orbits in our Solar system.

NASA has now identified 693 near-Earth objects thanks to the Wide-field Infrared Survey Explorer spacecraft that’s been looking for potential threats to Earth since 2013.

The organisation doesn’t specify what kind of hazard these ten asteroids pose. But Earth has been hit by objects in the past, with devastating effects. Scientists largely agree that it was an asteroid or comet impact that started the chain of events that wiped out the dinosaurs around 66 million years ago.

This animation shows asteroids and comets observed by the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) mission.

Every year several previously unseen asteroids whizz past Earth, sometimes with only a few days’ warning. This year two of these asteroids came very close to Earth, with one in May sailing past only 15,000km away. On cosmic scales, that was a very close shave.

But impacts from objects in space are just one of several ways that humanity and most of life on Earth could suddenly disappear.

We are already observing extinctions happening at an unprecedented rate. In 2014 it was estimated that the extinction rate is now 1,000 times greater than before humans were on the Earth. The estimated number of extinctions ranges from 200 to 2,000 species per year.

From all of this very worrying data, it would not be a stretch to say that we are currently within a doomsday scenario. Of course, the “day” is longer than 24 hours – it may instead be of the order of a century or two.

So what can we do about this potential prospect of impending doom? We can try to avoid some of the likely scenarios. We should act to tackle climate change and we can develop new asteroid-tracking systems and put in place a means to deflect an asteroid on a collision course with Earth.

But the threats we face are so unpredictable that we need to have a backup plan. We need to plan for the time after our doomsday and think about how a post-apocalyptic Earth may recover and humanity will flourish again.

A backup plan

Some efforts to back up life on our planet have already started. Since the 1970s, scientists around the world have been storing seeds of potentially endangered plants. There are now dozens of seed banks or vaults scattered around the world.

The most famous is the Svalbard Global Seed Vault, located on a remote Norwegian island about 1,300km from the North Pole. The location was deliberately chosen to afford the project safe and secure long-term storage in cold and dry rock vaults.

A risk of thawing at the Svalbard Global Seed Vault.
Flickr/Landbruks og matdepartementet, CC BY-ND

But there were reports earlier this year that the vault had suffered issues with water from the surrounding melting permafrost (caused by global warming) gaining entry to parts of the structure.

Less common are vaults for storing biological material from animals. There are a handful of so-called frozen zoos around the world. They store embryos, eggs, sperm and more recently DNA of endangered animals. So far, sperm, eggs and embryos that have been frozen for roughly 20 years have been shown to be viable.

All of the storage methods that involve freezing share the same problem: the material is at risk of thawing out if the freezing systems fail. Storing frozen biological material for centuries or even millennia on Earth is not realistic.

Humans can now sequence the whole genome of a living organism, and the cost has fallen to the point where sequencing a human genome costs less than US$1,000. This process effectively turns the information from any organism’s cells into data.

If future scientists can create living DNA from the genome data and can then create living organisms from that DNA, then having the data alone may be sufficient to backup the Earth’s living organisms.

Where to store the backups?

But where should humanity store the backups? As French president Emmanuel Macron said recently, “there is no plan B because there is no planet B”, echoing 2014 comments from Ban Ki-moon when he was secretary general of the United Nations.

Backing up on Earth seems a high-risk strategy, equivalent to having a computer backup on an external hard drive that sits right next to your computer.

So given that the motivation for backing up Earth’s organisms is the likelihood of Earth itself suffering a catastrophe, it follows that our planet is not the best location for the backups. The partial flooding of the Svalbard Global Seed Vault illustrates that perfectly.

Perhaps the obvious place to locate the backups is in space.

Seeds have already been taken to space for short periods (six months) to test their viability back on Earth. These experiments so far have been motivated by the desire to eventually grow plants in space itself, on space stations, or on Mars.

Space is a harsh environment for biological material, where cells are exposed to potentially very high doses of radiation that will damage DNA. Storage of seeds in low Earth orbit is desirable as Earth’s magnetic field provides some protection from space radiation. Storage outside of this zone and in deep space would require other methods of radiation protection.

The other question is how you would get seeds and other biological material safely back to Earth after a global disaster. This is where robotics can help, as autonomous re-entry of biological material from orbit is entirely feasible.

The tricky part is for our orbiting bio-backup to know when its cargo is required and where to send it. Perhaps we need a limited robot crew – such as David in the recent Alien films – that would wake up the orbiter when it is needed.

‘Hello, I’m David.’

Alternatively, it could be staffed by a rotating crew of wardens similar to the International Space Station. These people could carry out other important scientific work too.

Other locations in space for storing biological material or data include the Moon, the moons of our solar system’s gas planets, asteroids, or deep space itself on free-flying spacecraft. Such projects have been proposed and groups around the world have begun planning such ventures.

So it seems that some people have already accepted the fate of humanity version 1.0, and that it will end sometime in the relatively near term. The movement to create our backup, ready for humanity version 2.0, has already begun.

Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

The new space race: why we need a human mission to Mars

Malcolm Walter, UNSW

If we want to know whether there is life beyond Earth then the quickest way to answer that question is to explore Mars. That exploration is currently being done by remote space probes sent from Earth.

The race is on though to send human explorers to Mars and a number of Earth-bound projects are trying to learn what life would be like on the red planet.

But the notion of any one-way human mission to Mars is nonsensical, as is the thought that we should colonise Mars simply because we are making a mess of Earth.

The first suggestion is pointless and unethical – we would be sending astronauts to their certain death – while the second would be a licence for us to continue polluting our home planet.

I believe we should go to Mars because of what we can learn from the red planet, and from developing the technologies to get people there safely.

The SpaceX entrepreneur Elon Musk last September outlined his vision for a mission to send people to Mars by 2022. But first he is planning to send people around the Moon.

I think Musk will send two space tourists around the Moon and back to Earth, not in 2018 as he has predicted, but probably within a decade. He has not yet experimented with having passengers aboard a rocket.

Our journey into space

It’s worth looking at how we got to where we are now in terms of humans in space and space exploration.

More than a billion people watched Apollo 11’s Neil Armstrong take humankind’s first step on another world.
NASA

The first footprint on another world was made by US astronaut Neil Armstrong on July 20, 1969 (US time) when he left the Eagle lunar lander and stepped onto the Moon.

One small step…

The Moon is as far as humans have explored in space but we’ve sent probes to explore the other planets in our Solar system, including Mars.

Several failed attempts were made to send a probe to Mars but the US Mariner 4 was the first to successfully photograph another planet from space when it made a flyby of Mars in July 1965.

The red planet Mars.
NASA

The USSR’s Mars 2 orbited Mars for three months in 1971 but its lander module crashed onto the planet. The lander of the Mars 3 mission also failed.

NASA’s Viking 1 performed the first successful landing on Mars, on July 20, 1976, followed by Viking 2 on September 3, 1976.

The dunes of Mars as seen by Viking 1.
NASA/JPL

The Viking missions were the first to search for life on that planet. Since then, others such as the Spirit and Opportunity rovers, which landed weeks apart in January 2004, have looked to see whether Mars could have had life in the past.

No evidence of life has been found so far, but the techniques available now are far more advanced and we know much more about the planet. We do have abundant evidence of water on Mars.

The benefits of space exploration

Apart from looking for life, why bother with a mission to send humans to Mars? Many aspects of our modern lives would not be possible if it were not for our interest in space.

We rely on satellites for communication, timing and positioning. Satellites help to keep us safe from severe weather, especially in Australia.

The Apollo and other NASA missions led to developments in micro-electronics that later made it into household devices such as calculators and home computers.

NASA has detailed many of the spinoffs it says stem from its research for exploration of space, which even include the dustbuster.

The modern household dustbuster has its origins in the Apollo Moon missions.
Shutterstock/Sergey Mironov

Intangible, but critical nonetheless, is the inspiration we derive from space exploration. It can be very significant in attracting young people to science and engineering, something needed more and more as our economies continue to transition to an ever higher-tech future.

In the US there was a large spike in tertiary enrolments in science and engineering during the Apollo missions to the Moon.

A new space race

We are using more and more sophisticated craft to explore Mars. It is a broadly international venture involving NASA, the European Space Agency (22 member nations), the Russian Federal Space Agency, the Indian Space Research Organisation, the China National Space Administration, and the Japan Aerospace Exploration Agency.

We are witnessing not only collaboration but competition. Which nation (or company?) will first return to the Moon and then land astronauts on Mars? It is beginning to look like a new space race.

Why focus on Mars? We already know that early in its history, more than three billion years ago, Mars had a surface environment much like that of Earth at the same time, featuring volcanoes, lakes, hot springs, and perhaps even an ocean in the northern hemisphere.

This animation shows how the surface of Mars might have appeared billions of years ago.

Life on Earth then was microbial, the evidence for which is preserved in 3.5 billion year old rocks in the Pilbara region of Western Australia.

So we are searching for microbes on Mars. Despite being microscopic, bacteria and their cousins the Archaea are complex organisms. Methane already discovered in the atmosphere of Mars hints at the presence of such life but is not definitive.

If there ever was life on Mars it may still be there, underground where it will be protected from cosmic and ultraviolet radiation. From time to time it might emerge on the surface in some of the gullies that seem to result from the breaching of underground aquifers.

It might not seem exciting to discover former or living microbes, but if we can demonstrate that they represent an independent origin of life the consequences will be profound.

We will be able to predict confidently that there will be life all over the universe. Somewhere out there will be intelligent beings. What might happen then currently lies in the realm of science fiction.

The future lies in more missions to Mars. So far all missions have been one-way and robotic, but plans are underway for a mission to return samples from Mars, and sometime this century there will be astronauts on Mars, not in “colonies” but in research bases like those in Antarctica. It is inevitable.

Malcolm Walter, Professor of Astrobiology, UNSW

This article was originally published on The Conversation. Read the original article.

Who feels the pain of science research budget cuts?

Bruce Weinberg, The Ohio State University

Science funding is intended to support the production of new knowledge and ideas that develop new technologies, improve medical treatments and strengthen the economy. The idea goes back to influential engineer Vannevar Bush, who headed the U.S. Office of Scientific Research and Development during World War II. And the evidence is that science funding does have these effects.

But, at a practical level, science funding from all sources supports research projects, the people who work on them and the businesses that provide the equipment, materials and services used to carry them out. Given current proposed cuts to federal science funding – the Trump administration has, for instance, proposed a 20 percent reduction for the National Institutes of Health – it’s important to know what types of people and businesses are touched by sponsored research projects. This information provides a window into the likely effects of funding cuts.

Most existing research into the effects of science funding tries to quantify research artifacts, such as publications and patents, rather than tracking people. I’ve helped to start an emerging project called the UMETRICS initiative which takes a novel approach to thinking about innovation and science. At its core, UMETRICS views people as key to understanding science and innovation – people conduct research, people are the vectors by which ideas move around and, ultimately, people are one of the primary “products” of the research enterprise.

UMETRICS identifies people employed on scientific projects at universities and the purchases made to carry out those projects. It then tracks people to the businesses and universities that hire them, and purchases to the vendors from which they come. Since UMETRICS relies entirely on administrative data provided by member universities (now around 50), the U.S. Census Bureau and other naturally occurring data, there are no reporting errors, sample coverage concerns or burden for people. It covers essentially all federal research funding as well as some funding from private foundations.

Who does research funding support?

Our administrative data allow us to identify everyone employed on research projects, not just those who appear as authors on research articles. This is valuable because we’re able to identify students and staff, who may be less likely to author papers than faculty and postdocs but who turn out to be an important part of the workforce on funded research projects. It’s like taking into account everyone who works in a particular store, not just the manager and owner.

We compared the distribution of people supported on research projects at some of the largest National Science Foundation (NSF) Divisions and National Institutes of Health (NIH) Institutes and Centers. Together, the NSF and NIH support close to 70 percent of federally funded academic R&D.

The striking thing is that the majority of people employed on research projects are somewhere in the training pipeline, whether undergraduates; graduate students, who are particularly prevalent at NSF; or postdocs, who are more prevalent at NIH. Staff frequently constitute 40 percent of the NIH-supported workforce, but faculty are a relatively small portion of the workforce at all NIH Institutes and NSF Divisions.

Based on these results, it seems likely that changes in federal research funding will have substantial effects on trainees, which would naturally have implications for the future STEM workforce.

What happens to STEM doctoral recipients?

Given the importance of trainees in the research workforce, we have focused much of our research on graduate students.

We mapped the universities in our sample and the share of the graduate students in each state one year after graduation. Our data show that many grad students contribute to local economies – 12.7 percent are within 50 miles of the universities where they trained. For six of our eight universities, more people stayed in state than went to any other single state. At the same time, graduate students fan out nationally, with both coasts, Illinois and Texas all being common destinations.

The doctoral recipients in our sample are also more likely to take jobs at establishments that are engines of the knowledge economy. They are heavily overrepresented in industries such as electronics, semiconductors, computers and pharmaceuticals, and underrepresented in industries such as restaurants, grocery stores and hotels. Doctoral degree recipients are almost four times as likely as the average U.S. worker to be employed by an R&D-performing firm (44 percent versus 12.6 percent). And, the establishments where the doctoral degree recipients work have a median payroll of over US$90,000 per worker compared to $33,000 for all U.S. establishments and $61,000 for establishments owned by R&D performing firms.

We also studied initial earnings by field and find that earnings of doctoral degree recipients are highest in engineering; math and computer science; and physics. Among the STEM fields, the lowest earnings are in biology and health, but our data also suggest that many of the people in these fields take postdoc positions that have low earnings, which may improve long-run earnings prospects. Interestingly, we find that women have substantially lower earnings than men, but these differences are entirely accounted for by field of study, marital status and presence of children.

Taken as a whole, our research indicates that the workers trained on research projects play a critical role in the industries and at companies critical for our new, knowledge economy.

What purchases do research projects drive?

Researchers need to buy the equipment they use to do their science.
Michael Pereckas, CC BY-SA

Another way in which sponsored research projects affect the economy in the short run is through purchases of equipment, supplies and services. Economist Paula Stephan writes eloquently of these transactions, which range from purchasing computers and software, to reagents, medical imaging equipment or telescopes, even to lab mice and rats.

Still-unpublished work studying the vendors who sell to sponsored research projects at universities shows that these firms are frequently high-tech and often local. Moreover, firms that are vendors to university research projects are more likely to open new establishments near their campus customers. Thus, there is some evidence that research projects directly stimulate local economies.

So while the goal of sponsored research projects is to develop new knowledge, they also support the training of highly skilled STEM workers and support activity at businesses. The UMETRICS initiative allows us to see just which people and businesses are touched by sponsored research projects, providing a window into the short-run effects of research funding as well as hinting at its long-run value.

Bruce Weinberg, Professor of Economics, The Ohio State University

This article was originally published on The Conversation. Read the original article.

The maths of life and death: our secret weapon in the fight against disease

Christian Yates, University of Bath

Maths is the language of science. It crops up everywhere from physics to engineering and chemistry – aiding us in understanding the origins of the universe and building bridges that won’t collapse in the wind. Perhaps a little more surprisingly, maths is also increasingly integral to biology.

For hundreds of years mathematics has been used, to great effect, to model relatively simple physical systems. Newton’s universal law of gravitation is a fine example. Relatively simple observations led to a rule which, with great accuracy, describes the motion of celestial bodies billions of miles away. Traditionally, biology has been viewed as too complicated to submit to such mathematical treatment.

Biological systems are often classified as “complex”. Complexity in this sense means that, due to the complicated interaction of many sub-components, biological systems can exhibit what we call emergent behaviour – the system as a whole demonstrates properties which the individual components acting alone cannot. This biocomplexity has often been mistaken for vitalism, the misconception that biological processes are dependent on a force or principle distinct from the laws of physics and chemistry. Consequently, it has been assumed that complex biological systems are not amenable to mathematical treatment.

How did the leopard get his spots?
VarnaK / Shutterstock.com

There were some early dissenters. Famous computer scientist and World War II code-breaker Alan Turing was one of the first to suggest that biological phenomena could be studied and understood mathematically. In 1952 he proposed a pair of beautiful mathematical equations which provide an explanation for how pigmentation patterns might form on animals’ coats.
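Turing’s mechanism is what we now call a reaction-diffusion system: two chemicals that react with one another while diffusing at different rates. In its generic modern form (a sketch of the idea rather than Turing’s exact 1952 notation), the pair of equations reads:

$$
\frac{\partial u}{\partial t} = D_u \nabla^2 u + f(u, v), \qquad
\frac{\partial v}{\partial t} = D_v \nabla^2 v + g(u, v)
$$

where u and v are the concentrations of an activator and an inhibitor, f and g describe how they react, and spots or stripes can emerge spontaneously when the inhibitor diffuses much faster than the activator (D_v ≫ D_u).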

Examples of different types of Turing patterns.
© Kit Yates

Not only was his work beautiful, it was also counter-intuitive – the sort of work that only a brilliant mind like Turing’s could ever have dreamed up. Even more of a pity, then, that he was so poorly treated under the draconian anti-homosexuality laws of the time. After a course of “corrective” hormone treatment, he killed himself just two years later.

An emerging field

Since then, the field of mathematical biology has exploded. In recent years, increasingly detailed experimental procedures have led to a huge influx in the biological data available to scientists. This data is being used to generate hypotheses about the complexity of previously abstruse biological systems. In order to test these hypotheses, they must be written down in the form of a model which can be interrogated to determine whether it correctly mimics the biological observations. Mathematics is the natural language in which to do this.

In addition, the advent of, and subsequent increase in, computational ability over the last 60 years has enabled us to suggest and then interrogate complex mathematical models of biological systems. The realisation that biological systems can be treated mathematically, coupled with the computational ability to build and investigate detailed biological models, has led to the dramatic increase in the popularity of mathematical biology.

Maths has become a vital weapon in the scientific armoury we have to tackle some of the most pressing questions in medical, biological and ecological science in the 21st century. By describing biological systems mathematically and then using the resulting models, we can gain insights that are impossible to access through experiments and verbal reasoning alone. Mathematical biology is incredibly important if we want to change biology from a descriptive into a predictive science – giving us power, for example, to avert pandemics or to alter the effects of debilitating diseases.

A new weapon

Over the last 50 years, for example, mathematical biologists have built increasingly complex computational representations of the heart’s physiology. Today, these highly sophisticated models are being used in an attempt to understand better the complicated functioning of the human heart. Computer simulations of heart function allow us to make predictions about how the heart will interact with candidate drugs, designed to improve its function, without having to undertake expensive and potentially risky clinical trials.

We use mathematical biology to study disease as well. On an individual scale, researchers have elucidated the mechanisms by which our immune system battles viruses through mathematical immunology, and suggested potential interventions for tipping the scales in our favour. On a wider scale, mathematical biologists have proposed mechanisms that can be used to control the spread of deadly epidemics like Ebola, and to ensure the finite resources dedicated to this purpose are employed in the most efficient way possible.
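To give a flavour of what such epidemic models look like, here is the classic textbook SIR (susceptible-infected-recovered) model in a few lines of Python – a generic sketch, not the Ebola-specific models mentioned above:

```python
def sir(beta=0.3, gamma=0.1, days=160, i0=0.01, dt=0.1):
    # beta: transmission rate; gamma: recovery rate (R0 = beta / gamma).
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

peak = max(i for _, i, _ in sir())
print(f"peak infected fraction: {peak:.1%}")
```

Interrogating a model like this – asking, say, how the peak changes if transmission is cut by a third – is precisely the kind of question that guides interventions and resource allocation.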

Mathematical biology is even being used to inform policy. There has been research done on fisheries, for example, using mathematical modelling to set realistic quotas in order to ensure that we do not overfish our seas and that we protect some of our most important species.

A day’s haul.
AJP / Shutterstock.com

The increased comprehension gleaned by taking a mathematical approach can lead to better understanding of biology at a range of different scales. At the Centre for Mathematical Biology in Bath, for example, we study a number of pressing biological problems. At one end of the spectrum, we try to develop strategies for averting the devastating effects of locust plagues comprising up to a billion individuals. At the other end, we try to elucidate the mechanisms that give rise to the correct development of the embryo.

Although mathematical biology has traditionally been the domain of applied mathematicians, it is clear that mathematicians who self-classify as pure have a role to play in the mathematical biology revolution. The pure discipline of topology is being used to understand the knotty problem of DNA packing and algebraic geometry is being used to select the most appropriate model of biochemical interaction networks.

As the profile of mathematical biology continues to rise, emerging and established scientists from disciplines across the scientific spectrum will be drawn to tackle the rich range of important and novel problems that biology has to offer.

Turing’s revolutionary idea, although not fully appreciated in his time, demonstrated that there was no need to appeal to vitalism – the god in the machine – to understand biological processes. Chemical and physical laws encoded in mathematics, or “mathematical biology” as we now call it, could do just fine.

Christian Yates, Senior Lecturer in Mathematical Biology, University of Bath

This article was originally published on The Conversation. Read the original article.

The economics of self-service checkouts

Gary Mortimer, Queensland University of Technology and Paula Dootson, Queensland University of Technology

Self-checkouts in supermarkets are increasing as businesses battle to reduce costs and increase service efficiency. But looking at the numbers, it isn’t clear that self-service is an easy win for businesses.

Self-checkouts aren’t necessarily faster than other checkouts, don’t result in lower staff numbers, and there are indirect costs such as theft, reduced customer satisfaction and loyalty.

Worldwide, self-checkout terminals are projected to rise from 191,000 in 2013 to 325,000 by 2019. A survey of multiple countries found 90% of respondents had used self-checkouts, with Australia and Italy leading the way.

Employment in the Australian supermarket and grocery industry went down for the first time in 2015-16 and is projected to remain flat for a few years. But staff numbers are projected to rebound again, in part due to the need to curtail growing theft in self-checkouts.

Social trends pushing self-checkout

There are a couple of intertwining trends that explain the rise of self-checkouts.

We now visit our supermarkets more frequently than ever before – two to three times per week, in fact. This means our basket contains fewer items, and being able to wander up to a self-checkout with little to no wait time has been an expedient way to shop. Most shoppers consider self-checkouts both fast and easy to use, although this varies with age: 90% of shoppers aged 18-39 found self-service checkouts easy to use, while only 50% of those over 60 said the same.

Shoppers also gain value from taking control of the transaction – being able to ring up their own goods and pack them the way they want. This is because a sense of control over their own shopping can lead to greater customer satisfaction and intent to use and reuse self-serve technology.

The numbers behind self-checkouts

Wages represent around 9.5% of supermarket revenue in Australia, and reducing wages is one of the reasons proposed for the uptake of self-checkout.

But from a business perspective, moving from “staffed” checkouts to self-serve machines isn’t cheap. A typical setup costs around US$125,000. On top of that there are the costs of integrating the machines into the technology already in place – the software and other systems used to track inventory and sales, and the ongoing costs – to cover breakdowns and maintenance.

But the biggest direct cost to retailers of adopting self-service checkouts is theft. Retail crime in Australia costs the industry over A$4.5 billion each year.

There is reason to believe that rates of theft are higher on self-service machines than regular checkouts. A study of 1 million transactions in the United Kingdom found losses incurred through self-service technology payment systems totalled 3.97% of stock, compared to just 1.47% otherwise. Research shows that one of the drivers of this discrepancy is that everyday customers – those who would not normally steal by any other means – disproportionately steal at self checkouts.

Studies also show that having a human presence around – in this case, employees in the self-checkout area – increases the perceived risk of being caught, which reduces “consumer deviance”. This is why retailers have been adding staff to monitor customers, absorbing the additional losses, or passing them on to customers in an “honesty tax”.

Making self-checkouts work

Preliminary work by researchers Kate Letheren and Paula Dootson suggests people are less likely to steal from a human employee than from an inanimate object – not only because they might get caught, but because they feel bad about it.

On the other hand, consumers have plenty of justifications to excuse self-checkout theft, which is leading to its normalisation.

To counter this, researcher Paula Dootson is trying to use design to combat deviance. One of the ways is through extreme personalisation of service to reduce customer anonymity. Anonymity is an undesirable outcome of removing employees and replacing them with technology.

Other ideas are to include moral reminders prior to the opportunity to lie or steal (such as simply reminding people to be honest), and to humanise the machines by encoding human characteristics to trigger empathy.

While self-service technologies will continue to be adopted by businesses broadly, and particularly within the retail sector, it will be important for retailers to take a holistic approach to implementation and loss prevention.

Self-service technology reduces front-line staffing costs and increases efficiency by redistributing displaced staff into other service-dominant areas of the business, but it creates unintended costs. These business costs can be direct, in the form of theft, but also indirect, like reduced customer satisfaction and loyalty – something that some supermarkets are focusing on today.

Gary Mortimer, Associate Professor, Queensland University of Technology and Paula Dootson, Research Fellow; PwC Chair in Digital Economy, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

From little Acorns – a brief history of computer games in the classroom

Rhys James Jones, Swansea University

Play has always been central to growing up – whether it’s in the street or on a playing field, or in the structured formality of teachers’ quizzes.

These days tablet computers are in nearly every pupil’s hands and children learn through computer games – both in the classroom and at home. Children’s coding initiatives and tiny computers such as the Raspberry Pi and the BBC’s micro:bit have also become big hits in the education world, helping to teach computer basics in playful ways.

But while it’s tempting to see the gamification of education as a new development, there is in fact a long history of children using computer games to help with their learning – which goes right back to the 1970s.

This was the decade during which computers first inched out of the research lab and into everyday life, making the idea of a home or personal computer somewhat closer to a reality. In 1974, Ted Nelson, a US pioneer of information technology, wrote what is often considered “the first personal computer book” – Computer Lib/Dream Machines. It was in this book that, with uncanny foresight, Nelson suggested pupils in the future would make use of hyperlinked documents and touchscreens to widen their knowledge.

Away from Nelson’s speculation, the classroom reality was more mundane. Few schools could afford computers of their own. And for those that could, computer science meant punching instructions onto paper tape – a form of data storage consisting of a long strip of paper in which holes are punched to represent the data.

GCSE Computing – 1970s style.
The National Museum of Computing

But in the late 1970s, something of a change happened – at least in the UK. A worried government, concerned about Japanese innovation and the threat of automation, commissioned a report from the Manpower Services Commission (MSC) and the BBC to look into how to develop computer literacy initiatives.

Designed to raise computer awareness, these initiatives happily coincided with the rise of microprocessors which were enabling the manufacture of cheaper, smaller machines. And the BBC decided to invite UK companies to submit proposals for a Microcomputer System, to a predefined specification. A system proposed by a young company in Cambridge was chosen and Acorn’s BBC Microcomputer was born.

The BBC micro in all its glory.
Public Domain

Simpler and faster

The BBC Micro, along with some other machines, could be bought by schools at half price via government subsidies.

Their beige cases and red function keys became a familiar sight from primary through to university level. But they were still expensive: a discounted “Beeb” and monitor would cost a school more than £1,000 at today’s prices.

Learning to program was simpler and faster on the Beeb than in years past, with paper tape replaced by a monitor and a beginner’s coding language known as BASIC – which stands for “Beginner’s All-purpose Symbolic Instruction Code”. This meant many more students were able to engage with computer science.

The rise of the games

Then there were the games. Despite excellent arcade clones and some true originals, the cost of the unsubsidised Beeb made it difficult for it to gain a foothold as a home videogame system.

Perhaps its educational image didn’t help either – and maybe it never quite shook off what comedian Simon Munnery described as

The stench of school … most of the games would be … Isn’t Geography Nice?

The Beeb’s dominance in schools led to a torrent of educational software being released, of varying quality. Indeed, many of these early educational “games” aided nothing more than rote learning.

The stench of educational games.
Author provided

But educational initiatives helped push the boundaries, particularly in science and maths. The best-remembered games were imaginative, often programmed by teachers themselves, and learning happened by stealth. For example, the fondly recalled Granny’s Garden, while limited, took players on a puzzle-solving journey to avoid traps and witches – all rendered in teletext graphics.

Adventure was also central to L: A Mathemagical Journey, which used shades of Lewis Carroll to build players’ numeracy skills, while encouraging them to reflect on their experience.

Straddling home and school, Acorn’s software arm Acornsoft used surprisingly entertaining artificial intelligence techniques to test and extend subject-based learning. Younger, newly literate learners could encounter Podd, a floating head performing various actions at their typed command.

But in the 21st century, it’s not just learning, but the whole education system that has become a game – as schools, classes, teachers and students strive to top the board in league tables and PISA rankings. At the same time, teachers’ unions, children and parents all argue against excessive assessment and testing.

Maybe then we should all learn from the classroom videogame pioneers of the past few decades. Because although it’s clear that game-based learning has a role to play in education, there still needs to be a point to it – within a wider context.

And while educational games can be creative and innovative, they are at their best when they don’t smell too much of school.

Rhys James Jones, Senior Lecturer in Digital Media, Swansea University

This article was originally published on The Conversation. Read the original article.

A day in the life of a smart-city commuter – and why it’s not so far from reality

Marcin Budka, Bournemouth University

The alarm on your smart phone went off 10 minutes earlier than usual this morning. Parts of the city are closed off in preparation for a popular end of summer event, so congestion is expected to be worse than usual. You’ll need to catch an earlier bus to make it to work on time.

The alarm time is tailored to your morning routine, which is monitored every day by your smart watch. It takes into account the weather forecast (rain expected at 7am), the day of the week (it’s Monday, and traffic is always worse on a Monday), as well as the fact that you went to bed late last night (this morning, you’re likely to be slower than usual). The phone buzzes again – it’s time to leave, if you want to catch that bus.

While walking to the bus stop, your phone suggests a small detour – for some reason, the town square you usually stroll through is very crowded this morning. You pass your favourite coffee shop on your way, and although they have a 20% discount this morning, your phone doesn’t alert you – after all, you’re in a hurry.

After your morning walk, you feel fresh and energised. You check in at the Wi-Fi and Bluetooth-enabled bus stop, which updates the driver of the next bus. He now knows that there are 12 passengers waiting to be picked up, which means he should increase his speed slightly if possible, to give everyone time to board. The bus company is also notified, and are already deploying an extra bus to cope with the high demand along your route. While you wait, you notice a parent with two young children, entertaining themselves with the touch-screen information system installed at the bus stop.

Bus stops of the future.
from www.shutterstock.com

Once the bus arrives, boarding goes smoothly: almost all passengers use tickets stored on their smart phones, so there is only one time-consuming cash payment. On the bus, you take out a tablet from your bag to catch up on some news and emails using the free on-board Wi-Fi service. You suddenly realise that you forgot to charge your phone, so you connect it to the USB charging point next to the seat. Although the traffic is really slow, you manage to get through most of your work emails, so the time on the bus is by no means wasted.

The moment the bus drops you off in front of your office, your boss informs you of an unplanned visit to a site, so you make a booking with a car-sharing scheme, such as Co-wheels. You secure a car for the journey, with a folding bike in the boot.

Your destination is in the middle of town, so when you arrive on the outskirts you park the shared car in a nearby parking bay (which is actually a member’s unused driveway) and take the bike for the rest of the journey to save time and avoid traffic. Your travel app gives you instructions via your Bluetooth headphones – it suggests how to adjust your speed on the bike, according to your fitness level. Because of your asthma, the app suggests a route that avoids a particularly polluted area.

Sick ride.
Mr.tinDC/Flickr, CC BY-NC-ND

After your meeting, you opt to get a cab back to the office, so that you can answer some emails on the way. With a tap on your smartphone, you order the cab, and in the two minutes it takes to arrive you fold up your bike so that you can return it to the boot of another shared vehicle near your office. You’re in a hurry, so no green reward points for walking today, I’m afraid – but at least you made it to the meeting on time, saving kilograms of CO2 on the way.

Get real

It may sound like fiction, but truth be told, most of the data required to make this day happen are already being collected in one form or another. Your smart phone is able to track your location, speed and even the type of activity that you’re performing at any given time – whether you’re driving, walking or riding a bike.

Meanwhile, fitness trackers and smart watches can monitor your heart rate and physical activity. Your search history and behaviour on social media sites can reveal your interests, tastes and even intentions: for instance, the data created when you look at holiday offers online not only hints at where you want to go, but also when and how much you’re willing to pay for it.

Personal devices aside, the rise of the Internet of Things – distributed networks of all sorts of sensors, measuring anything from air pollution to traffic intensity – is yet another source of data. Not to mention the constant feed of information available on social media about any topic you care to name.

With so much data available, it seems as though the picture of our environment is almost complete. But all of these datasets sit in separate systems that don’t interact, managed by different organisations which don’t necessarily fancy sharing. So although the technology is already there, our data remains siloed, and institutional obstacles stand in the way of attaining this level of service. Whether or not that’s a bad thing is up to you to decide.

Marcin Budka, Principal Academic in Data Science, Bournemouth University

This article was originally published on The Conversation. Read the original article.

The seven most extreme planets ever discovered

Christian Schroeder, University of Stirling

Scientists recently discovered the hottest planet ever found – with a surface temperature greater than some stars. As the hunt for planets outside our own solar system continues, we have discovered many other worlds with extreme features. And the ongoing exploration of our own solar system has revealed some pretty weird contenders, too. Here are seven of the most extreme.

The hottest

How hot a planet gets depends primarily on how close it is to its host star – and on how hot that star burns. In our own solar system, Mercury is the closest planet to the sun at a mean distance of 57,910,000km. Temperatures on its dayside reach about 430°C, while the sun itself has a surface temperature of 5,500°C.

But stars more massive than the sun burn hotter. The star HD 195689 – also known as KELT-9 – is 2.5 times more massive than the sun and has a surface temperature of almost 10,000°C. Its planet, KELT-9b, is much closer to its host star than Mercury is to the sun.

Though we cannot measure the exact distance from afar, we know it circles its host star every 1.5 days (Mercury’s orbit takes 88 days). This results in a whopping temperature of about 4,300°C – hotter than many of the stars with a lower mass than our sun. The rocky planet Mercury would be a molten droplet of lava at this temperature. KELT-9b, however, is a Jupiter-type gas giant. It is shrivelling away as the molecules in its atmosphere break down into their constituent atoms – and burn off.
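
That interplay of stellar temperature, stellar size and orbital distance is captured by the standard blackbody equilibrium-temperature formula, T_eq = T_star × √(R_star / 2a). A rough calculation for KELT-9b is sketched below – the stellar parameters are approximate published values and the result is only indicative:

```python
import math

R_SUN = 6.957e8        # metres
AU = 1.496e11          # metres

def equilibrium_temp(t_star_k, r_star, a, albedo=0.0):
    """Blackbody equilibrium temperature with full heat redistribution."""
    return t_star_k * math.sqrt(r_star / (2 * a)) * (1 - albedo) ** 0.25

# Approximate published values for the KELT-9 system (assumptions):
t_star = 10_170        # K
r_star = 2.36 * R_SUN
a = 0.034 * AU

t_eq = equilibrium_temp(t_star, r_star, a)
print(f"{t_eq:.0f} K  (~{t_eq - 273:.0f} C)")
# ~4,100 K; the measured dayside is hotter still, since a tidally
# locked planet does not redistribute heat evenly.
```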

The coldest

At a temperature of just 50 degrees above absolute zero (about -223°C), OGLE-2005-BLG-390Lb snatches the title of the coldest planet. At about 5.5 times the Earth’s mass, it is likely to be a rocky planet too. Its orbit would put it somewhere between Mars and Jupiter in our solar system – not particularly distant – but its host star is a low-mass, cool star known as a red dwarf.

Freezing but Earth-like: OGLE-2005-BLG-390Lb.
ESO, CC BY-SA

The planet is popularly referred to as Hoth, in reference to an icy planet in the Star Wars franchise. Unlike its fictional counterpart, however, it won’t be able to sustain much of an atmosphere (nor life, for that matter). This is because most of its gases will be frozen solid – adding to the snow on the surface.

The biggest

If a planet can be as hot as a star, what then makes the difference between stars and planets? Stars are so much more massive than planets that fusion ignites in their cores under the enormous gravitational pressure. Common stars like our sun burn by fusing hydrogen into helium. But there is a class of objects called brown dwarfs, which are big enough to start some fusion processes but not massive enough to sustain them.

The planet DENIS-P J082303.1-491201 b – with the equally unpronounceable alias 2MASS J08230313-4912012 b – has 28.5 times the mass of Jupiter, making it the most massive planet listed in NASA’s exoplanet archive. It is so massive that it is debated whether it is still a planet (it would be a Jupiter-class gas giant) or whether it should be classified as a brown dwarf. Ironically, its host star is itself a confirmed brown dwarf.
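
One commonly used – and contested – dividing line is the deuterium-burning limit of roughly 13 Jupiter masses, with sustained hydrogen burning (a true star) beginning at roughly 80. As a toy illustration of where that leaves this object:

```python
# Toy classifier using the (contested) deuterium-burning limit of
# roughly 13 Jupiter masses and the hydrogen-burning limit of
# roughly 80; real classification also weighs formation history,
# not mass alone.
def classify(mass_jupiters: float) -> str:
    if mass_jupiters < 13:
        return "planet"
    if mass_jupiters < 80:
        return "brown dwarf (or debated massive planet)"
    return "star"

print(classify(28.5))  # DENIS-P J082303.1-491201 b sits in the grey zone
```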

The smallest

Just slightly larger than our moon and smaller than Mercury, Kepler-37b is the smallest exoplanet yet discovered. A rocky world, it is closer to its host star than Mercury is to the sun. That means the planet is too hot to support liquid water and hence life on its surface.

The oldest

PSR B1620-26 b, at 12.7 billion years old, is the oldest known planet. A gas giant 2.5 times the mass of Jupiter, it has seemingly been around forever. The universe itself, at 13.8 billion years, is only about a billion years older.

Artist’s impression of the oldest planet known.
NASA and G. Bacon (STScI)

PSR B1620-26 b has two host stars rotating around each other – and it has outlived both. They are a neutron star, the remnant of a star that exploded in a supernova, and a white dwarf, the collapsed core left when a sun-like star burned through its fuel. However, as the planet formed so early in the universe’s history, it probably doesn’t have enough of the heavy elements such as carbon and oxygen (which formed later) needed for life to evolve.

The youngest

The planetary system V830 Tauri is only 2m years old. The host star has the same mass as our sun but twice the radius, which means it has not fully contracted into its final shape yet. The planet – a gas giant with three quarters the mass of Jupiter – is likewise probably still growing. That means it is acquiring more mass by frequently colliding with other planetary bodies like asteroids in its path – making it an unsafe place to be.

The worst weather

Because exoplanets are too far away for us to observe any weather patterns, we have to turn our eyes back to our own solar system. If you have seen the giant swirling hurricanes photographed by the Juno spacecraft flying over Jupiter’s poles, the largest planet in our solar system is certainly a good contender. However, the title goes to Venus. A planet the same size as Earth, it is shrouded in clouds of sulfuric acid.

The atmosphere moves around the planet much faster than the planet rotates, with winds reaching hurricane speeds of 360km/h. Double-eyed cyclones are sustained above each pole. The atmosphere is almost 100 times denser than Earth’s and made up of over 95% carbon dioxide. The resulting greenhouse effect creates hellish temperatures of at least 462°C on the surface – actually hotter than Mercury. Though Venus is bone-dry and hostile to life, the heat may explain why it has fewer volcanoes than Earth.
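
A quick back-of-the-envelope calculation shows just how extreme this “super-rotation” is: at 360km/h, the cloud tops lap the planet more than 50 times in a single Venusian day. (Venus’s radius of about 6,052km and its 243-day rotation period are standard values.)

```python
import math

RADIUS_KM = 6052       # Venus's mean radius (standard value)
WIND_KMH = 360         # cloud-top wind speed quoted in the article
ROTATION_DAYS = 243    # Venus's sidereal rotation period

circumference_km = 2 * math.pi * RADIUS_KM     # ~38,000 km
lap_days = circumference_km / WIND_KMH / 24    # ~4.4 days per lap
laps_per_rotation = ROTATION_DAYS / lap_days   # ~55 laps

print(f"One lap of the planet: {lap_days:.1f} days")
print(f"Laps per Venusian rotation: {laps_per_rotation:.0f}")
```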

Christian Schroeder, Lecturer in Environmental Science and Planetary Exploration, University of Stirling

This article was originally published on The Conversation. Read the original article.