A simpler way with risk

One of the things that sticks in researchers’ minds when they read about the ERC programmes, and one of the things firmly embedded in the myths and legends that surround them, is that the panels are looking to fund high risk/high gain projects.  This is an aspect of the work that needs to be thought about carefully and from a number of different perspectives. I have covered it in earlier posts, where I pointed out that its importance can be overstated and that there is no need for researchers to go searching for work to do just for the sake of being able to say that it carries high risk.  Nor is there any point in labelling regular incremental and developmental work as high risk when it patently isn’t – not that this kind of work is likely to win anyway, as inflated claims are one of the things the panels are in fact pretty good at sniffing out.

What wins is work that promises, and is able to demonstrate that it is capable of making, a good clear step beyond the current frontier of the state-of-the-art.  Frontiers are places of risk, and stepping over them is risky because it is always possible that they will prove difficult or even impossible to cross.  The people and ideas arriving at the frontier might always be imposters, frauds and charlatans (I have over the years met some outrageous and deluded charlatans in my work on projects – money attracts the opportunist as well as the serious scientist, of course).  And we need to produce our papers and bona fides to support our claims that we are who we say we are, that we know what we know, and that we can and will do as we say – the job of the proposal is to prove us credible as we face the judgement of those who can either hold open or bar the door to progress in our careers and also, hopefully, in knowledge.  So, in ERC projects in particular we find ourselves invited to approach the risky edges of the things we know and the things yet unknown.  But there is no need to dramatise this situation, which is common to all new things.

In this post I’ll try to explain why the moment of stepping across the frontier of the state-of-the-art is the precise site of the risk to be found in science projects of this kind, and why it is the basis for the explanation of the riskiness of the work.  Describing and locating risk in the development of the proposal is easier than most researchers make it.

Risk, in a nutshell, is inherent to science when science is described in a way that is likely to persuade the ERC evaluators, and we don’t really need to go outside the description of the scientific project to do the right kind of job on the description of risk.  The proposal should focus above all, possibly exclusively, on the factors that make the science risky, i.e. the intrinsic risks that come from the possibility of not being able to make the step across the frontier of the state-of-the-art – whether from running up against the limits of knowledge, from the work simply returning nil results, or from demonstrating that the underpinning hypotheses are in fact false.

However, researchers normally put forward what I’ll call extrinsic risks, i.e. things other than the riskiness of the ideas themselves, something other than the promise to go beyond the state-of-the-art. Normally these are issues around access to data, access to records, the size of samples, the accuracy of machines and so on – things to do with the project entity and its implementation rather than strictly to do with the ideas and the advances in knowledge.

Extrinsic risk is often dealt with at the end of the presentation, as a separate sub-section down at the level of resources and timing – it is mostly to do with rolling out the raft of activities that make up the project, along with the milestones and other control points at which risk will be managed in practice.  This is all necessary and helpful and part of a complete description of the project, which is what you should be aiming to give the evaluator in the B1, where they ask for a complete snapshot of the work.

This type of risk information can be captured in a table – a quick search for a ‘risk table’ or ‘risk log’ will throw up plenty of simple examples to use when thinking about doing this part of the planning work.  The critical thing is to put a number against each risk – commit to saying that it is 10% or 90% likely to come about and what the contingency plans for it are.  This can all be done quickly, it is sketching after all, and it is part of the rhetorical work of project making that shows that it is all worked out and that all the ends are tied up.
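To make that concrete, here is a purely illustrative entry – the risk, the percentage and the contingency are all invented for the sake of the example.  Risk: access to the archive records is refused or delayed.  Likelihood: 30%.  Impact: work package 2 slips by up to six months.  Contingency: fall back on the smaller regional collection for which access is already agreed in writing.  Three or four lines of this kind, one per major risk, are usually enough.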

The really important risks in an ERC proposal – the ones that are really to do with the ideas rather than with the completeness of the project entity – are slightly different, and I’ll make them seem more different than they are in practice to try to illustrate what I mean here. Of course, there are deep links between the practical issues of project implementation and the development of the ideas, but I’ll put that to one side for a moment.  I will call these more important risks intrinsic, as they are at the heart of the ideas that the project is trying to sell and are, therefore, at the very core of the project.  These intrinsic risks are, I believe, the kind of risk the ERC is actually thinking about when risk is mentioned, as it comes out of their vision of what science is like.

A few posts ago we looked at the work of Karl Popper to help understand which are the most powerful concepts to use when writing the objective statements.  Here I’ll draw on Popper again to look more closely and carefully at what risk means in ERC projects.

All science is based on conjecture which may turn out to be false, and depends on making detailed and precise predictions about the unknown which can be tested.  If the predictions turn out to be wrong, then the scientific theories used to make them are shown to be false. If they turn out to be correct, then we can say that the theories are not yet disproven, and not very much more.  But the most important idea is that to be scientific the ideas must be falsifiable – open to being disproven by experiment and observation of the phenomena dealt with in the conjectures.

And here, I think, is where the question of risk has its most natural home in the proposal: close to the core of the work, in the objectives set against the state-of-the-art, rather than down among the implementational details, where it can look like an afterthought if it is addressed at all.

The objective statements should be set out in such precise detail that they amount to a crystal clear conjecture about the future – this I have tried to make clear in recent posts.  The objectives are a promise about a new and better future and are phrased around the simple idea that ‘at the end of the project I will for the first time be able to predict and control these phenomena or these events or this system to this measurable degree of accuracy and the benefit to science will be that it opens these new horizons on the future’.
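To give an invented example of the register I mean (the field and the numbers here are made up purely for illustration): ‘By the end of the project we will, for the first time, be able to predict the onset of failure in material X under cyclic loading to within 5%, where the best current models manage no better than 20%.’  Everything in a statement like that can be checked, measured and, crucially, falsified.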

And a precise statement about the future which contains sufficient detail is risky, eminently falsifiable, more valuable to knowledge, and has the added benefit of being very easy to evaluate. In setting out clear and committed objective statements the researcher is demonstrating the extent to which they are promising to go beyond the state-of-the-art, which is immediately and intrinsically a risky thing to do. The objectives, therefore, are the core of the riskiness of the project and the place where the high risk/high gain nature of the work needs to be argued for hardest – and the risk is about transgressing the state-of-the-art.

This should also make it even more obvious to everyone thinking of writing for this programme that the state-of-the-art (the subject of a number of previous posts in this blog) is the foundation stone of the whole edifice.  We need to get the state-of-the-art to a very high level because it is here that the evaluators will look first to falsify the risky predictions the objectives contain.  We need to be able to show that there is nothing obvious in the background that will make our work either impossible or redundant – impossible in the sense that the ideas have already been falsified, or redundant in the sense that the work has already been done or is already being done.

I’d suggest, therefore, that researchers deal with the question of risk at the objectives stage, as that is where it is clearest, presenting it as the extent to which the project objectives make risky predictions.  I think we might well also benefit from some implementation and management risk assessment in the later phase of the work, and it can probably sit there with the planning material about milestones and deliverables etc., as it is a slightly different idea of what risk is.

But the core of it is in the objectives, and it inheres in the detail with which they are crafted and the precision of the promises that they make.  If we do a good enough job there, then all we need to do is draw attention to it: point at the objectives set against the state-of-the-art and draw out the uncertainty of getting from one to the other.

Each funding agency has a different approach to the level of scientific risk that it seeks to fund in its project portfolio, and this is one of the defining characteristics that differentiate one area of project funding from another, whether at national or international programme level.  Some programmes fund applied work in which scientific risk is not really what they are interested in promoting, and so it won’t be funded there.  In these applied programmes the risks tend to be about the practicalities of implementation and take-up associated with achieving project impacts.  In ERC projects the appetite for risk is higher, so they say, which would imply that they are comfortable with the fact that most of the science they fund will not reach its objectives.  If the work were genuinely risky then this would be the case: most projects would not reach their objectives, but the few that did would be worth the risks of this approach to funding, as the ones that reached their ambitious objectives would have the potential to transform their field.

One of the weaknesses of the ERC is that there is no evaluation information, or none available publicly, that would allow us to say whether this appetite for risk and tolerance of projects that don’t reach their stated objectives is operational.  And so we have to assume that it is, to some extent, although other partial indicators suggest that, in fact, they can’t really evaluate projects on the margins of knowledge well enough to make really risky work a sensible bet in this competition.  Good, solid and fairly conventional work that is easily recognisable to a wide range of readers is probably a safer bet here than very focused and specialised work that will transform a part of a field that not many people will understand or see the significance of.

Also, the attributes of risk are not clear cut or well defined, so each evaluator will bring a different take on what it is depending on their background. But, bearing all this in mind, and without becoming obsessed by the idea of risk, we can do an effective job on the really important topic of intrinsic risk by setting out the state-of-the-art in great detail, locating the objectives in that context, and then showing how ambitious and risky the ideas are.  We can also put some numbers and percentages against the likelihood that the objectives won’t be reached, and point out the benefits to science if the conjectures contained in the objectives are proven false.  So, the treatment of risk is focused on the objectives set in the state-of-the-art, and it doesn’t have to be anything more dramatic than pointing to the fact that these are important questions to which we genuinely don’t know the answers – that this really is a step over the edge of knowledge, which is a very risky place to be.
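Again, an invented illustration of the register I have in mind: ‘There is perhaps a 40% chance that Objective 2 returns a null result; even then, a null result would falsify hypothesis H and close off a line of enquiry that currently absorbs a good deal of the field’s effort.’  A sentence or two of that kind, attached to each objective, does most of the work of the risk argument on its own.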