
The short history of riverfly sampling is littered with abbreviations. Sampling of aquatic insects by volunteers or ‘citizen scientists’ really got going in the noughties, supported by fly identification courses; I even attended one myself and surveyed my local river for a while. Gradually the collection of data was formalised and catalogued by a body called the Riverfly Partnership, now headquartered in the Freshwater Biological Association (FBA). The Partnership coordinates the Anglers’ Riverfly Monitoring Initiative (ARMI). This comes with the ARMI protocol for sampling and quantifying bug populations. To complicate matters there are other scoring systems — BMWP, ASPT, LIFE, and more.

There are two potential problems with riverfly monitoring. One is the time needed to perform the surveys; the other concerns the way counts of sampled fly are recorded. Rivers are not homogeneous bodies of water. Just as fish are not evenly spread throughout a river, insects can be assumed unevenly distributed too. Most surveys use the three-minute kick-sweep sample. A net is positioned behind someone’s foot, which then agitates the bottom for three minutes. The net can be moved around during the three minutes to try to get a representative sample for the area. This is typically done once at a given site. If the small area sampled is not representative of the local insect population, then the data acquired may not truly reflect the population at the site. As far as I know, only Cyril Bennett, a trained amateur entomologist, has attempted multiple sampling.
Repeated sampling of an area increases the time needed to complete the survey. Moreover, the counting of insects is very time-consuming, especially when there are many species in large numbers. This is the main reason why exact counts are not made; instead numbers are recorded by order of magnitude, i.e. 1 to 9, 10 to 99, 100 to 999, etc. This is a log scale on which 1 represents the first group (fewer than 10), 2 the second group, and so on. Sometimes these groups are labelled A and B instead, dispensing with the fiction of counts. Most of the scoring systems use this method. Fisheries biologists will tell you that this is adequate for recording fly abundance. This may be so — if you have thousands in a sample you don’t really need to know the exact number, particularly when it would take someone hours to count the individuals. However, log scales might hide important information. Fly monitoring’s main purpose is to keep an eye on pollution levels. If an important indicator species dropped from, say, 99 individuals to 10 between sample dates, both counts fall in the same group and the change would not be detected.
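The masking effect is easy to demonstrate. Here is a minimal sketch of the grouping scheme described above (the function name is mine, not part of any official protocol): counts an order of magnitude apart can land in the same group, so a tenfold decline leaves no trace in the recorded data.

```python
import math

def abundance_category(count):
    """Log-scale abundance group as described in the text:
    1-9 -> group 1, 10-99 -> group 2, 100-999 -> group 3, and so on."""
    if count < 1:
        return 0  # species absent from the sample
    return int(math.log10(count)) + 1

# A crash from 99 individuals to 10 is invisible: both are group 2.
print(abundance_category(99))  # 2
print(abundance_category(10))  # 2
# Only a drop below 10 would register as a change of group.
print(abundance_category(9))   # 1
```

In other words, the recorded figure changes only when a population crosses a power-of-ten boundary, however large the decline within a band.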
At one of the riverfly conferences held at the Natural History Museum some years ago, someone challenged Steve Ormerod, a biologist at Cardiff University, about this method. He responded that he was editor of some journal or other, the implication being he knew best. Not the way science should be done, but the way some scientists are, unfortunately. As it happened, Cyril Bennett took the opposite view. But the problem of hidden information is not the only one. Biologists are not typically good with statistics, although many like to think they are, and they take some enormous liberties with figures. So the numbers that represent the logarithmic groups, actually powers of 10, are often added and averaged. This can lead to absurdities. Suppose two insect species are estimated at just under a hundred each and therefore both classified as group 2. Add them together and you get 4, which implies an abundance of 10 to the power 4, i.e. about 200 becomes 10,000. I’ve also seen statistical testing performed on such figures, total nonsense. A recent example of dubious analysis appears in Stephen Brooks et al. The paper compares BMWP and ARMI scores using the wrong kind of analysis. In the first place the comparison is pointless because any measures derived from fly counts should be correlated (unless one or more are useless). Second, figures based on ranks should not be compared using methods intended for numerical data.
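The arithmetic of that absurdity can be spelled out in a few lines. This is a sketch of the reasoning in the paragraph above, with illustrative function names of my own: summing group numbers multiplies, rather than adds, the abundances they stand for.

```python
import math

def abundance_category(count):
    """Log-scale abundance group: 1-9 -> 1, 10-99 -> 2, 100-999 -> 3, ..."""
    return int(math.log10(count)) + 1 if count >= 1 else 0

def implied_upper_count(category):
    """Largest count the category could represent (group n means < 10**n)."""
    return 10 ** category - 1

# Two species at just under a hundred individuals each:
a, b = 95, 95
summed = abundance_category(a) + abundance_category(b)  # 2 + 2 = 4
print(summed)                       # 4
print(implied_upper_count(summed))  # 9999 -- yet the true total is 190
```

Because the group numbers are exponents, adding them corresponds to multiplying the underlying counts; averaging them is a geometric, not arithmetic, mean. Either way, treating them as ordinary numbers garbles what they measure.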

No doubt the biologists would say the ARMI figures are only meant to give an indication of abundance. In truth they cannot make up their minds. Sometimes the numbers are treated quantitatively, sometimes qualitatively (groups with no numerical meaning). The latest scoring method for riverfly abundance is the ‘Extended’ scheme, which scores several classes of bugs according to sensitivity to pollution. Abundance groups similar to those of the ARMI scheme are scored from 0 to 5, with plus or minus gradations. Despite using the old log categories, this replaces any pretence of numerical measure with a ranking system. Yet it is still compared against the ARMI scheme.
In an earlier post, I referred to one of the riverfly conferences I attended some years ago at which Peter Lapsley asked the purpose of all this data collection. It is a fair question. After a couple of decades of data gathering you would have expected to see some analysis of trends at the very least. I know of none. The raison d’être of ARMI is to use fly observations as a proxy for pollution detection. This is where the citizen scientists come in, a rather grand term for sifters of water bugs in white trays. It’s no surprise the majority are fly fishermen; they are keen to do what they can to improve the fly life on rivers, on which fly fishing depends. It seems they produce more data than the professionals make use of. In the ARMI database of observations you can find not just the 1 to 4 groupings but more exact estimates of numbers of each aquatic insect family. If these estimates are reasonably accurate they are potentially valuable data going begging.
Two questions arise from all this data collection by citizen scientists. The first is why spend all those man-hours on bug sampling when it is surely more straightforward to take water samples and analyse them for pollutants. The second is whether the various scoring systems are reliable and timely measures of pollution incidents. The ‘trigger level’ for pollution, the score at which a pollution incident is declared, seems to involve an awful lot of handwaving. Riverfly’s advice is to apply a ‘fudge factor’ as necessary. Of course invertebrate numbers in a river are subject to many influences and using them as a proxy for pollution is subject to great uncertainty. So why use them at all? The simple answer is cost. Citizen scientists work for free, a large financial benefit that replaces government funding. Taking a water sample is cheap enough, but analysing it for most nasty chemicals requires expensive laboratory facilities. The huge variety of pollutants in our rivers is demonstrated by Ormerod’s study of pharmaceuticals released into the environment, mostly but not entirely from sewage treatment works.

Particularly worrying is the observation that flea treatment insecticides for pets may be at dangerous levels and that microplastics are now ubiquitous in rivers, their effect on ecology unknown. Although there are now handheld gadgets available to measure, for example, phosphates on site, these other chemicals require a lab to detect. Monitoring by the Environment Agency is now so patchy as to be of limited value.
But there is another point. Universities and public bodies like the Environment Agency are now run by money men, accountants and philistines. Government since 2010 has eviscerated environmental watchdogs. Scientists are now under far greater pressure to publish papers: never mind the quality, count the numbers. Citizen science is a source of free labour to help this along. Riverfly monitoring has kept a few biologists busy in that respect. Not that one can blame them for this. Yet the quality of the science, supported by the citizen data dogsbodies, is not always the best. Challenge any of the biologists about this and you are likely to get the brushoff; citizen science only goes so far. The impression is that biologists are not interested in science beyond their own sphere, as I’ve noted before. As with the disappearing salmon, the research lacks urgency; meanwhile on the riverbank anglers still lament the decline in flylife.
Despite the statistical infelicities in monitoring, there is genuine value in the data, especially when it is collected regularly, which is not always the case; some rivers have only one observation, or none. The State of Nature report shows some improvement in the distribution of freshwater invertebrate populations between 1995 and 2005 following the EU Urban Wastewater Treatment Directive of 1991, though some are still in decline and many species have gone extinct. The real test of citizen science riverfly monitoring is whether it brings long-term improvement to our waterways. So far this is not apparent.
