Given the recent explosion in Tianjin, perhaps it is time to examine industrial disasters. Little is known about the Tianjin blast, officially put at roughly 21 tons of TNT (some quick calculations suggest this may be underestimated by as much as a factor of ten), but we know this much: industrial disasters happen with surprising frequency, and they are undesirable outcomes of technological society. The list of recent industrial disasters is fairly extensive: the Gold King Mine wastewater spill in Colorado, the Pathum Thani building collapse in Thailand, which followed only a year after the outrage over the Savar building collapse in Bangladesh. Chemical explosions like the one in Tianjin happen frequently in the US as well, such as the West Fertilizer explosion in Texas in 2013, or the Williams Olefins plant explosion the same year. Most such accidents receive little news coverage, yet they occur regularly. Many of the most extreme industrial disasters are cataloged on Wikipedia, and the list isn’t small.
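To make that parenthetical concrete, here is one possible back-of-envelope sketch; it is my own illustration, not necessarily the calculation behind the official figure or the one alluded to above. It assumes the roughly 150 m fireball pictured below, the widely used empirical fireball correlation D ≈ 5.8·m^(1/3) (diameter in metres, burning mass in kilograms), a hydrocarbon-like heat of combustion, and the standard energy content of TNT. The chemicals actually stored at Tianjin were not simple hydrocarbons, so this only illustrates why the official figure could plausibly be an order-of-magnitude underestimate.

```python
# Rough order-of-magnitude check on the reported 21-ton-TNT figure.
# All numbers below are illustrative assumptions, not measurements:
# an empirical fireball correlation D ~= 5.8 * m**(1/3) (D in metres,
# burning mass m in kg), a hydrocarbon-like heat of combustion of
# ~45 MJ/kg, and TNT at ~4.2 MJ/kg.

D = 150.0                                # apparent fireball diameter, metres
m_burning = (D / 5.8) ** 3               # implied mass of burning material, kg
energy_joules = m_burning * 45e6         # energy released if that mass burned
tnt_tons = energy_joules / 4.2e6 / 1000  # tonnes of TNT with the same energy

print(f"implied burning mass: {m_burning / 1000:.0f} t")
print(f"TNT equivalent:       {tnt_tons:.0f} t (vs. the reported 21 t)")
```

Under these assumptions the fireball alone implies something on the order of 200 tons of TNT equivalent, roughly ten times the reported figure, though every input here is uncertain.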

An approximately 150m diameter fireball, the result of a chemical explosion in Tianjin, China

I expect that, reading this, many people will think something like “yes, this is quite a few disasters, but when you think of all of the industrial activity where these disasters could happen but don’t, it seems like a very small percentage.” Or perhaps “yes, these disasters are bad, but the benefits of the technology that enables them are worth it.” In response to the first reaction, I would point out that the ratio of disasters to potential disasters is, in and of itself, not meaningful. We must decide how many disasters, and with what consequences, we are willing to accept. Is even one too many? Or is there some number we will tolerate?

Industrial disasters and modern industrial society go hand in hand.

The latter reaction, on the other hand, is a value judgment that the current number of disasters is acceptable. It rests on the premise that these disasters are inevitable and that nothing can be done about them if we want to continue to enjoy the lifestyle enabled by industrial society. This is not entirely true. It seems rather nihilistic to accept that nothing can be done about these disasters while still enjoying the fruits of industrial society. It is this question that I wish to examine: how can the frequency and severity of industrial disasters be reduced?

To understand this issue, it is important to think of these accidents as normal. Those who reacted with the second statement astutely made the implicit observation that such accidents are an unavoidable part of industrial civilization. They happen because, as time goes on, technical systems become more complex and more tightly coupled. Complexity means that technological systems become ever more intricate, requiring more technical expertise to interact with them; often, no one person does, or even can, fully understand their operation. Coupling means that these systems depend on one another: industrial plants, chemical storage, nuclear power, and the like rely not on a single complex technical system but on a myriad of them interacting in various ways. Complexity and coupling also have synergistic effects. As systems become more complex, they require the coupling of additional systems, backup systems, redundant systems, and more to meet their functional requirements. At the same time, coupling systems increases their complexity, since workers and engineers must not only understand the systems independently but also try to understand how they interact.

Complex and tightly coupled systems are uncertain. We often do not know exactly how they work, and when something goes wrong, it can be difficult to understand what has gone wrong until it is too late to fix it. It is also more difficult to tell when something is truly going wrong. Such systems have operational issues every single day: warnings sound all the time and lead to nothing, and parts of the system are constantly breaking down, seemingly without negative effect. This phenomenon, called normalized deviance, makes it very difficult to tell whether a problem is serious or just one of the many that might happen on any given day. All of this combines to ensure that technical industrial systems, as they exist today, WILL experience accidents. It is not a matter of if, but of when.

If the highly technical systems of industrialized society are organized such that accidents are a given, how can they be reorganized to alleviate this problem? Sociotechnical systems must be designed to account for the uncertainty that makes accidents so certain. Such a design will likely need to be incremental. Incrementalism has two particularly important features that could be beneficial when considering industrial disasters such as Tianjin: prudence and learning. Incrementalism is essentially a trial-and-error learning process, in which small steps are taken and time is taken afterwards to learn from them before proceeding further.

These small steps are part of a larger principle of prudence. The idea is to take sensible initial precautions and to proceed in a way that limits the possibility of disastrously bad outcomes and, when they do happen, limits the damage done, all while allowing the time needed to learn from mistakes. It is important that this process start early, rather than arrive as an intervention halfway through a project, because technical systems are substantially more flexible in their infancy. For example, it would have been much easier to design precautions against explosions into the warehouse that exploded in Tianjin than to make the equivalent modifications after it was constructed (I would not be surprised to hear that authorities knew of a way to prevent the explosion but did not act because of the expense and difficulty). Other initial precautions might include smaller, more dispersed facilities, so that accidents can be more easily contained, and so that even those that cannot be contained lack the same potential for catastrophe. Another would be to locate facilities away from dense populations, important water supplies, fragile ecosystems, important food supplies, and so on. A gradual scaling up of technology is also prudent. Whatever type of storage system and whatever chemicals exploded in Tianjin, it is a safe bet that the same storage scheme holds the same chemicals elsewhere in China, and probably in other nations as well. All of those facilities were built before this explosion, and therefore without the knowledge that they could explode so catastrophically. It would have been more prudent to start with one small instance of such a storage scheme before replicating it. This also creates a better environment for learning, since the initial trials could be run with a variety of competing storage schemes before the safest is selected for wider use.

But learning from experience is not something that just happens if one proceeds slowly and cautiously enough. Incremental technical development requires active preparation for learning. To learn from our industrial mistakes, industrial activities must be monitored, and the monitors must be well funded. Whatever your predilection for government intervention, we can all agree that it is impossible to learn about something when no attention is being paid to it; data must be gathered before anything can be learned from it. Learning is also a useless endeavor if there is no incentive to correct errors once we have learned how. Often, lobbying and other incentives work in reverse: they reward inaction even when we know how to prevent or ameliorate certain industrial risks. Instead, incentives should promote correcting errors rather than covering them up. Finally, the whole point of learning is to prevent unnecessary suffering, but as errors are made, some suffering is inevitable. An essential part of learning is therefore easing the consequences of errors. This could take the form of adequate funds for victim compensation, or other effective humanitarian aid for those harmed by errors that occur in the learning process. Action should also be taken to ensure that certain groups do not disproportionately bear the cost of those errors (this will likely require significant improvements to current democratic governance, which I will not delve into here).

One of the more unsettling, but also hopeful, consequences of viewing industrial disasters through this lens is that they are not very different from other technological endeavors. These disasters are not particularly special: they are extreme versions of the everyday workings of technological society. Other consequences of technological society are often not discrete events the way disasters are, but they are no less devastating: mass poverty, war, deforestation, global warming, the hole in the ozone layer, mass extinction, and others. The hopeful part is that if these phenomena are part of technological society in the same way industrial disasters are, then an incremental approach can work here too. Imagine if the various applications of fossil fuels had been subject to such an incremental learning approach. Technological innovation would certainly have advanced much more slowly, but global warming would not now be a problem that potentially threatens the very survival of our species (and places some groups in far more precarious positions than others, I might add). So while many of you are likely worried about a reduced pace of innovation, I want to leave you with one final thought. Perhaps slowing the pace of innovation is the only way to save ourselves from being outpaced by our own technology.