
Plenary sessions

Pierre TINARD

Earthquakes: how can economic resilience be guaranteed?

Globally, six of the ten deadliest catastrophic events of the past fifty years were earthquakes, from the 1976 earthquake in China (at least 255,000 dead) to the 2010 earthquake in Haiti (222,000 dead). Over the same period, economic losses have grown steadily, from $2 billion in 1970 to more than $330 billion in 2017.

The population explosion of the past 50 years, together with steadily rising standards of living, equipment and services in most countries, means that, for an equivalent hazard, major events have an ever-growing impact on the economies of exposed territories.

The challenge for modern societies is undoubtedly to ensure rapid resilience after a catastrophe, and in particular after earthquakes, whose consequences are often felt for months or even years. This resilience rests essentially on the continuity of living spaces and economic activity, and requires adequate financial coverage. Such coverage can take effect upstream, by funding prevention measures aimed at reducing vulnerability, as well as downstream, by compensating the people and businesses affected.

States therefore face a choice among responses ranging from 100% public to 100% private, insurance-based schemes, or a mixed system built on a public-private partnership, as in France, where insurance has coexisted since 1982 with the natural catastrophe compensation regime. Whatever the chosen response, the risk must be quantified in order to set aside the resources needed for both prevention and remediation.

Recent events show how widely situations vary with context: the challenge of remediation cannot be approached in the same way in Haiti, where the 2010 earthquake killed 220,000 people for an estimated economic cost of $11.5 billion, of which only $112 million (1%) was insured, as in Christchurch, New Zealand, where the 2011 earthquake caused 185 deaths and $23 billion in economic losses, of which $19.1 billion (83%) was borne by the insurance sector.

Beyond this contextualization, the tools available for grasping the financial dimension of seismic risk will be presented, in particular the stochastic models on which insurers and regulators in OECD countries rely. These tools are meant to incorporate the state of the art of scientific knowledge and synthesize it into a "turnkey" form for rapidly estimating the exposure of territories, which makes them practical and widely used. They are nevertheless sometimes marred by simplifications and an overly macroscopic view. Recent earthquakes have shown that a finer-scale approach is needed, introducing new research needs, on both hazard and building vulnerability, that the scientific community must address. In any case, these tools are used to size the financial effort required after a catastrophe, and they underpin, in many States, the public/private systems put in place to ensure effective resilience, of which an overview will be given.
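The core idea behind such stochastic models can be illustrated with a minimal sketch: a synthetic catalog of earthquake scenarios, each with an annual occurrence rate and a loss estimate for the exposed portfolio, is sampled year by year to build a distribution of annual losses. All names and figures below are hypothetical and purely illustrative; real catastrophe models derive them from dedicated hazard, exposure and vulnerability modules.

```python
import random

# Hypothetical event catalog: (annual occurrence rate, loss in million $).
catalog = [
    (0.10,    50.0),   # frequent, moderate event
    (0.02,   400.0),   # rare, damaging event
    (0.002, 3000.0),   # very rare, extreme event
]

def simulate_annual_loss(rng):
    """Sample one synthetic year of losses from the catalog."""
    loss = 0.0
    for rate, event_loss in catalog:
        # For small rates, a Bernoulli draw approximates a Poisson count.
        if rng.random() < rate:
            loss += event_loss
    return loss

def average_annual_loss(n_years=100_000, seed=42):
    """Monte Carlo estimate of the expected annual loss over the portfolio."""
    rng = random.Random(seed)
    return sum(simulate_annual_loss(rng) for _ in range(n_years)) / n_years

if __name__ == "__main__":
    # Analytical expectation: sum(rate * loss) = 5 + 8 + 6 = 19 M$ per year.
    print(f"Simulated average annual loss: {average_annual_loss():.1f} M$")
```

The simulated distribution, not just its mean, is what matters in practice: its tail quantiles are what insurers and regulators use to size reserves and reinsurance.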

Another way of addressing seismic risk is to reduce its consequences upstream, notably through prevention. Among the range of prevention measures, seismic building codes are an effective tool for reducing costs, and not only human ones, as demonstrated by the 2017 earthquakes in Mexico, for which the cost/benefit analysis is beyond dispute. Interactions exist between post-catastrophe financing through insurance and prevention measures. The example of France will be discussed with regard to the Fonds de Prévention des Risques Naturels Majeurs (Major Natural Hazards Prevention Fund) and the measures devoted to strengthening existing buildings or funding the Plans de Prévention des Risques (Risk Prevention Plans).


Roberto SCOTTA

Coupling seismic retrofitting with energy improvement: feedback from Italy

Over the past century, Italy has experienced seismic events of medium to high intensity every five years on average; to mention only the most recent: L’Aquila 2009, Emilia 2012 and Central Italy 2016-2017. They had dramatic effects in terms of casualties and economic impact, owing to the serious seismic inadequacy of a building stock built mainly before the 1960s-70s – during the economic boom and industrialization that followed the Second World War – in the absence of a modern seismic hazard map of the territory, when earthquake engineering, and consequently seismic codes, were in their embryonic phase. Further costs – direct and indirect – caused by seismic events are not affordable for Italian society and represent a burden for the entire European Community.

The same building stock, even parts built only a few years ago, is highly energy-consuming and carbon-dioxide-emitting, thus contributing to climate-change-related hazards. It has become completely inadequate given the urgent need to ensure sustainability: in Europe, Roadmap 2050 envisions greenhouse-gas emissions being cut by 80-95% relative to 1990 levels.
Wholesale replacement of the building stock is obviously impossible, for economic and sustainability reasons and because of the need to preserve architectural and historical heritage – a particularly sensitive matter in Italy. In this context, renovation actions addressing both energy and seismic issues are strongly needed.

This complex situation – apparently without a solution – has fostered, on one side, legislative intervention by the State and, on the other, applied scientific research into methods and technologies for the “integrated retrofitting” of existing buildings.
Specific decree-laws – the so-called sisma-bonus and eco-bonus decrees – were issued to encourage owners of existing buildings to intervene themselves, through substantial tax deductions for works aimed at seismic and/or energy upgrading. The deduction can reach 85% of total costs for interventions on apartment blocks, spread over 5 years. The intention of the sisma-bonus decree is to shift from a post-earthquake repair approach to a damage-prevention strategy. The main tool for applying the sisma-bonus is the seismic classification of buildings, i.e. an objective procedure for measuring the initial seismic safety conditions and the efficacy of seismic retrofit interventions. Seismic classification makes it possible to prioritize and allocate the economic resources made available by the sisma-bonus decree. In the longer term, the seismic class will also influence the market value of buildings – as energy classification already does – and could become essential for seismic insurance policies.
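As a purely numerical illustration of the deduction mechanism described above (only the 85% rate and the 5-year spread come from the text; the retrofit cost is a hypothetical figure):

```python
def sisma_bonus_deduction(total_cost, rate=0.85, years=5):
    """Total tax deduction and the constant annual installment it is
    spread over, for the deduction scheme described in the text."""
    total_deduction = total_cost * rate
    return total_deduction, total_deduction / years

# Hypothetical retrofit of an apartment block costing 200,000 EUR:
total, annual = sisma_bonus_deduction(200_000)
print(total, annual)  # 170000.0 deductible in all, 34000.0 per year
```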

The seismic classification introduced with the sisma-bonus is a simplified version – also termed “conventional”, and therefore usable in practice despite its intrinsic limitations – of the methodology developed at PEER, based on the Expected Annual Loss parameter. The essential concepts and practical uses of the Italian seismic classification procedure will be given in the presentation.
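The Expected Annual Loss (EAL) parameter behind the classification can be sketched as the area under the curve of loss ratio versus mean annual frequency of exceedance of a set of limit states. The limit states, frequencies and loss ratios below are hypothetical; the actual Italian procedure prescribes its own limit states, reconstruction-cost percentages and curve tails.

```python
def expected_annual_loss(points):
    """Trapezoidal integration of the loss-ratio vs. mean annual
    frequency of exceedance curve.  `points` is a list of
    (annual_frequency, loss_ratio) pairs, ordered from the most
    frequent / least damaging to the rarest / most damaging state."""
    eal = 0.0
    for (f1, l1), (f2, l2) in zip(points, points[1:]):
        eal += (f1 - f2) * (l1 + l2) / 2.0
    return eal

# Hypothetical limit states: (exceedance frequency per year,
# loss as a fraction of reconstruction cost).
states = [
    (1 / 50,  0.07),   # damage limitation
    (1 / 475, 0.50),   # life safety
    (1 / 975, 1.00),   # collapse
]
print(f"EAL = {expected_annual_loss(states):.4%} of reconstruction cost per year")
```

Buildings would then be binned into classes by EAL thresholds, so that a retrofit that lowers the EAL moves the building up one or more classes.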

On the other side, the engineering community is developing procedures and methods for the sustainable integrated retrofit of the building stock. The talk will illustrate the author’s work on understanding how strengthening interventions on timber floors affect the seismic performance of historical masonry buildings, as well as the theoretical development and experimental validation of an integrated retrofit methodology called the “seismic thermal coat”. Its efficacy in the context of the sisma-bonus and eco-bonus decrees will be demonstrated with practical case studies.


Luis RIVERA

Ten years of using W-phase for rapid analysis of large earthquakes: An unexpected application of long-period seismology

The research subjects of some earth scientists lend themselves naturally to immediate societal applications; examples are geothermal research, applied geodesy (GNSS) and, of course, geophysical prospecting (mining, oil, water, etc.). This is not at all common, however, in the solid-earth science community and, in particular, in seismology, where I personally conduct my research. Simply put, the longer the period of the seismic waves used, the lower the probability of an immediate application. For example, short-period surface waves are sometimes used for civil engineering or shallow-depth applications, while long-period normal-mode studies remain mostly “pure research”. What I would like to describe here is in fact a remarkable counter-example to this statement: the use of the W-phase for fast analysis of large earthquakes.

In 1960 and 1964 we had the two largest earthquakes of the twentieth century: Valdivia (Chile), May 1960, and Anchorage (Alaska), March 1964. The global instrumentation that recorded the long-period waves generated by these two events was very limited, and so are the models we have for them; this is particularly true for the 1960 event. No comparably large event occurred for the next forty years, until the Mw 9.2 Sumatra earthquake of December 26, 2004. Data quality and quantity, our understanding of seismic phenomena and computational capabilities had advanced enormously in the meantime. As a result, the 2004 event has been studied in great detail in thousands of publications, with very diverse kinds of data and techniques. A problem, however, was identified in its aftermath: seismologists were not capable of providing quick information about the event. More precisely, in 2004 it was necessary to wait several hours (in fact two days for this specific event) before having a robust evaluation of the size (magnitude) and the focal mechanism of such a large event. Large events are difficult to deal with because very long-period waves are necessary to obtain a complete view of the source, but such long-period waves are difficult to observe and analyze quickly.

Realizing this limitation was the starting point of our work on the W-phase. The W-phase is a very long period (~200-1000 s) seismic phase that was observationally identified after the 1992 Nicaragua earthquake (Kanamori, 1993). It is visible globally in the time interval between the P and the surface waves. However, it remained a rare and exotic curiosity for fifteen years, with only a couple of publications dedicated to it during that period. In 2008 H. Kanamori and I set out to test the potential of the W-phase for determining the focal mechanism and magnitude of major earthquakes (say, Mw >= 8). We quickly started to obtain very promising results. Z. Duputel joined us in 2009 to work on this subject for his PhD dissertation. Within a couple of years the algorithm was operating at the Pacific Tsunami Warning Center (PTWC, NOAA, Hawaii) and at the National Earthquake Information Center (NEIC, USGS). Nowadays it routinely operates in a number of regional, national and international agencies, providing fast and reliable information to authorities, scientists, civil defense, etc. At global scale, it delivers a moment tensor solution in less than half an hour for events with Mw as low as 6.0; at regional scale, the delay drops below 15 min and the minimum Mw to 5.5. This information can be used, for example, for tsunami modeling or for humanitarian and civil-defense decision making. On the other hand, several colleagues around the world also use the algorithm in “manual mode” to perform long-period source studies. The success of the practical application of such an exotic object as the W-phase came as a complete surprise to us. It is an encouraging example of the power of scientific research, and it prompts us to develop and cultivate interaction with applied scientific communities.
