Said Achchab (ENSIAS)
A hybrid deep network approach for predictive analysis of massive and incomplete insurance data
In this work we focus on machine learning methods in the context of massive and incomplete insurance data. We adopt a hybrid deep learning method for the segmentation, classification and mapping of customer profiles, in order to better understand customer behaviour with respect to existing insurance products and to optimize the management of disaster cover. We show in particular that the deep learning method gives more accurate results than classical neural networks. We illustrate the results on real data from an insurance company.
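As a rough illustration of the comparison described above, here is a minimal sketch, assuming entirely synthetic data and scikit-learn models (not the authors' architecture): incomplete records are imputed, then a deeper multi-layer network is compared against a single-hidden-layer "classical" network.

```python
# Minimal sketch (synthetic data, not the authors' model): impute incomplete
# records, then compare a deeper network against a shallow classical one.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                    # stand-in for policy features
y = (X[:, :5].sum(axis=1) + rng.normal(size=5000) > 0).astype(int)
X[rng.random(X.shape) < 0.15] = np.nan             # 15% of entries missing

X = SimpleImputer(strategy="median").fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

shallow = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
deep = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500, random_state=0)
for name, model in [("shallow", shallow), ("deep", deep)]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```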
Katrien Antonio (KUL)
Sparse modeling of risk factors in insurance analytics
Insurance companies use predictive models for a variety of analytic tasks, including pricing, marketing campaigns, claims handling, fraud detection and reserving. Typically, these predictive models use a selection of continuous, ordinal, nominal and spatial risk factors to differentiate risks. Such models should not only be competitive, but also interpretable by stakeholders (including the policyholder and the regulator) and easy to implement and maintain in a production environment. That is why the current actuarial literature focuses on generalized linear models where risk cells are constructed by binning risk factors up front, using ad hoc techniques or professional expertise. In the statistical literature, penalized regression is often used to encourage the selection and fusion of predictors in predictive modeling. Most penalization strategies work for data where predictors are of the same type, such as the LASSO for continuous variables and the Fused LASSO for ordered variables. We design an estimation strategy for generalized linear models which includes variable selection and the binning of risk factors through L1-type penalties. We consider the joint presence of different types of covariates and a specific penalty for each type of predictor. Our estimation procedure builds on the theory of proximal operators and is computationally efficient, since it splits the overall optimization problem into easier-to-solve sub-problems, one per predictor and its associated penalty. As such, we are able to simultaneously select, estimate and group, in a statistically sound way, any combination of continuous, ordinal, nominal and spatial risk factors.
We illustrate the approach with simulation studies, an analysis of Munich rent data, and a case study on motor insurance pricing.
This presentation will cover ongoing work by Sander Devriendt, Katrien Antonio, Edward (Jed) Frees and Roel Verbelen.
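To make the proximal-operator idea concrete, here is a minimal sketch for the simplest special case, an L1 (LASSO) penalty handled via its proximal operator (soft-thresholding) inside a proximal-gradient loop; the authors' procedure handles several penalty types jointly within a GLM, which this toy example does not attempt.

```python
# Minimal sketch: proximal gradient (ISTA) for the Lasso, illustrating the
# "one proximal operator per penalty" idea; synthetic data throughout.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrinks entries towards zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + lam * ||b||_1 by proximal gradient."""
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
true_beta = np.array([3.0, -2.0] + [0.0] * 8)   # sparse ground truth
y = X @ true_beta + rng.normal(size=200)
print(np.round(ista_lasso(X, y, lam=0.2), 2))   # noise coefficients shrink to zero
```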
Bart Baesens (KUL)
Credit Risk Analytics: Basel versus IFRS 9
Credit risk modeling is undoubtedly among the most crucial and topical issues in the field of financial risk management. In this presentation, we elaborate on some key issues and challenges that arise when building credit risk models in a Basel versus IFRS 9 context. We start by outlining a three-level credit risk model architecture: level 0 (data), level 1 (model) and level 2 (ratings and calibration). From there onwards, the following topics will be addressed:
• PD/LGD/EAD performance benchmarks
• Basel versus IFRS 9 perspective
• Model discrimination versus calibration
• Model validation
The speaker will extensively comment on both his industry and research experience and clarify the various concepts with real-life examples.
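A minimal sketch of the discrimination-versus-calibration distinction mentioned above, assuming synthetic data and a plain logistic PD model (not material from the talk): AUC measures ranking power, while the Brier score and a bucket-level comparison of predicted versus observed default rates probe calibration.

```python
# Minimal sketch (synthetic data): scoring a PD model on discrimination
# (AUC) and calibration (Brier score, bucket-level predicted vs observed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pd_hat = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print("AUC (discrimination):", round(roc_auc_score(y_te, pd_hat), 3))
print("Brier (calibration): ", round(brier_score_loss(y_te, pd_hat), 3))

# Bucket-level calibration: mean predicted PD vs observed default rate.
buckets = np.digitize(pd_hat, np.quantile(pd_hat, [0.25, 0.5, 0.75]))
for b in range(4):
    m = buckets == b
    print(f"bucket {b}: predicted {pd_hat[m].mean():.3f}, observed {y_te[m].mean():.3f}")
```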
Enrico Biffis (Imperial College London)
Satellite Data and Machine Learning for Weather Risk Management and Food Security
The increase in frequency and severity of extreme weather events poses challenges for the agricultural sector in developing economies and for food security globally. In this paper, we demonstrate how machine learning can be used to mine satellite data and identify pixel-level optimal weather indices that can be used to inform the design of risk transfers and the quantification of the benefits of adopting resilient production technology. We implement the model to study maize production in Mozambique, and show how the approach can be used to produce country-wide risk profiles resulting from the aggregation of local, heterogeneous exposures to rainfall and excess temperature. We then develop a framework to quantify the economic gains from technology adoption by using insurance costs as the relevant metric, where insurance is broadly understood as the transfer of weather-driven crop losses to a dedicated facility. We consider the case of irrigation in detail, estimating a reduction in insurance costs of at least 30%, which is robust to different configurations of the model. The approach offers a robust framework to understand the costs versus benefits of investment in irrigation infrastructure, but could clearly be used to explore in detail the benefits of more advanced input packages, allowing for example for different crop varieties, sowing dates, or fertilizers. (This is joint work by Enrico Biffis and Erik Chavez.)
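A minimal sketch of the index-selection idea, under entirely assumed synthetic data (dekadal rainfall and yields; the paper's actual procedure is far richer): for each pixel, choose the rainfall-accumulation window whose index correlates most strongly with historical yields.

```python
# Minimal sketch (synthetic data): per-pixel selection of the rainfall
# accumulation window most predictive of yield.
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_years, n_dekads = 50, 20, 36            # 10-day rainfall periods
rain = rng.gamma(2.0, 10.0, size=(n_pixels, n_years, n_dekads))
yield_ = 0.05 * rain[:, :, 12:18].sum(axis=2) + rng.normal(0, 2, (n_pixels, n_years))

windows = [(s, s + w) for s in range(0, 30, 3) for w in (3, 6, 9)]
best = []
for p in range(n_pixels):
    corrs = [np.corrcoef(rain[p, :, a:b].sum(axis=1), yield_[p])[0, 1]
             for a, b in windows]
    best.append(windows[int(np.argmax(corrs))])
print("most frequently selected window:", max(set(best), key=best.count))
```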
Sébastien Conort (BNP Paribas Cardif)
Discovery of Deep Learning - Illustration on a Natural Language Processing use case at BNP Paribas Cardif
First, we will briefly recall what Deep Learning is, why it is so popular right now in the machine learning community, and why it is accessible to passionate data scientists in insurance companies such as BNP Paribas Cardif. Second, we will present results obtained at BNP Paribas Cardif's Datalab on a Natural Language Processing use case. The use case consisted of identifying missing pieces of information in the beneficiary clauses of some old savings contracts, for which beneficiary clauses are stored as unstructured free text in our databases. This use case helped solve a regulatory issue for BNP Paribas Cardif.
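A toy sketch of the task framing, with invented example clauses and labels (Cardif's actual system is deep-learning based and trained on real data): flagging clauses that lack required information, cast as text classification.

```python
# Toy sketch (invented clauses and labels): flag beneficiary clauses with
# missing information via bag-of-words text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "my spouse Jane Doe born 1960, otherwise my children",  # complete
    "my children in equal shares",                          # no fallback person
    "the bearer of this contract",                          # vague beneficiary
    "my husband John Smith, failing him my legal heirs",    # complete
]
incomplete = [0, 1, 1, 0]                                   # toy labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, incomplete)
print(model.predict(["my children"]))                       # likely flagged
```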
Silvia Figini (University of Pavia)
Credit data science risk models for SMEs
This paper describes novel approaches to predicting default for SMEs. Ensemble approaches and novel data science risk models are tested on a real data set provided by a financial institution. The out-of-sample measures obtained outperform those of standard approaches proposed in the literature.
In our paper we introduce a novel methodological idea for model selection based on distances among predictive distributions, thus supporting financial institutions in decision making.
This is joint work of Silvia Figini and Pierpaolo Uberti.
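As a minimal sketch of comparing models through their predictive distributions (the paper's distance-based criterion differs in detail; data here are synthetic): the Kolmogorov-Smirnov distance between the PD distributions two models assign to the same borrowers.

```python
# Minimal sketch (synthetic data): distance between two models'
# predictive default-probability distributions.
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=3)
pd_lr = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
pd_rf = RandomForestClassifier(random_state=3).fit(X, y).predict_proba(X)[:, 1]

res = ks_2samp(pd_lr, pd_rf)
print("KS distance between predictive distributions:", round(res.statistic, 3))
```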
Guojun Gan (University of Connecticut)
Valuation of Large Variable Annuity Portfolios: Challenges and Potential Solutions
In the past decade, the rapid growth of variable annuities has posed great challenges to insurance companies, especially when it comes to valuing the complex guarantees embedded in these products. The financial risks associated with guarantees embedded in variable annuities cannot be adequately addressed by traditional actuarial approaches. In practice, dynamic hedging is usually adopted by insurers, and the hedging is done on the whole portfolio of VA contracts. Since the guarantees embedded in VA contracts sold by insurance companies are complex, insurers resort to Monte Carlo simulation to calculate the Greeks required for dynamic hedging, but this method is extremely time-consuming when applied to a large portfolio of VA contracts. In this talk, I will discuss two major computational problems associated with dynamic hedging and present some potential solutions based on statistical learning to address these problems.
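One statistical-learning idea often proposed for this problem is metamodeling, sketched minimally below under assumed synthetic data: run the expensive Monte Carlo valuation only on representative contracts, then fit a regression to extrapolate to the full portfolio. The valuation function here is a cheap stand-in, not a real guarantee pricer.

```python
# Minimal metamodeling sketch (synthetic portfolio, stand-in valuation):
# value representatives by "Monte Carlo", regress, predict the rest.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
portfolio = rng.uniform(size=(10000, 5))            # age, account value, ... (scaled)

def monte_carlo_value(contract):
    """Stand-in for a slow simulation-based guarantee valuation."""
    return contract @ np.array([1.0, 2.0, -1.5, 0.5, 3.0]) + 0.1 * rng.normal()

km = KMeans(n_clusters=200, n_init=5, random_state=4).fit(portfolio)
reps = km.cluster_centers_                          # representative contracts
values = np.array([monte_carlo_value(c) for c in reps])

meta = RandomForestRegressor(random_state=4).fit(reps, values)
print("estimated portfolio value:", meta.predict(portfolio).sum())
```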
Montserrat Guillen (Barcelona University)
Telematics and the natural evolution of pricing in motor insurance
Telematics is a revolution in data analytics when applied to motor insurance, but the transition to fully data-driven dynamic pricing is challenging. We present methods to quantify risk, with applications to usage-based motor insurance. We illustrate these methods by modelling the time to first crash, showing that it is shorter for drivers with less experience. The risk of accident increases with excessive speed, but the effect is stronger for men than for women among the more experienced drivers. Additionally, night-time driving reduces the time to first accident for women but not for men. Gender differences in the risk of accident are mainly attributable to the fact that men drive more often than women. We explore alternative methods to include mileage in the quantification of risk, as well as the way exposure to risk is handled in generalized linear models. We also investigate changes in driving patterns after an accident, and conclude that drivers who speed more and have accidents with bodily injuries reduce their proportion of speed violations after the accident. We show how to adapt existing models for pricing per kilometre driven, with a correction based on telematics information. We also introduce ideas about other aspects of optimal pricing in motor insurance by looking at the possibility of customer lapse.
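A minimal sketch of one way to model time to first crash with telematics covariates, assuming synthetic data and the lifelines package (the authors' models and dataset differ): a Cox proportional hazards fit with right-censoring at the end of the observation window.

```python
# Minimal sketch (synthetic data): Cox model for time to first crash
# with telematics covariates, right-censored at 3 years.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 3000
df = pd.DataFrame({
    "pct_speeding": rng.beta(2, 8, n),              # share of km over the limit
    "pct_night": rng.beta(2, 10, n),                # share of night-time km
    "experience_yrs": rng.integers(0, 30, n),
})
hazard = np.exp(2 * df.pct_speeding + 1 * df.pct_night - 0.03 * df.experience_yrs)
df["time"] = rng.exponential(1 / hazard)            # latent time to first crash
df["crash"] = (df["time"] < 3.0).astype(int)        # observed within the window?
df.loc[df.crash == 0, "time"] = 3.0                 # right-censor at 3 years

cph = CoxPHFitter().fit(df, duration_col="time", event_col="crash")
cph.print_summary()
```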
Gareth Peters (UCLondon)
Feature Extraction Methods and Stochastic Mortality Modelling
In this presentation I will review recent work my co-authors and I have developed in the paper:
" Stochastic Period and Cohort Effect State-Space Mortality Models Incorporating Demographic Factors via Probabilistic Robust Principal Components".
This work considers a multi-factor extension of the family of Lee-Carter stochastic mortality models. We build upon the time, period and cohort stochastic model structure and extend it to include exogenous observable demographic features that can be used as additional factors to improve model fit and forecasting accuracy. We develop a dimension-reduction feature extraction framework which (a) employs projection-based techniques of dimensionality reduction; in doing this we also (b) develop a robust feature extraction framework that is amenable to different structures of demographic data; (c) analyse demographic data sets in terms of their patterns of missingness and the impact of such missingness on the feature extraction; (d) introduce a class of multi-factor stochastic mortality models incorporating time, period, cohort and demographic features, which we develop within a Bayesian state-space estimation framework; and finally (e) develop an efficient combined Markov chain and filtering framework for sampling the posterior and forecasting.
We undertake a detailed case study on Human Mortality Database demographic data from European countries, using the extracted features to better explain the term structure of mortality in the UK over time for male and female populations, compared to a pure Lee-Carter stochastic mortality model. This demonstrates that our feature extraction framework and the consequent multi-factor mortality model improve both in-sample fit and, importantly, out-of-sample mortality forecasts by a non-trivial margin.
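For reference, here is a minimal sketch of the baseline the paper extends, on synthetic data: the classical Lee-Carter fit via SVD of centred log-mortality rates (the paper replaces this with probabilistic robust principal components inside a Bayesian state-space model).

```python
# Minimal Lee-Carter sketch (synthetic data): age effect a_x, age loading
# b_x and period effect k_t from the first singular pair.
import numpy as np

rng = np.random.default_rng(6)
ages, years = 60, 40
k_true = np.cumsum(rng.normal(-0.3, 0.1, years))    # drifting period effect
b_true = np.linspace(0.005, 0.03, ages)
log_m = -6 + 0.08 * np.arange(ages)[:, None] + np.outer(b_true, k_true)
log_m += rng.normal(0, 0.02, (ages, years))

a_x = log_m.mean(axis=1)                            # age pattern alpha_x
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x, k_t = U[:, 0], s[0] * Vt[0]                    # first singular pair
print("variance explained by LC factor:", round(s[0]**2 / (s**2).sum(), 3))
```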
Christian Robert (UCBL)
Non-parametric individual claim reserving
Accurate loss reserves are an important item in the financial statements of an insurance company and are mostly evaluated by macro-level models based on aggregate data in a run-off triangle. In recent years, a small body of literature proposing parametric reserving models based on underlying individual claims data has emerged. In this paper, we introduce non-parametric tools (mostly machine learning) to estimate outstanding and IBNR liabilities using covariates available for each policy and policyholder, which may be informative about claim frequency and severity as well as payment behaviour. This exercise is quite intricate and new since the target variable (claim severity) is right-censored most of the time. The performance of our approach is evaluated by comparing the predictive values of the reserve estimates with their true values on a large number of simulated data sets. We also compare our individual approach with classical aggregate methods such as Mack's Chain Ladder with respect to the bias and the volatility of the estimates.
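To illustrate one way the right-censoring problem can be handled (not necessarily the paper's estimator; data and censoring law here are fully synthetic): weight the fully observed claims by the inverse probability of being uncensored (IPCW) before fitting a regression on policy covariates.

```python
# Minimal sketch (synthetic data): IPCW-weighted random forest for
# right-censored claim severities.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 5000
X = rng.normal(size=(n, 4))                         # policy/claim covariates
severity = np.exp(1 + 0.5 * X[:, 0] + rng.normal(0, 0.5, n))
settle = rng.exponential(2.0, n)                    # time until claim closes
censor = rng.uniform(0, 6.0, n)                     # time until evaluation date
observed = settle <= censor                         # severity known only if closed

# IPCW weights: with censoring ~ U(0, 6), P(uncensored | settle=t) = 1 - t/6.
p_uncensored = np.clip(1 - settle / 6.0, 0.05, 1.0)
model = RandomForestRegressor(random_state=7)
model.fit(X[observed], severity[observed], sample_weight=1 / p_uncensored[observed])
print("mean predicted severity:", round(model.predict(X).mean(), 2))
```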
Sébastien de Valeriola (UCL)
Decision trees & random forest algorithms in credit risk assessment
An increasing number of bankers and insurers now embed machine learning techniques in their operational processes. In this talk, we review the deployment of such a technique in a real-life company. More specifically, we present the implementation of a tree-based loss given default model. We highlight the advantages and disadvantages of these methods when considering their practical use in the industry, and show some of the issues we faced in the course of this implementation.
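A minimal sketch of the kind of model discussed, on entirely synthetic data (not the company's implementation): a shallow, interpretable decision tree for loss given default, with a random forest as the accuracy benchmark, illustrating the interpretability/accuracy trade-off the talk addresses.

```python
# Minimal sketch (synthetic data): tree-based LGD models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(8)
n = 4000
X = np.column_stack([
    rng.uniform(0, 2, n),                           # loan-to-value
    rng.integers(0, 2, n),                          # collateral flag
    rng.uniform(0, 1, n),                           # seniority score
])
lgd = np.clip(0.3 + 0.3 * X[:, 0] - 0.25 * X[:, 1] - 0.2 * X[:, 2]
              + rng.normal(0, 0.1, n), 0, 1)
X_tr, X_te, y_tr, y_te = train_test_split(X, lgd, random_state=8)

tree = DecisionTreeRegressor(max_depth=3, random_state=8).fit(X_tr, y_tr)
forest = RandomForestRegressor(random_state=8).fit(X_tr, y_tr)
print("tree   R^2:", round(tree.score(X_te, y_te), 3))
print("forest R^2:", round(forest.score(X_te, y_te), 3))
```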