Publications on H.A.L.

In recent years, the automotive engineering industry has been deeply influenced by the use of machine learning techniques for new design and innovation purposes. However, some specific engineering tasks, such as numerical optimization studies, still require the development of suitable high-performance machine learning approaches involving parametrized Finite Element (FE) structural dynamics simulation data. Weight reduction of a car body is a crucial matter that improves both the environmental impact and the cost of the product. The current optimization process at Renault SA uses numerical Design of Experiments (DOE) to find the right thicknesses and materials for each part of the vehicle that guarantee a reduced weight while keeping a good behavior of the car body, as assessed by criteria or sensors on the body (maximum displacements, upper bounds on instantaneous acceleration, etc.). The usual DOE methodology generally requires between 3 and 10 times the number of parameters of the study (which means, for a 30-parameter study, at least 90 simulations, each typically taking 10 h on a 140-core computer). Over the last two years, Renault's teams have striven to develop a disruptive methodology for conducting optimization studies. By 'disruptive', we mean a methodology that cuts the computational cost by several orders of magnitude. It is acknowledged that standard DOEs need a number of simulations at least proportional to the dimension of the parameter space, generally leading to hundreds of fine simulations for real applications. Comparatively, a disruptive method should require only about 10 fine evaluations. This can be achieved by combining massive knowledge extraction from FE crash simulation results with parallel high-performance computing (HPC). For instance, in the recent study presented by Assou et al. (A car crash reduced order model with random forest. In: 4th International workshop on reduced basis, POD and PGD model reduction techniques, MORTech 2017), it took 10 runs to find a solution of a 34-parameter problem that fulfils the specifications.

In order to improve this method, we must extract more knowledge from the simulation results (correlations, spatio-temporal features, explanatory variables) and process it to find efficient ways to describe the car crash dynamics and to link criteria/quantities of interest with some explanatory variables. One of the improvements made in recent months is the use of the so-called Empirical Interpolation Method (EIM, [Barrault et al.]) to identify the few time instants and spatial nodes of the FE mesh (referred to as magic points) that "explain" the behavior of the body during the crash, within a dimensionality reduction approach. The EIM replaces a former K-Means algorithm (Davies et al. in IEEE Trans Pattern Anal Mach Intell 1(2):224–227, 1979), which had to be run online for each ROM; the EIM computation is instead done offline, once and for all, for each simulation. This new method allows us to compute a ROM much faster and to reduce the number of features used in the regression step (~100). The nonlinear regression step is performed by a standard Random Forest (RF, [Breiman, Mach Learn 45:5–32, 2001]) algorithm. Another improvement of the method is the characterization of numerical features describing the shape of the body at the nodal scale. The orientations of the elements surrounding a mesh node must be taken into account to describe the behavior of the node during the crash. The current method integrates numerical features, computed from the orientation of the elements around each node, to explain the node's behavior. The paper is organized as follows: the introduction states the scientific and industrial context of the research. Then, the ReCUR method is detailed, and the recent improvements are highlighted. Results are presented and discussed before some concluding remarks on this piece of work.
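The magic-points idea can be illustrated with a DEIM-style greedy selection on a synthetic snapshot matrix. This is a minimal sketch under assumed toy data: the `magic_points` helper and the sine snapshots are hypothetical illustrations, not Renault's actual pipeline or data.

```python
import numpy as np

def magic_points(modes):
    """Greedy DEIM-style selection of interpolation indices ('magic points'):
    each new point is taken where the current mode is worst interpolated
    using the previously selected modes and points."""
    points = [int(np.argmax(np.abs(modes[:, 0])))]
    for l in range(1, modes.shape[1]):
        # Interpolation coefficients of mode l on the current magic points
        c = np.linalg.solve(modes[points, :l], modes[points, l])
        r = modes[:, l] - modes[:, :l] @ c        # interpolation residual
        points.append(int(np.argmax(np.abs(r))))
    return points

# Toy snapshot matrix: 5 'simulations' sampled on 200 mesh nodes
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(5)])
modes = np.linalg.svd(snapshots, full_matrices=False)[0]
pts = magic_points(modes)
```

The selected indices could then serve as the low-dimensional feature set fed to a regression model such as a random forest.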

Etienne Gstalter

This paper focuses on the low-dimensional representation of multivariate functions. We study a recursive POD representation, based on the use of the power iteration algorithm to recursively expand the modes retained in the previous step. We obtain general error estimates for the truncated expansion, and prove that the recursive POD representation provides a quasi-optimal approximation in the $L^2$ norm. We also prove an exponential rate of convergence when applied to the solution of the reaction-diffusion partial differential equation. Relevant numerical experiments show that the recursive POD is computationally more accurate than the Proper Generalized Decomposition for multivariate functions. We also recover the theoretical exponential convergence rate for the solution of the reaction-diffusion equation.
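The power-iteration building block of such a recursive POD can be sketched as follows. This is a toy illustration: the matrix `A` and the deflation step are assumed for demonstration, not taken from the paper.

```python
import numpy as np

def dominant_mode(A, n_iter=500, seed=0):
    """Extract the dominant singular triplet of a snapshot matrix by
    power iteration on A^T A, the elementary step of a recursive POD."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    for _ in range(n_iter):
        v = A.T @ (A @ v)             # one power-iteration step
        v /= np.linalg.norm(v)
    sigma = np.linalg.norm(A @ v)     # dominant singular value
    u = (A @ v) / sigma               # corresponding left mode
    return u, sigma, v

# Recursive use: peel off the dominant mode, then recurse on the remainder
A = np.outer(np.arange(1.0, 6.0), np.ones(4)) + 0.1 * np.eye(5, 4)
u, s, v = dominant_mode(A)
A_rest = A - s * np.outer(u, v)       # deflated matrix for the next level
```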

M. Azaïez

The problem of estimating the spatio-functional expectile regression for a given spatial mixing structure $(X_i, Y_i) \in \mathcal{F} \times \mathbb{R}$, $i \in \mathbb{Z}^N$, $N \ge 1$, where $\mathcal{F}$ is a metric space, is investigated. We propose an M-estimation procedure to construct the Spatial Local Linear (SLL) estimator of the expectile regression function. The main contribution of this study is the establishment of the asymptotic properties of the SLL expectile regression estimator. Precisely, we establish the almost-complete convergence with rate. This result is proven under some mild conditions on the model in the mixing framework. The implementation of the SLL estimator is evaluated through an empirical investigation. An application to COVID-19 data is carried out, highlighting the substantial superiority of the SLL-expectile over the SLL-quantile in risk exploration.
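For a scalar sample, the expectile solves an asymmetric least-squares problem. Below is a minimal sketch; the IRLS helper is an assumed illustration of the expectile notion itself, not the paper's SLL estimator.

```python
import numpy as np

def expectile(y, tau, n_iter=200):
    """Sample tau-expectile via iteratively reweighted least squares:
    the expectile is the weighted mean with weight tau on observations
    above it and (1 - tau) on observations below it."""
    e = float(np.mean(y))
    for _ in range(n_iter):
        w = np.where(y > e, tau, 1.0 - tau)
        e = float(np.sum(w * y) / np.sum(w))
    return e

y = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
e_mid = expectile(y, 0.5)     # the 0.5-expectile coincides with the mean
e_hi = expectile(y, 0.9)      # higher tau pulls the expectile upward
```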

Ali Laksaci

We explore an exchangeably weighted bootstrap of the general function-indexed empirical U-processes in the Markov setting, which is a natural higher-order generalization of the weighted bootstrap empirical processes. As a result of our findings, a considerable variety of bootstrap resampling strategies arise. This paper aims to provide theoretical justifications for the exchangeably weighted bootstrap consistency in the Markov setup. General structural conditions on the classes of functions (possibly unbounded) and the underlying distributions are required to establish our results. This paper provides the first general theoretical study of the bootstrap of the empirical U-processes in the Markov setting. Potential applications include the symmetry test, Kendall’s tau and the test of independence.
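One concrete instance of an exchangeably weighted bootstrap for an order-2 U-statistic is sketched below, with multinomial weights and the Gini mean-difference kernel; both choices are assumptions for illustration, and the paper's Markov setting is not reproduced.

```python
import numpy as np

def u_stat(x, h):
    """Order-2 U-statistic with kernel h."""
    n = len(x)
    s = sum(h(x[i], x[j]) for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

def weighted_boot(x, h, n_boot=200, seed=0):
    """Exchangeably weighted bootstrap: multinomial weights reweight
    each pair (i, j) by w_i * w_j (one simple weighting scheme)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        w = rng.multinomial(n, np.ones(n) / n) / n   # exchangeable weights
        num = sum(w[i] * w[j] * h(x[i], x[j])
                  for i in range(n) for j in range(i + 1, n))
        den = sum(w[i] * w[j] for i in range(n) for j in range(i + 1, n))
        reps.append(num / den)
    return np.array(reps)

x = np.linspace(0.0, 1.0, 15)
theta = u_stat(x, lambda a, b: abs(a - b))           # Gini mean difference
boot = weighted_boot(x, lambda a, b: abs(a - b))     # bootstrap replicates
```

The spread of `boot` around `theta` is what the bootstrap consistency results justify using for inference.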

Inass Soukarieh

U-statistics are a fundamental class of statistics arising in the modeling of quantities of interest defined by responses from multiple subjects. They generalize the empirical mean of a random variable X to sums over every k-tuple of observations of X. This paper examines a setting for nonparametric statistical curve estimation based on an infinite-dimensional covariate, including Stute's estimator as a special case. In this functional context, the class of "delta sequence estimators" is defined and discussed; it includes both the orthogonal series method and the histogram method. We establish the uniform almost-complete convergence, with rates, of these estimators under certain broad conditions. Moreover, in the same context, we show the uniform almost-complete convergence of the nonparametric inverse probability of censoring weighted (I.P.C.W.) estimators of the regression function under random censorship, which is of interest in its own right. Potential applications include discrimination problems, metric learning and time series prediction from a continuous set of past values.
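As a concrete example, Kendall's tau is an order-2 U-statistic with kernel sign((x_i − x_j)(y_i − y_j)); a minimal sketch on assumed toy data:

```python
import numpy as np
from itertools import combinations

def kendall_tau_u(pairs):
    """Kendall's tau as an order-2 U-statistic: the average of
    sign((x_i - x_j) * (y_i - y_j)) over all pairs of observations."""
    vals = [np.sign((xi - xj) * (yi - yj))
            for (xi, yi), (xj, yj) in combinations(pairs, 2)]
    return float(np.mean(vals))

data = [(1, 1), (2, 3), (3, 2), (4, 4)]
tau = kendall_tau_u(data)    # 5 concordant pairs, 1 discordant -> 4/6
```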

Salim Bouzebda

Stute introduced the so-called conditional U-statistics, generalizing the Nadaraya–Watson estimates of the regression function, and demonstrated their pointwise consistency and asymptotic normality. In this paper, we extend these results to a more abstract setting. We develop an asymptotic theory of conditional U-statistics for locally stationary random fields $\{X_{s, A_n} : s \in R_n\}$ observed at irregularly spaced locations in $R_n = [0, A_n]^d$, a subset of $\mathbb{R}^d$. We employ a stochastic sampling scheme that may generate irregularly spaced sampling sites in a flexible manner and includes both the pure and the mixed increasing-domain frameworks. We specifically examine the rate of strong uniform convergence and the weak convergence of conditional U-processes when the explanatory variable is functional. We examine the weak convergence when the class of functions is either bounded or unbounded and satisfies specific moment conditions. These results are established under fairly general structural conditions on the classes of functions and the underlying models. The theoretical results developed in this paper are (or will be) essential building blocks for several future breakthroughs in functional data analysis.

Salim Bouzebda

In this study, we look at wavelet bases for the nonparametric estimation of density and regression functions for continuous functional stationary processes in Hilbert space. The mean integrated squared error over a small subset is established. We employ a martingale approach to obtain the asymptotic properties of these wavelet estimators. These findings are established under rather broad assumptions: all we assume about the data is ergodicity. In this paper, mean integrated squared error results previously obtained in the independent or mixing settings are generalized to the ergodic setting. The theoretical results presented in this study are (or will be) valuable resources for various cutting-edge functional data analysis applications, including conditional distribution, conditional quantile, entropy, and curve discrimination.
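At the coarsest level, a Haar scaling-function projection estimator coincides with a histogram. The following density-estimation sketch on [0, 1) uses assumed toy data and is only an illustration of the wavelet projection idea, not the paper's Hilbert-space setting.

```python
import numpy as np

def haar_density(x, data, j=3):
    """Haar scaling-function projection density estimator on [0, 1):
    f_hat(x) = sum_k c_k * phi_{j,k}(x), with
    phi_{j,k} = 2^{j/2} * 1_[k/2^j, (k+1)/2^j).
    Equivalent to a histogram with 2^j equal bins."""
    scale = 2.0 ** j
    k = np.floor(np.asarray(x) * scale).astype(int)       # cell of each x
    kd = np.floor(np.asarray(data) * scale).astype(int)   # cells of the data
    # Empirical scaling coefficients c_k = (1/n) sum_i phi_{j,k}(X_i)
    coeffs = np.array([np.mean(kd == m) for m in range(int(scale))]) * scale ** 0.5
    return coeffs[k] * scale ** 0.5

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, 2000)        # true density: 1 on [0, 1)
grid = np.linspace(0.0, 0.999, 100)
fhat = haar_density(grid, data)
```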

Sultana Didi

Establishing convergence rates for distribution-free functional data analysis is challenging and requires advanced tools from functional analysis. This paper brings several contributions to the existing functional data analysis literature. First, we prove that Kolmogorov entropy is a fundamental tool for characterizing the convergence rate of local linear estimation. Precisely, we use this tool to derive the uniform convergence rate of the local linear estimator of the conditional cumulative distribution function and of the local linear estimator of the conditional quantile function. Second, a central limit theorem for the proposed estimators is established. These results are proved under general assumptions allowing the incomplete functional time series case to be covered. Specifically, we model the correlation using the ergodicity assumption and assume that the response variable is subject to a missing-at-random mechanism. Finally, we conduct Monte Carlo simulations to assess the finite-sample performance of the proposed estimators.
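The local linear estimator of a conditional CDF can be sketched as a kernel-weighted least-squares fit of the indicator 1{Y ≤ y} on (X − x), the intercept being the estimate. The Gaussian kernel, bandwidth, and toy scalar data below are assumptions for illustration; the paper works with functional covariates.

```python
import numpy as np

def local_linear_cdf(x0, y0, X, Y, h=0.1):
    """Local linear estimator of the conditional CDF F(y0 | x0):
    weighted least-squares fit of 1{Y <= y0} on (X - x0) with
    Gaussian kernel weights; the fitted intercept is the estimate."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)          # kernel weights
    A = np.column_stack([np.ones_like(X), X - x0]) * np.sqrt(w)[:, None]
    b = (Y <= y0).astype(float) * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(beta[0])

X = np.linspace(0.0, 1.0, 201)
Y = X.copy()                       # degenerate toy model: Y = X
est = local_linear_cdf(0.5, 0.5, X, Y)   # should be close to 0.5
```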

Ouahiba Litimein

We are mainly concerned with kernel-type estimators for the moment-generating function in the present paper. More precisely, we establish the central limit theorem with the characterization of the bias and the variance for the nonparametric recursive kernel-type estimators for the moment-generating function under some mild conditions in the censored data setting. Finally, we investigate the methodology's performance for small samples through a short simulation study.
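A recursive (online) empirical estimator of the moment-generating function updates a running average with each new observation. This is a minimal sketch in the uncensored, kernel-free case; the censored-data machinery of the paper is not reproduced.

```python
import numpy as np

def recursive_mgf(xs, t):
    """Recursive empirical estimator of the moment-generating function
    M(t) = E[exp(t X)]: M_n = M_{n-1} + (exp(t X_n) - M_{n-1}) / n,
    so each new observation updates the running average in O(1)."""
    m = 0.0
    for n, x in enumerate(xs, start=1):
        m += (np.exp(t * x) - m) / n
    return m

rng = np.random.default_rng(0)
xs = rng.standard_normal(5000)
m_hat = recursive_mgf(xs, 0.5)    # true value for N(0,1) is exp(t^2 / 2)
```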

Salim Bouzebda

[...]

Stéphane Mottelet

By jointly constructing a random graph and an associated exploration process, we define the dynamics of a "parking process" on a class of uniform random graphs as a measure-valued Markov process representing the empirical degree distribution of unexplored nodes. We then establish a functional law of large numbers for this process as the number of vertices grows to infinity, allowing us to assess the jamming constant of the considered random graphs, i.e. the size of the maximal independent set discovered by the exploration algorithm. This technique, which can be applied to any uniform random graph with a given (possibly unbounded) degree distribution, can be seen as a generalization, in the space of measures, of the differential equation method introduced by Wormald.
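The exploration algorithm and the jamming constant can be illustrated by a greedy maximal independent set discovery. The Erdős–Rényi graph below is an assumed stand-in for the uniform random graphs of the paper.

```python
import random

def greedy_independent_set(n, p, seed=0):
    """Explore an Erdos-Renyi graph node by node, keeping a node iff none
    of its neighbours was kept before: the discovered set is a maximal
    independent set, and |set| / n estimates the jamming constant."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    kept = set()
    for v in range(n):                  # exploration order
        if not (adj[v] & kept):
            kept.add(v)
    return adj, kept

adj, kept = greedy_independent_set(200, 0.05)
jamming = len(kept) / 200               # empirical jamming constant
```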

Paola Bermolen

In this paper, we design a posteriori estimates for finite element approximations of nonlinear elliptic problems satisfying strong-monotonicity and Lipschitz-continuity properties. These estimates include, and build on, any iterative linearization method that satisfies a few clearly identified assumptions; this encompasses the Picard, Newton, and Zarantonello linearizations. The estimates give a guaranteed upper bound on an augmented energy difference (reliability with constant one), as well as a lower bound (efficiency up to a generic constant). We prove that for the Zarantonello linearization, this generic constant only depends on the space dimension, the mesh shape regularity, and possibly the approximation polynomial degree in four or more space dimensions, making the estimates robust with respect to the strength of the nonlinearity. For the other linearizations, there is only a computable dependence on the local variation of the linearization operators. We also derive similar estimates for the energy difference. Numerical experiments illustrate and validate the theoretical results, for both smooth and singular solutions.
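The Zarantonello linearization replaces the nonlinear problem by a sequence of linear ones with a fixed damping parameter. Below is a scalar sketch on the assumed model problem A(u) = u + u³, not the paper's finite element setting.

```python
import numpy as np

def zarantonello(f, gamma=13.0, n_iter=200):
    """Zarantonello linearization for the strongly monotone model problem
    A(u) := u + u^3 = f: each step only requires a *linear* solve,
    u_{k+1} = u_k + (f - A(u_k)) / gamma, and the iteration contracts
    when gamma is at least the Lipschitz constant of A on the iterates."""
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u = u + (f - (u + u ** 3)) / gamma
    return u

f = np.array([0.0, 2.0, 10.0])
u = zarantonello(f)                 # exact solutions: 0, 1, 2
```

The a posteriori estimates of the paper would, in this spirit, bound the error of `u` at each iterate by computable quantities.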

André Harnist

The Gaussian smoothed sliced Wasserstein distance has recently been introduced for comparing probability distributions while preserving privacy on the data. It has been shown to provide performance similar to its non-smoothed (non-private) counterpart. However, the computational and statistical properties of such a metric have not yet been well established. This work investigates the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian smoothed sliced divergences $G_\sigma \mathrm{SD}_p$. We first show that smoothing and slicing preserve the metric property and the weak topology. To study the sample complexity of such divergences, we then introduce $\mu_n$, the double empirical distribution of the smoothed, projected $\mu$. The distribution $\mu_n$ results from a double sampling process: first according to the original distribution $\mu$, and second according to the convolution of the projection of $\mu$ on the unit sphere with the Gaussian smoothing. We particularly focus on the Gaussian smoothed sliced Wasserstein distance and prove that it suffers from an unavoidable approximation bias of order log(n/2). We also derive other properties, including continuity, of the different divergences with respect to the smoothing parameter. We support our theoretical findings with empirical studies in the context of privacy-preserving domain adaptation.
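A Monte Carlo sketch of the smoothed sliced distance: project both samples on random directions, add Gaussian noise to the projections (the privacy-smoothing step), and average the closed-form 1D Wasserstein distance between sorted samples. All parameters and data below are assumptions for illustration.

```python
import numpy as np

def smoothed_sliced_w2(X, Y, sigma=0.1, n_proj=50, seed=0):
    """Monte Carlo estimate of a Gaussian smoothed sliced Wasserstein-2
    distance between two equal-size samples X, Y in R^d."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)              # random direction
        px = np.sort(X @ theta + sigma * rng.standard_normal(len(X)))
        py = np.sort(Y @ theta + sigma * rng.standard_normal(len(Y)))
        total += np.mean((px - py) ** 2)            # closed-form 1D W2^2
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
d_same = smoothed_sliced_w2(X, X.copy())    # same distribution: small
d_far = smoothed_sliced_w2(X, X + 5.0)      # shifted distribution: large
```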

Mokhtar Z. Alaya

[...]

Hanna Bacave

We propose a way to account for inspection errors in a particular framework. We consider a situation where the lifetime of a system depends essentially on a particular part. Deterioration of this part is regarded as an unacceptable state for the safety of the system, and a major renewal is then deemed necessary. Thus, the statistical analysis of the deterioration time distribution of this part is of primary interest for the preventive maintenance of the system. In this context, we faced the following problem: in the early life of the system, unwarranted renewals of the part are decided upon, caused by overly cautious behaviour. Such unnecessary renewals make the statistical analysis of deterioration time data difficult and can induce an underestimation of the mean life of the part. To overcome this difficulty, we propose to regard the problem as an incomplete data model and present its estimation under the maximum likelihood methodology. Numerical experiments show that this approach eliminates the pessimistic bias in the estimation of the mean life of the part. We also present a Bayesian analysis of the problem, which can be useful in a small-sample setting.

Gilles Celeux

We deploy artificial neural networks to unfold neutron spectra from measured energy-integrated quantities. Neutron spectra are an important input for computing the absorbed dose and the kerma, serving radiation protection as well as nuclear safety. The architectures are inspired by convolutional neural networks: the first is made up of residual transposed-convolution blocks, while the second is a modified version of the U-net architecture. A large and balanced dataset is simulated under "realistic" physical constraints to train the architectures efficiently. Results show highly accurate prediction of neutron spectra ranging from thermal up to fast. The dataset processing, the attention paid to performance metrics and the hyperparameter optimization underpin the architectures' robustness.

Maha Bouhadida

During a severe accident in a nuclear reactor, extreme temperatures may be reached (T > 2500 K). In these conditions, the nuclear fuel may react with the Zircaloy cladding and then with the steel vessel, forming a mixture of solid and liquid phases called in-vessel corium. In the worst scenario, this mixture may penetrate the vessel and reach the concrete underneath the reactor. In order to develop the TAF-ID thermodynamic database (www.oecd-nea.org/science/taf-id) on nuclear fuels and to predict the high-temperature behaviour of the corium + concrete system, new high-temperature thermodynamic data are needed. The LM2T at the CEA Saclay centre has started an experimental campaign of phase-equilibria measurements at high temperature (up to 2600 K) on corium sub-systems of interest. In particular, a heat treatment at 2500 K has been performed on two prototypic ex-vessel corium samples (within the U-Zr-Al-Ca-Si-O system) with different amounts of CaO and SiO$_2$. The results show that, depending on the SiO$_2$ content, the final configuration of the samples can be significantly different. The sample with the higher CaO content showed a dendritic structure representative of a single quenched liquid phase, whilst the sample richer in SiO$_2$ exhibited a microstructure suggesting the presence of a liquid miscibility gap. Furthermore, a new laser-heating setup has been designed. This technique allows measurements at very high temperature (T > 3000 K) while limiting interactions between the sample and its surroundings.

Andrea Quaini

This work is part of a general study on the long-term safety of the geological repository of nuclear wastes. A diffusion equation with a moving boundary in one dimension is introduced and studied. The model describes some mechanisms involved in corrosion processes at the surface of carbon steel canisters in contact with a claystone formation. The main objective of the paper is to prove the existence of global weak solutions to the problem. For this, a semi-discrete in time minimizing movements scheme à la De Giorgi is introduced. First, the existence of solutions to the scheme is established and then, using a priori estimates, it is proved that as the time step goes to zero these solutions converge up to extraction towards a weak solution to the free boundary model.
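A semi-discrete minimizing movements scheme solves, at each time step, a proximal minimization. The 1D sketch below uses a brute-force grid argmin on an assumed quadratic toy energy, not the corrosion model of the paper.

```python
import numpy as np

def minimizing_movement(x0, F, tau=0.1, n_steps=50, grid=None):
    """Semi-discrete minimizing movements (implicit Euler) scheme a la
    De Giorgi for the gradient flow of F:
        x_{k+1} = argmin_x  |x - x_k|^2 / (2 tau) + F(x),
    here solved by brute force over a fixed 1D grid."""
    if grid is None:
        grid = np.linspace(-3.0, 3.0, 2001)
    x = x0
    for _ in range(n_steps):
        x = grid[np.argmin((grid - x) ** 2 / (2 * tau) + F(grid))]
    return x

# Toy energy F(x) = x^2 / 2: the flow x' = -x decays toward 0
x_final = minimizing_movement(2.0, lambda x: 0.5 * x ** 2)
```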

Benoît Merlet

Principal component analysis is a recognized, powerful and practical method in statistics and data science. It can also be used in modeling as a dimensionality-reduction tool to achieve low-order models of complex multiphysics or engineering systems. Model-order reduction (MOR) methodologies are today an important topic for engineering design and analysis; design-space exploration and accelerated numerical optimization, for example, are made easier by the use of reduced-order models. In this chapter, we discuss the use of the higher-order singular value decomposition (HOSVD) applied to spatiotemporal problems that are parameterized by a set of design variables or physical parameters. We consider data-driven reduced-order modeling based on a design of computer experiments: from high-dimensional computational results returned by high-fidelity solvers (e.g. finite element ones), the HOSVD allows us to determine spatial, temporal and parametric principal components. The dynamics of the system can then be retrieved by identifying the low-order discrete dynamical system. As an application, we consider the dynamics of deformable capsules flowing in microchannels. The study of such fluid-structure interaction problems is motivated by the use of microcapsules as innovative drug-delivery carriers through blood vessels.
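The HOSVD of a space × time × parameter tensor can be sketched in a few lines: take the SVD of each mode unfolding, then contract with the factor transposes to obtain the core tensor. The analytic toy tensor below is an assumed stand-in for high-fidelity solver outputs.

```python
import numpy as np

def hosvd(T):
    """Higher-order SVD of a 3-way tensor: the SVD of each mode unfolding
    gives the factor matrices; the core is T contracted with their
    transposes (truncating the factors would give the reduced model)."""
    def unfold(A, mode):
        return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return core, U

# Toy space x time x parameter tensor from an analytic 'solver'
x = np.linspace(0, 1, 30)[:, None, None]
t = np.linspace(0, 1, 20)[None, :, None]
p = np.linspace(1, 2, 5)[None, None, :]
T = np.sin(np.pi * x * p) * np.exp(-t * p)        # shape (30, 20, 5)
core, U = hosvd(T)
recon = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])
```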

Florian de Vuyst

We focus on the ill-posed data completion problem and its finite element approximation, when recast via the Kohn-Vogelius variational duplication artifice and the Steklov-Poincaré condensation operators. We try to understand the useful hidden features of both the exact and the discrete problems. When discretized with finite elements of degree one, the discrete and exact problems behave in diametrically opposite ways. Indeed, existence of the discrete solution is always guaranteed while its uniqueness may be lost; in contrast, the solution of the exact problem may not exist, but when it does it is unique. We show how the existence of so-called "weak spurious modes" of the exact variational formulation is a source of instability and the reason why existence may fail. For the discrete problem, we find that the cause of non-uniqueness is the occurrence of "spurious modes", and we track their fading effect asymptotically as the mesh size tends to zero. In order to restore uniqueness, we recall the discrete version of the Holmgren principle, introduced in [Azaïez et al, IPSE, 18, 2011], and we discuss the effect of the finite element mesh on uniqueness, using some basic material from graph theory.

F Ben Belgacem

In this paper we analyse a finite volume scheme for a nonlocal version of the Shigesada-Kawazaki-Teramoto (SKT) cross-diffusion system. We prove the existence of solutions to the scheme, derive qualitative properties of the solutions and prove its convergence. The proofs rely on a discrete entropy-dissipation inequality, discrete compactness arguments, and on the novel adaptation of the so-called duality method at the discrete level. Finally, thanks to numerical experiments, we investigate the influence of the nonlocality in the system: on convergence properties of the scheme, as an approximation of the local system and on the development of diffusive instabilities.

Maxime Herda

Compressible multi-material flows are omnipresent in scientific and industrial applications: from supernova explosions, high-speed flows in jet and rocket propulsion, to underwater explosions and vapor explosions in post-accident situations in nuclear reactors, their applications cover almost all aspects of classical fluid physics. In numerical simulations of these flows, interfaces play a crucial role. Poor numerical resolution of the interfaces makes it very difficult to account for physics such as material separation, the location of shocks and contact discontinuities, and the transfer of mass, momentum and heat between different materials or phases. Owing to this importance, sharp interface capturing remains a very active area of research in computational physics. To address this problem, we focus in this paper on the Interface Capturing (IC) strategy and make use of a newly developed Diffuse Interface Method (DIM) called Multidimensional Limiting Process-Upper Bound (MLP-UB). Our analysis shows that this method is easy to implement, easily extendable to multiple space dimensions, can deal with any number of material interfaces, and produces sharp, shape-preserving interfaces, along with their accurate interaction with shocks and contact discontinuities. Numerical experiments show very good results even on rather coarse meshes.

Shambhavi Nandan

For over 60 years, research reactors (RR, or RTR for research testing reactors) have been used as neutron sources for research, radioisotope production ($^{99}$Mo/$^{99m}$Tc), nuclear medicine, materials characterization, etc. Currently, over 240 of these reactors are in operation in 56 countries. They are simpler than power reactors and operate at lower temperature (cooled to below 100°C). The fuel assemblies are typically plates or cylinders of uranium-aluminium (U-Al) alloy coated with pure aluminium. These fuels can be processed in the AREVA La Hague plant after batch dissolution in concentrated nitric acid and mixing with UOX fuel streams. The aim of this study is to accurately measure the solubility of molybdenum in nitric acid solutions containing high concentrations of aluminium. The higher the molybdenum solubility, the more flexible the reprocessing operations, especially when the spent fuels contain high amounts of molybdenum. To be most representative of the dissolution process, uranium-molybdenum alloy and molybdenum metal powder were dissolved in solutions of aluminium nitrate at the nominal dissolution temperature. The experiments showed complete dissolution of the metallic elements after 30 minutes of stirring, even when molybdenum metal was added in excess. After an induction period, a slow precipitation of molybdic acid occurs over about 15 hours. The data obtained show that the molybdenum solubility decreases with increasing aluminium concentration; the solubility law follows an exponential relation around 40 g/L of aluminium, with a high coefficient of determination. Molybdenum solubility is not affected by the presence of gadolinium, nor by an increasing concentration of uranium.

Xavier Hérès

The Gaussian smoothed sliced Wasserstein distance has recently been introduced for comparing probability distributions while preserving privacy on the data. It has been shown, in applications such as domain adaptation, to provide performance similar to its non-private (non-smoothed) counterpart. However, the computational and statistical properties of such a metric have not yet been well established. In this paper, we analyze the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian smoothed sliced divergences. We show that smoothing and slicing preserve the metric property and the weak topology. We also provide results on the sample complexity of such divergences. Since the privacy level depends on the amount of Gaussian smoothing, we analyze the impact of this parameter on the divergence. We support our theoretical findings with empirical studies of Gaussian smoothed and sliced versions of the Wasserstein distance, the Sinkhorn divergence and the maximum mean discrepancy (MMD). In the context of privacy-preserving domain adaptation, we confirm that the Gaussian smoothed sliced Wasserstein and MMD divergences perform very well while ensuring data privacy.

Alain Rakotomamonjy

Optimal Transport (OT) metrics allow for defining discrepancies between two probability measures. The Wasserstein distance has long been the celebrated OT distance in the literature; it requires the compared probability distributions to be supported on the same metric space. Because of its high computational complexity, several approximate Wasserstein distances have been proposed, based on entropy regularization or on slicing combined with one-dimensional Wasserstein computation. In this paper, we propose a novel extension of the Wasserstein distance to compare two incomparable distributions, which hinges on the ideas of distributional slicing, embeddings, and computing the closed-form Wasserstein distance between the sliced distributions. We provide a theoretical analysis of this new divergence, called the heterogeneous Wasserstein discrepancy (HWD), and show that it preserves several interesting properties, including rotation invariance. We show that the embeddings involved in HWD can be efficiently learned. Finally, we provide a large set of experiments illustrating the behavior of HWD as a divergence in the context of generative modeling and in a query framework.

Mokhtar Z. Alaya

Recent works in the Boundary Element Method (BEM) community have been devoted to the derivation of fast techniques to perform the matrix-vector product needed in the iterative solver. Fast BEMs are now very mature. However, it has been shown that the number of iterations can significantly hinder the overall efficiency of fast BEMs, and the derivation of robust preconditioners has become essential to increase the size of the problems that can be considered. Analytical preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of the discretization. The main contribution of this paper is to propose new analytical preconditioners to treat Neumann exterior scattering problems in 2D and 3D elasticity. These preconditioners are local approximations of the adjoint Neumann-to-Dirichlet map; we propose three approximations of different orders. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). An analytical spectral study confirms the expected behavior of the preconditioners, i.e., a better eigenvalue clustering, especially in the elliptic part, contrary to the standard first-kind CFIE. We provide various 2D numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be independent of the density of discretization points per wavelength, which is not the case for the standard CFIE; in addition, it is less sensitive to the frequency.

Stéphanie Chaillat

We propose a novel approach for comparing distributions whose supports do not necessarily lie on the same metric space. Unlike the Gromov-Wasserstein (GW) distance, which compares pairwise distances of elements from each distribution, we consider a method that embeds the metric measure spaces in a common Euclidean space and computes an optimal transport (OT) distance between the embedded distributions. This leads to what we call the sub-embedding robust Wasserstein (SERW) distance. Under some conditions, SERW is a distance defined as an OT distance between the (low-distortion) embedded distributions using a common metric. In addition to this novel proposal, which generalizes several recent OT works, our contributions rest on several theoretical analyses: (i) we characterize the embedding spaces needed to define the SERW distance for distribution alignment; (ii) we prove that SERW enjoys almost the same properties as the GW distance, and we give a cost relation between GW and SERW. The paper also provides numerical illustrations of how SERW behaves on matching problems.

Mokhtar Z. Alaya

We extend the general stochastic matching model on graphs introduced in [13] to matching models on multigraphs, that is, graphs with self-loops. The evolution of the model can be described by a discrete-time Markov chain whose positive recurrence is investigated. Necessary and sufficient stability conditions are provided, together with the explicit form of the stationary probability when the matching policy is 'First Come, First Matched'.
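A minimal simulation sketch of such a model under 'First Come, First Matched' (not the paper's analysis; the compatibility graph, arrival law, and step count below are illustrative assumptions):

```python
import random
from collections import deque

def simulate_fcfm(classes, edges, steps=20000, seed=1):
    """Stochastic matching on a multigraph under 'First Come, First Matched'.
    `edges` is a set of frozensets; a self-loop frozenset({c}) lets two
    items of class c match each other."""
    rng = random.Random(seed)
    waiting = deque()                      # (arrival_time, class), oldest first
    matched = 0
    for t in range(steps):
        c = rng.choice(classes)            # i.i.d. uniform arrivals
        # FCFM: match with the oldest compatible waiting item, if any
        for item in waiting:
            if frozenset({c, item[1]}) in edges:
                waiting.remove(item)
                matched += 1
                break
        else:
            waiting.append((t, c))
    return matched, len(waiting)

# Triangle with a self-loop at 'a': frozenset(("a","a")) == frozenset({"a"})
edges = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("a", "c"), ("a", "a")]}
matched, queued = simulate_fcfm(["a", "b", "c"], edges)
```

Each match removes the arriving item together with one waiting item, so `2 * matched + queued` always equals the number of arrivals; watching `queued` over long runs gives an empirical feel for the stability conditions the paper characterizes exactly.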

Jocelyn Begeot

We present here results on the chemical composition and size distribution of aerosols released during laser cutting of two types of fuel debris simulants (Ex-Vessel and In-Vessel scenarios) in air and underwater, in the context of the dismantling of Fukushima Daiichi. The aerosols systematically have an aerodynamic mass median diameter below 1 μm, with particle sizes generally between 60 nm and 160 nm for cutting in air, and larger diameters (300-400 nm) for underwater experiments. Regarding the chemical composition, iron, chromium and nickel together account for more than 50 % of the samples, whereas hafnium, the surrogate of radioactive uranium, is undetectable. When compositions are transposed to radioactivity, taking into account radioisotope inventories 10 years after the accident, the radioactivity is clearly carried by smaller particles in the air tests (median size around 100 nm) than underwater (median size around 400 nm): 50 % of the radioactivity is present in particles below 90 nm, and 99 % below 950 nm. Caesium carries the largest part of the radioactivity at all sizes below 1 μm in the case of the Ex-Vessel fuel debris simulant. For the In-Vessel simulant, the median aerosol size for radioactivity is around 100 nm, with 59 % of the radioactivity carried by strontium, 17 % by barium, 16 % by minor actinides (modelled by cerium) and 7 % by caesium. For sizes above 1.6 μm, cerium, representing alpha emitters (surrogate of plutonium), is almost the only radioactivity-bearing element (96-97 % of the radioactivity). The data produced here could already be used for modelling, or for designing strategies to implement laser cutting in situ for fuel debris retrieval together with the associated safety strategies.

Claire Dazon

In the context of the decommissioning of the Fukushima Daiichi reactors, several projects have been funded by the Japanese government to prepare the corium retrieval operations. Within this framework, a joint study by ONET Technologies and the laboratories of CEA and IRSN demonstrated the feasibility of the laser cutting technique and provided an estimate of the aerosol source term thus generated. Two corium simulants, synthesized and characterized by CEA-Cadarache, underwent laser cutting tests in air and underwater in the DELIA facility at CEA Saclay, and the emitted aerosols were characterized by IRSN. Characterizing the emitted particles in terms of concentration and size distribution provided information for predicting, in particular, particle transport and deposition, but knowledge of the chemical composition per size class is necessary for better management of occupational and environmental risks. This article presents the results on the chemical composition of the aerosol from one corium simulant, under laser cutting in air, and the associated size distribution.

Emmanuel Porcheron

We consider in this paper a model parabolic variational inequality. This problem is discretized with conforming Lagrange finite elements of order $p ≥ 1$ in space and with the backward Euler scheme in time. The nonlinearity coming from the complementarity constraints is treated with any semismooth Newton algorithm and we take into account in our analysis an arbitrary iterative algebraic solver. In the case $p = 1$, when the system of nonlinear algebraic equations is solved exactly, we derive an a posteriori error estimate on both the energy error norm and a norm approximating the time derivative error. When $p ≥ 1$, we provide a fully computable and guaranteed a posteriori estimate in the energy error norm which is valid at each step of the linearization and algebraic solvers. Our estimate, based on equilibrated flux reconstructions, also distinguishes the discretization, linearization, and algebraic error components. We build an adaptive inexact semismooth Newton algorithm based on stopping the iterations of both solvers when the estimators of the corresponding error components do not affect significantly the overall estimate. Numerical experiments are performed with the semismooth Newton-min algorithm and the semismooth Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative algebraic solver to illustrate the strengths of our approach.
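The semismooth linearization at the core of such algorithms can be illustrated on a toy problem. The sketch below is a generic Newton-min iteration for a small linear complementarity problem, not the paper's solver or estimators; the matrix `A` and data `b` are chosen purely for illustration:

```python
import numpy as np

def newton_min(A, b, x0, tol=1e-12, maxit=50):
    """Semismooth Newton-min for the complementarity problem
    x >= 0, A x - b >= 0, x .* (A x - b) = 0, i.e. min(x, A x - b) = 0."""
    x = x0.copy()
    n = len(b)
    for _ in range(maxit):
        F = A @ x - b
        H = np.minimum(x, F)               # semismooth residual
        if np.linalg.norm(H) < tol:
            break
        # element of the generalized Jacobian of min(x, F):
        # identity rows where x < F, rows of A elsewhere
        J = np.where((x < F)[:, None], np.eye(n), A)
        x = x - np.linalg.solve(J, H)
    return x

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
x = newton_min(A, b, np.ones(2))           # converges to [0.5, 0.0]
```

In an inexact variant of the kind the paper analyzes, the inner `solve` would be replaced by a few iterations of an algebraic solver such as GMRES, stopped adaptively by the error estimators.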

Jad Dabaghi

We propose an adaptive inexact version of a class of semismooth Newton methods that is aware of the continuous (variational) level. As a model problem, we study the system of variational inequalities describing the contact between two membranes. This problem is discretized with conforming finite elements of order $p \geq 1$, yielding a nonlinear algebraic system of variational inequalities. We consider any iterative semismooth linearization algorithm, like Newton-min or Newton--Fischer--Burmeister, which we complement by any iterative linear algebraic solver. We then derive an a posteriori estimate on the error between the exact solution at the continuous level and the approximate solution, valid at any step of the linearization and algebraic resolutions. Our estimate is based on flux reconstructions in discrete subspaces of $\mathbf{H}(\mathrm{div}, \Omega)$ and on potential reconstructions in discrete subspaces of $H^1(\Omega)$ satisfying the constraints. It distinguishes the discretization, linearization, and algebraic components of the error. Consequently, we can formulate adaptive stopping criteria for both solvers, giving rise to an adaptive version of the considered inexact semismooth Newton algorithm. Under these criteria, the efficiency of the leading estimates is also established, meaning that we prove them equivalent to the error up to a generic constant. Numerical experiments for the Newton-min algorithm in combination with the GMRES algebraic solver confirm the efficiency of the developed adaptive method.

Jad Dabaghi

As part of a multi-year programme, test-pit campaigns were carried out on both sides of the Petit-Saint-Bernard pass (2188 m, western Alps), between 750 and 3000 m altitude. The working method sets aside surface surveys in favour of numerous hand-dug test pits, placed in selected topographic contexts and taken down to the base of the Holocene fills. The results obtained document, over the long term, the evolution of the pedo-sedimentary dynamics and the human use of the different altitudinal belts. The significance of the archaeological data collected is discussed with respect to the state of knowledge in a comparison area covering the neighbouring valleys of the western Alps, to existing settlement models, and to the taphonomic indications provided by the pedo-sedimentary study. A programme of complementary analyses intended to clarify the context, the taphonomy and the functional status

Pierre-Jérôme Rey

This paper introduces a new approach for forecasting solar radiation series at a given station on very short time scales. We build a multivariate model using a few stations (3 stations) separated by irregular distances ranging from 26 km to 56 km. The proposed model is a spatio-temporal vector autoregressive (VAR) model specifically designed for the analysis of spatially sparse spatio-temporal data. It differs from classic linear models in using spatial and temporal parameters, where the available predictors are the lagged values at each station. A spatial structure of stations is defined by the sequential introduction of predictors into the model. Moreover, an iterative strategy within the fitting process selects the necessary stations, removing uninteresting predictors, and also selects the optimal order p. We study the performance of this model. The error metric, the relative root mean squared error (rRMSE), is presented at different short time scales. Moreover, we compare the results of our model to the simple and well-known persistence model and to those found in the literature.
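The basic mechanics of such a model can be sketched with ordinary least squares: stack the lagged values of all stations as predictors and compare the fit against persistence. This is a bare-bones illustration on synthetic data, not the paper's station-selection procedure; the series, lag order, and noise levels are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 500, 2                               # series length, VAR order
# synthetic radiation-like series for 3 stations sharing a common signal
common = np.cumsum(rng.normal(size=T))
Y = np.stack([common + rng.normal(scale=2.0, size=T) for _ in range(3)], axis=1)

# design matrix: the predictors are the lagged values at each station
X = np.hstack([Y[p - k - 1:T - k - 1] for k in range(p)])   # shape (T-p, 3p)
target = Y[p:, 0]                            # one-step-ahead forecast, station 0
B, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ B

rrmse = np.sqrt(np.mean((target - pred) ** 2)) / np.mean(np.abs(target))
persistence = Y[p - 1:T - 1, 0]              # naive model: last observed value
rrmse_pers = np.sqrt(np.mean((target - persistence) ** 2)) / np.mean(np.abs(target))
```

Because the persistence predictor is itself one column of the design matrix, the in-sample least-squares fit can never do worse than persistence; the interesting question, which the paper addresses, is out-of-sample performance and which stations and lags to keep.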

Maïna André

In this work, we develop an a-posteriori-steered algorithm for a compositional two-phase flow with exchange of components between the phases in porous media. As a model problem, we choose the two-phase liquid-gas flow with appearance and disappearance of the gas phase, formulated as a system of nonlinear evolutive partial differential equations with nonlinear complementarity constraints. The discretization of our model is based on the backward Euler scheme in time and the finite volume scheme in space. The resulting nonlinear system is solved via an inexact semismooth Newton method. The key ingredients for the a posteriori analysis are the discretization, linearization, and algebraic flux reconstructions, which allow us to devise estimators for each error component. These enable us to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver whenever the corresponding error components do not affect the overall error significantly. Numerical experiments are performed using the Newton-min algorithm as well as the Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative linear solver to show the efficiency of the proposed adaptive method.

Ibtihel Ben Gharbia

The γ-irradiation of a biphasic system composed of tri-n-butylphosphate in hydrogenated tetrapropylene (TPH) in contact with palladium(II) nitrate in nitric acid aqueous solution led to the formation of two precipitates. A thorough characterization of these solids was performed by means of various analytical techniques, including X-Ray Diffraction (XRD), Thermal Gravimetric Analysis coupled with Differential Scanning Calorimetry (TGA-DSC), X-ray Photoelectron Spectroscopy (XPS), InfraRed (IR), Raman and Nuclear Magnetic Resonance (NMR) spectroscopy, and ElectroSpray Ionization Mass Spectrometry (ESI-MS). Investigations showed that the two precipitates exhibit quite similar structures. They are composed of at least two compounds: palladium cyanide and palladium species containing ammonium, phosphorus or carbonyl groups. Several mechanisms are proposed to explain the formation of Pd(CN)2.

Bénédicte Simon

This paper focuses on solving coupled problems of lumped parameter models. Such problems are of interest for the simulation of severe accidents in nuclear reactors: these coarse-grained models allow for the fast calculations required by the statistical analyses used in risk assessment, and for the solution of large problems when considering the whole severe accident scenario. However, this modeling approach has several numerical flaws. Moreover, in this industrial context, computational efficiency is of great importance, leading to various numerical constraints. The objective of this research is to analyze the applicability of explicit coupling strategies for solving such coupled problems and to design implicit coupling schemes allowing stable and accurate computations. The proposed schemes are theoretically analyzed and tested within CEA's PROCOR platform on a heat conduction problem solved with coupled lumped parameter models and coupled 1D models. Numerical results are discussed and allow us to emphasize the benefits of using the designed coupling schemes instead of the usual explicit coupling schemes.
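The stability gap between explicit and implicit coupling can be seen on the smallest possible example: two lumped thermal masses exchanging heat. This toy sketch is not the platform's schemes; the coupling coefficient, time step, and initial states are arbitrary choices that make the contrast visible:

```python
import numpy as np

a, dt, steps = 5.0, 0.5, 40               # strong coupling, large time step
T_exp = np.array([1.0, -1.0])             # two lumped thermal masses
T_imp = T_exp.copy()

# implicit coupling: backward Euler on the coupled 2x2 system
M = np.array([[1 + dt * a, -dt * a],
              [-dt * a, 1 + dt * a]])

for _ in range(steps):
    # explicit coupling: each model advances using the other's *old* state
    T_exp = T_exp + dt * a * (T_exp[::-1] - T_exp)
    # implicit coupling: solve for both new states simultaneously
    T_imp = np.linalg.solve(M, T_imp)
```

With `dt * a = 2.5`, the explicitly coupled states diverge (the temperature difference is amplified by a factor 4 per step), while the implicitly coupled states relax to the common equilibrium unconditionally; this is the kind of behavior that motivates designing implicit coupling schemes.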

Louis Viot

One of the important challenges for the decommissioning of the damaged reactors of the Fukushima Daiichi Nuclear Power Plant is the safe retrieval of the fuel debris or corium. It is especially important to investigate the cutting conditions for the air configuration and for the underwater configuration at different water levels. Among cutting techniques, the laser technique is well adapted to the expected material, such as corium, which has an irregular shape and heterogeneous composition. A French consortium (ONET Technologies, CEA and IRSN) is being subsidized by the Japanese government to implement R&D related to the laser cutting of Fukushima Daiichi fuel debris and to dust collection technology. Debris simulants have been manufactured in the PLINIUS platform to represent Molten Core-Concrete Interaction as estimated from Fukushima Daiichi calculations. In these simulants, uranium is replaced by hafnium and the major fission products by their natural isotopes. During laser cutting experiments in the DELIA facility, aerosols were collected using filters and impactors, and then analyzed. Both chemical analyses (dissolution + ICP-MS and ICP-AES) and microscopic analyses (SEM-EDS) will be presented and discussed. These data provide insights into the expected dust releases during cutting and can be converted into radioactivity estimates. They have also been successfully compared to thermodynamic calculations with the NUCLEA database.

Christophe Journeau

We consider a degenerate parabolic system modelling the flow of fresh and salt water in an anisotropic porous medium, in the context of seawater intrusion. We propose and analyze a nonlinear Control Volume Finite Element scheme. This scheme ensures the nonnegativity of the discrete solution without any restriction on the mesh or on the anisotropy tensor. Moreover, it provides a control on the entropy. Based on these nonlinear stability results, we show that the scheme converges towards a weak solution to the problem. Numerical results are provided to illustrate the behavior of the model and of the scheme.

Ahmed Ait Hammou Oulhaj

Summary of the paper "A Coq formal proof of the Lax-Milgram Theorem", CPP 2017.

Sylvie Boldo

We introduce in this paper a technique for the reduced order approximation of parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation, in mean with respect to the parameter, of the error between the exact solution and the Galerkin solution calculated on the subspace, in the quadratic norm associated to the elliptic operator. This is analogous to the best approximation property of the Proper Orthogonal Decomposition (POD) subspaces, except that in our case the norm is parameter-dependent, so the POD optimal subspaces cannot be characterized by means of a spectral problem. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step. We prove that the partial sums converge to the continuous solution in the mean quadratic elliptic norm.
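For contrast with the parameter-dependent construction above, the classical POD baseline it generalizes is easy to sketch: the best fixed subspace in the Euclidean mean-square sense comes from an SVD of the snapshot matrix. The data below are synthetic and the projection is plain Galerkin-free orthogonal projection, purely to illustrate the baseline, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
# snapshot matrix: columns are solutions u(mu) for sampled parameters mu
n, m = 200, 30
modes = rng.normal(size=(n, 3))            # hidden low-dimensional structure
coeffs = rng.normal(size=(3, m))
S = modes @ coeffs + 1e-6 * rng.normal(size=(n, m))

U, s, _ = np.linalg.svd(S, full_matrices=False)
k = 3
V = U[:, :k]                               # POD basis: best rank-k subspace in mean

# approximate one snapshot by its orthogonal projection onto the POD subspace
u = S[:, 0]
u_approx = V @ (V.T @ u)
rel_err = np.linalg.norm(u - u_approx) / np.linalg.norm(u)
```

In the paper's setting the norm in which "best" is measured changes with the parameter, so no single SVD yields the optimal subspaces; that is what the deflation construction addresses.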

Mejdi Azaiez

This paper focuses on Generalized Impedance Boundary Conditions (GIBC) with second order derivatives in the context of linear elasticity and general curved interfaces. A condition of the Wentzell type modeling thin layer coatings on some elastic structure is obtained through an asymptotic analysis of order one of the transmission problem at the thin layer interfaces with respect to the thickness parameter. We prove the well-posedness of the approximate problem and the theoretical quadratic accuracy of the boundary conditions. Then we perform a shape sensitivity analysis of the GIBC model in order to study a shape optimization/optimal design problem. We prove the existence and characterize the first shape derivative of this model. A comparison with the asymptotic expansion of the first shape derivative associated to the original thin layer transmission problem shows that we can interchange the asymptotic and shape derivative analysis. Finally we apply these results to the compliance minimization problem. We compute the shape derivative of the compliance in this context and present some numerical simulations.

Fabien Caubet

The Finite Element Method is a widely used method to solve numerical problems coming, for instance, from physics or biology. To obtain the highest confidence in the correctness of numerical simulation programs implementing the Finite Element Method, one has to formalize the mathematical notions and results that establish the soundness of the method. The Lax-Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states the existence and uniqueness of the solution to the weak formulation of some boundary value problems. This article presents the full formal proof of the Lax-Milgram theorem in Coq. It requires many results from linear algebra, geometry, functional analysis, and Hilbert spaces.
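For reference, the statement being formalized can be written in its standard textbook form (paraphrased here, with the usual notation for the continuity constant $C$ and coercivity constant $\alpha$):

```latex
\begin{theorem}[Lax--Milgram]
Let $H$ be a real Hilbert space and $a : H \times H \to \mathbb{R}$ a bilinear
form that is bounded, $|a(u,v)| \le C \,\|u\|\,\|v\|$, and coercive,
$a(u,u) \ge \alpha \|u\|^2$ with $\alpha > 0$. Then for every bounded linear
functional $f \in H'$ there exists a unique $u \in H$ such that
\[
  a(u, v) = f(v) \qquad \text{for all } v \in H,
\]
and moreover $\|u\| \le \alpha^{-1} \|f\|_{H'}$.
\end{theorem}
```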

Sylvie Boldo

Faults and geological barriers can drastically affect the flow patterns in porous media. Such fractures can be modeled as interfaces that interact with the surrounding matrix. We propose a new technique for estimating the location and hydrogeological properties of a small number of large fractures in a porous medium from given distributed pressure or flow data. At each iteration, the algorithm builds a short list of candidates by comparing fracture indicators. These indicators quantify, at first order, the decrease of a data misfit function, and they are cheap to compute. The best candidate is then picked by minimizing the objective function for each candidate in the list. Optimally driven by the fit to the data, the approach has the great advantage of requiring neither remeshing nor shape derivation. The stability of the algorithm is shown on a series of numerical examples representative of typical situations.
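The indicator-then-refit loop can be caricatured in a linear least-squares setting. This is only an analogy, not the paper's fracture model: here columns of a matrix `A` stand in for candidate "fractures", the gradient of the misfit plays the role of the cheap first-order indicator, and all data are synthetic:

```python
import numpy as np

def greedy_indicator_selection(A, d, n_pick=3):
    """Greedy selection in the spirit of indicator-driven algorithms: rank
    candidate columns by a cheap first-order indicator (gradient of the
    misfit), then keep the short-listed candidate that best reduces the
    misfit after refitting.  A is (observations x candidates), d the data."""
    chosen, x = [], None
    for _ in range(n_pick):
        r = d if x is None else d - A[:, chosen] @ x
        indicator = np.abs(A.T @ r)               # first-order misfit decrease
        indicator[chosen] = -np.inf               # never re-pick a candidate
        shortlist = np.argsort(indicator)[-5:]    # short list of candidates
        best, best_misfit, best_x = None, np.inf, None
        for c in shortlist:
            cols = chosen + [int(c)]
            xc, *_ = np.linalg.lstsq(A[:, cols], d, rcond=None)
            m = np.linalg.norm(d - A[:, cols] @ xc)
            if m < best_misfit:
                best, best_misfit, best_x = int(c), m, xc
        chosen.append(best)
        x = best_x
    return chosen, x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 40))
d = A[:, [3, 17, 25]] @ np.array([2.0, -1.5, 1.0])  # three "true" candidates
chosen, x = greedy_indicator_selection(A, d)
```

The point of the two-stage structure is the same as in the paper: the indicator is cheap to evaluate for every candidate, while the more expensive minimization is only run on the short list.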

Hend Ben Ameur

We develop the shape derivative analysis of solutions to the problem of scattering of time-harmonic electromagnetic waves by a bounded penetrable obstacle. Since boundary integral equations are a classical tool to solve electromagnetic scattering problems, we study the shape differentiability properties of the standard electromagnetic boundary integral operators. The latter are typically bounded on the space of tangential vector fields of mixed regularity $TH^{-\frac{1}{2}}(\operatorname{div}_{\Gamma},\Gamma)$. Using Helmholtz decomposition, we can base their analysis on the study of pseudo-differential integral operators in standard Sobolev spaces, but we then have to study the Gâteaux differentiability of surface differential operators. We prove that the electromagnetic boundary integral operators are infinitely differentiable without loss of regularity. We also give a characterization of the first shape derivative of the solution of the dielectric scattering problem as a solution of a new electromagnetic scattering problem.

Martin Costabel

In this paper we study the shape differentiability properties of a class of boundary integral operators and of potentials with weakly singular pseudo-homogeneous kernels acting between classical Sobolev spaces, with respect to smooth deformations of the boundary. We prove that the boundary integral operators are infinitely differentiable without loss of regularity. The potential operators are infinitely shape differentiable away from the boundary, whereas their derivatives lose regularity near the boundary. We study the shape differentiability of surface differential operators. The shape differentiability properties of the usual strongly singular or hypersingular boundary integral operators of interest in acoustic, elastodynamic or electromagnetic potential theory can then be established by expressing them in terms of integral operators with weakly singular kernels and of surface differential operators.

Martin Costabel

The interface problem describing the scattering of time-harmonic electromagnetic waves by a dielectric body is often formulated as a pair of coupled boundary integral equations for the electric and magnetic current densities on the interface Γ. In this paper, following an idea developed by Kleinman and Martin for acoustic scattering problems, we consider methods for solving the dielectric scattering problem using a single integral equation over Γ for a single unknown density. One knows that such boundary integral formulations of the Maxwell equations are not uniquely solvable when the exterior wave number is an eigenvalue of an associated interior Maxwell boundary value problem. We obtain four different families of integral equations for which we can show that, by choosing some parameters in an appropriate way, they become uniquely solvable for all real frequencies. We analyze the well-posedness of the integral equations in the space of finite energy on smooth and non-smooth boundaries.

Martin Costabel

We consider a model for fluid flow in a porous medium with a fracture. In this model, the fracture is represented as an interface between subdomains, where specific equations have to be solved. In this article we analyse the discrete problem, assuming that the fracture mesh and the subdomain meshes are completely independent, but that the geometry of the fracture is respected. We show that despite this non-conformity, first order convergence is preserved with the lowest order Raviart-Thomas(-Nedelec) mixed finite elements. Numerical simulations confirm this result.

Najla Frih

The contact between two membranes can be described by a system of variational inequalities, where the unknowns are the displacements of the membranes and the action of a membrane on the other one. A discretization of this system is proposed in Part 1 of this work, where the displacements are approximated by standard finite elements and the action by a local postprocessing which admits an equivalent mixed reformulation. Here, we perform the a posteriori analysis of this discretization and prove optimal error estimates. Next, we present numerical experiments that confirm the efficiency of the error indicators.

Faker Ben Belgacem

We develop the shape derivative analysis of solutions to the problem of scattering of time-harmonic electromagnetic waves by a bounded penetrable obstacle. Since boundary integral equations are a classical tool to solve electromagnetic scattering problems, we study the shape differentiability properties of the standard electromagnetic boundary integral operators. Using Helmholtz decomposition, we can base their analysis on the study of scalar integral operators in standard Sobolev spaces, but we then have to study the Gâteaux differentiability of surface differential operators. We prove that the electromagnetic boundary integral operators are infinitely differentiable without loss of regularity and that the solutions of the scattering problem are infinitely shape differentiable away from the boundary of the obstacle, whereas their derivatives lose regularity on the boundary. We also give a characterization of the first shape derivative as a solution of a new electromagnetic scattering problem.

Martin Costabel

Lens antennas are devices operating on electromagnetic waves, consisting of a primary source and a dielectric focusing system. The recent rise of millimetre-wave applications (for example, driver-assistance radar) calls for lens antennas of a few centimetres that meet requirements specific to each case. One of the problems to solve is to determine the optimal shape of the lens given (i) the characteristics of the primary source and (ii) the prescribed radiation characteristics. This PhD project aims to develop new tools for shape optimization using an integral formulation of the problem. The thesis is organized in two parts. In the first, we construct several integral formulations for the dielectric scattering problem using a surface integral equation approach. In the second, we study the shape derivatives of the standard boundary integral operators in electromagnetism, with a view to incorporating them into a shape optimization algorithm.

Frédérique Le Louër