Publications on HAL

[hal-04750754] Large graph limits of local matching algorithms on uniform random graphs

In this work, we propose a large-graph-limit estimate of the matching coverage for several matching algorithms on general graphs generated by the configuration model. For a wide class of local matching algorithms, namely algorithms that only use information on the immediate neighborhood of the explored nodes, we propose a joint construction of the graph by the configuration model and of the resulting matching on that graph. This leads to a generalization, in infinite dimension, of the differential equation method of Wormald: we keep track of the matching algorithm over time by a measure-valued CTMC, for which we prove convergence, in the large-graph limit, to a deterministic hydrodynamic limit, identified as the unique solution of a system of ODEs in the space of integer measures. The asymptotic proportion of nodes covered by the matching then appears as a simple function of that solution. We make this solution explicit for three particular local algorithms: the classical greedy algorithm, and the uni-min and uni-max algorithms, two variants of the greedy algorithm that select, as the match of any explored node, its neighbor with the least (respectively, largest) residual degree.
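For intuition only, here is a minimal Python simulation sketch of the three local rules on a configuration-model graph; it is not the paper's measure-valued construction, the residual-degree bookkeeping is a crude approximation, and all function names are hypothetical.

```python
import random

def configuration_model(degrees, seed=0):
    """Pair half-edges uniformly at random (self-loops/multi-edges possible)."""
    rng = random.Random(seed)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = {v: [] for v in range(len(degrees))}
    for u, v in zip(stubs[::2], stubs[1::2]):
        adj[u].append(v)
        adj[v].append(u)
    return adj

def matching_coverage(adj, rule="greedy", seed=0):
    """Explore nodes in uniform random order; match each unmatched node to an
    unmatched neighbour chosen by a purely local rule."""
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)
    matched = set()
    residual = {v: len(adj[v]) for v in adj}   # degree towards unmatched nodes
    for v in order:
        if v in matched:
            continue
        nbrs = [u for u in adj[v] if u not in matched and u != v]
        if not nbrs:
            continue
        if rule == "greedy":                    # uniform choice
            u = rng.choice(nbrs)
        elif rule == "uni-min":                 # least residual degree
            u = min(nbrs, key=residual.get)
        else:                                   # "uni-max": largest residual degree
            u = max(nbrs, key=residual.get)
        matched |= {u, v}
        for w in adj[u] + adj[v]:
            residual[w] -= 1
    return len(matched) / len(adj)              # proportion of covered nodes

print(matching_coverage(configuration_model([3] * 2000), rule="uni-min"))
```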

Mohamed Habib Aliou Diallo Aoudi

[hal-04053732] Non-parametric Observation Driven HMM

Hidden Markov models (HMMs) are used in many different fields to study the dynamics of a process that cannot be directly observed. However, in some cases, the dependency structure of an HMM is too simple to describe the dynamics of the hidden process. In particular, in some applications in finance or ecology, the transition probabilities of the hidden Markov chain can also depend on the current observation. In this work we are interested in extending the classical HMM to this situation. We define a new model, referred to as the Observation-Driven Hidden Markov Model (OD-HMM). We present a complete study of the general non-parametric OD-HMM with discrete and finite state spaces (hidden and observed variables). We study its identifiability. Then we study the consistency of the maximum likelihood estimators. We derive the associated forward-backward equations for the E-step of the EM algorithm. The quality of the procedure is tested on simulated data sets. Finally, we illustrate the use of the model in an application to the study of annual plant dynamics. This work sets theoretical and practical foundations for a new framework that could be further extended, on the one hand to the parametric context to simplify estimation, and on the other hand to hidden semi-Markov models for more realism.
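As a sketch of what "observation-driven" means in practice, here is a scaled forward recursion in Python under the assumed convention that the hidden transition used at time t depends on the previous observation; the arrays A, B, pi and all toy numbers are hypothetical, not the paper's parametrization.

```python
import numpy as np

def od_hmm_loglik(y, A, B, pi):
    """Scaled forward recursion for a discrete OD-HMM.
    A[o]    : hidden transition matrix used when the previous observation is o
    B[j, o] : P(Y_t = o | X_t = j);  pi : initial hidden-state distribution."""
    alpha = pi * B[:, y[0]]
    c = alpha.sum(); alpha /= c
    loglik = np.log(c)
    for t in range(1, len(y)):
        alpha = (alpha @ A[y[t - 1]]) * B[:, y[t]]   # observation-driven step
        c = alpha.sum(); alpha /= c
        loglik += np.log(c)
    return loglik

# Two hidden states, two observed symbols (toy numbers).
A = np.array([[[0.9, 0.1], [0.2, 0.8]],    # transitions after observing 0
              [[0.5, 0.5], [0.4, 0.6]]])   # transitions after observing 1
B = np.array([[0.7, 0.3], [0.1, 0.9]])
print(od_hmm_loglik([0, 1, 1, 0], A, B, pi=np.array([0.5, 0.5])))
```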

Hanna Bacave

[hal-04713897] Finite element method. Detailed proofs to be formalized in Coq

To obtain the highest confidence in the correctness of numerical simulation programs for the resolution of Partial Differential Equations (PDEs), one has to formalize the mathematical notions and results that allow one to establish the soundness of the approach. The finite element method is one of the popular tools for the numerical resolution of a wide range of PDEs. The purpose of this document is to provide the formal proof community with very detailed pen-and-paper proofs for the construction of the Lagrange finite elements of any degree on simplices in positive dimension.

François Clément

[hal-04708986] Enhanced Drag Force Estimation in Automotive Design: A Surrogate Model Leveraging Limited Full-Order Model Drag Data and Comprehensive Physical Field Integration

In this paper, a novel surrogate model for shape-parametrized vehicle drag force prediction is proposed. It is assumed that only a limited dataset of high-fidelity CFD results is available, typically fewer than ten high-fidelity CFD solutions for different shape samples. The idea is to take advantage not only of the drag coefficients, but also of physical fields such as velocity, pressure and kinetic energy evaluated on a cutting plane in the wake of the vehicle and perpendicular to the road. This additional 'augmented' information provides a more accurate and robust prediction of the drag force, compared to a standard surface-response methodology. As a first step, an original reparametrization of the shape based on combination coefficients of shape principal components is proposed, leading to a low-dimensional representation of the shape space. The second step consists in determining principal components of the x-direction momentum flux through a cutting plane behind the car. The final step is to find the mapping between the reduced shape description and the momentum flux in order to achieve an accurate drag estimation. The resulting surrogate model is a space-parameter separated representation with shape principal component coefficients and spatial modes dedicated to drag-force evaluation. The algorithm can deal with shapes of variable mesh by using an optimal transport procedure that interpolates the fields on a shared reference mesh. The machine learning algorithm is challenged on a car concept with a shape design space of dimension three. With only two well-chosen samples, the numerical algorithm is able to return a drag surrogate model with reasonable uniform error over the validation dataset. An incremental learning approach involving additional high-fidelity computations is also proposed, and is shown to improve the model accuracy. The study also shows the sensitivity of the results with respect to the initial experimental design. As feedback, we discuss and suggest what appear to be the correct choices of experimental designs for best results.
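The pipeline can be sketched in a few lines of numpy on random placeholder data (all sizes, names and data are hypothetical): PCA of the shapes, PCA of the wake fields, and a least-squares map between the two sets of coefficients.

```python
import numpy as np

def pca_modes(X, k):
    """Principal components of the rows of X (samples x features)."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k], (X - mean) @ Vt[:k].T   # mean, modes, coefficients

# Hypothetical data: a few shape samples and momentum-flux fields, assumed
# already interpolated on a shared reference mesh (e.g. by optimal transport).
shapes = np.random.rand(5, 3000)   # 5 samples, flattened shape coordinates
fluxes = np.random.rand(5, 2000)   # x-momentum flux on the cutting plane

s_mean, s_modes, s_coef = pca_modes(shapes, k=2)  # reduced shape description
f_mean, f_modes, f_coef = pca_modes(fluxes, k=2)  # spatial flux modes

# Affine map from shape coefficients to flux coefficients (least squares).
W, *_ = np.linalg.lstsq(np.c_[s_coef, np.ones(len(s_coef))], f_coef, rcond=None)

def predict_flux(shape):
    a = (shape - s_mean) @ s_modes.T     # reduced coordinates of the new shape
    b = np.r_[a, 1.0] @ W                # predicted flux coefficients
    return f_mean + b @ f_modes          # reconstructed field on the plane

# The drag estimate then follows by integrating the predicted momentum flux.
print(predict_flux(np.random.rand(3000)).shape)
```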

Kalinja Naffer-Chevassier

[hal-04702353] Fast inference for stationary time series

This paper considers statistical inference for stationary time series under weak assumptions. First, a frequency-domain approach is proposed for fast estimation based on a one-step procedure. This method corrects an initial Whittle guess estimator, computed on a subsample, by a single Fisher-scoring step. The resulting estimator shares the same asymptotic properties as the Whittle estimator on the whole sample, while drastically reducing the computation time. Second, the asymptotic covariance matrix of the Whittle estimator is estimated for full inference, solving an open question raised by Shao (2010).
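As a toy illustration of the one-step idea, here is a Python sketch for an AR(1) with unit innovation variance, using numerical derivatives and hypothetical tuning choices; the paper's procedure covers general stationary models and also estimates the asymptotic covariance.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def periodogram(x):
    n = len(x)
    lam = 2 * np.pi * np.arange(1, n // 2) / n
    I = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)
    return lam, I

def whittle_obj(theta, lam, I):
    """Whittle contrast for an AR(1) with unit innovation variance."""
    f = 1.0 / (2 * np.pi * np.abs(1 - theta * np.exp(-1j * lam)) ** 2)
    return np.mean(np.log(f) + I / f)

def one_step_whittle(x, m):
    """Pilot Whittle fit on the first m observations, then one
    scoring (Newton) step on the full-sample contrast."""
    lam0, I0 = periodogram(x[:m])
    pilot = minimize_scalar(whittle_obj, bounds=(-0.99, 0.99),
                            args=(lam0, I0), method="bounded").x
    lam, I = periodogram(x)
    h = 1e-5                                   # numerical derivatives
    g = (whittle_obj(pilot + h, lam, I) - whittle_obj(pilot - h, lam, I)) / (2 * h)
    H = (whittle_obj(pilot + h, lam, I) - 2 * whittle_obj(pilot, lam, I)
         + whittle_obj(pilot - h, lam, I)) / h ** 2
    return pilot - g / H

rng = np.random.default_rng(0)
x = np.zeros(10_000)
for t in range(1, len(x)):                     # simulate an AR(1), theta = 0.5
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
print(one_step_whittle(x, m=1_000))            # close to 0.5
```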

Samir Ben Hariz

[hal-04033438] Robust augmented energy a posteriori estimates for Lipschitz and strongly monotone elliptic problems

In this paper, we design a posteriori estimates for finite element approximations of nonlinear elliptic problems satisfying strong-monotonicity and Lipschitz-continuity properties. These estimates include, and build on, any iterative linearization method that satisfies a few clearly identified assumptions; this encompasses the Picard, Newton, and Zarantonello linearizations. The estimates give a guaranteed upper bound on an augmented energy difference (reliability with constant one), as well as a lower bound (efficiency up to a generic constant). We prove that for the Zarantonello linearization, this generic constant only depends on the space dimension, the mesh shape regularity, and possibly the approximation polynomial degree in four or more space dimensions, making the estimates robust with respect to the strength of the nonlinearity. For the other linearizations, there is only a computable dependence on the local variation of the linearization operators. We also derive similar estimates for the usual energy difference that depend locally on the nonlinearity and improve the established bound. Numerical experiments illustrate and validate the theoretical results, for both smooth and singular solutions.

André Harnist

[tel-04500378] A weakly diffusive Eulerian method applied to a fully conservative model for the simulation of multifluid flows

In this manuscript, we develop a numerical method suited to the simulation of flows of immiscible compressible fluids. To model these flows, we analyze an original fully conservative six-equation system, closed by a stiffened-gas equation of state and a pressure-equilibrium equation. We also introduce a numerical scheme of order 2 in space and time, specifically designed to capture the interfaces between fluids in multi-dimensional configurations. To reach order 2, we develop a multidimensional slope-reconstruction method based on the local extremum diminishing (LED) stability criterion. The second-order scheme associated with the fully conservative model induces oscillations in the pressure profiles. To avoid these spurious oscillations, we prove a set of essential properties. First, we establish CFL-type stability conditions imposed by the slope reconstructions. Then, we prove a theorem guaranteeing consistency between the energy equation and the transport of the volume fractions. Next, we propose a two-stage pressure reconstruction that ensures the positivity of the internal energy. Finally, we develop a one-step numerical method suited to the simulation of flows involving more than two fluids. All the results presented in this document are illustrated by test cases in one, two, or three space dimensions.

Vincent Mahy

[hal-03915451] Uniform Consistency for Functional Conditional U-Statistics Using Delta-Sequences

U-statistics are a fundamental class of statistics arising from the modeling of quantities of interest defined by responses from multiple subjects. They generalize the empirical mean of a random variable X to sums over all k-tuples of observations of X. This paper examines a setting for nonparametric statistical curve estimation based on an infinite-dimensional covariate, including Stute's estimator as a special case. In this functional context, the class of "delta-sequence estimators" is defined and discussed. The orthogonal series method and the histogram method are both included in this class. We establish the uniform almost-complete convergence, with rates, of these estimators under certain broad conditions. Moreover, in the same context, we show the uniform almost-complete convergence of the nonparametric inverse probability of censoring weighted (I.P.C.W.) estimators of the regression function under random censorship, which is of interest in its own right. Among the potential applications are discrimination problems, metric learning, and time series prediction from a continuous set of past values.
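In standard notation, the U-statistic of degree $k$ with kernel $h$ is the average of the kernel over all $k$-tuples of distinct observations, which reduces to the empirical mean when $k = 1$ and $h(x) = x$:

$$U_n(h) \;=\; \binom{n}{k}^{-1} \sum_{1 \le i_1 < \cdots < i_k \le n} h\bigl(X_{i_1}, \ldots, X_{i_k}\bigr).$$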

Salim Bouzebda

[hal-02459255] Metabolic Flux Analysis in Isotope Labeling Experiments Using the Adjoint Approach

[...]

Stéphane Mottelet

[hal-03294727] The jamming constant of uniform random graphs

By jointly constructing a random graph and an associated exploration process, we define the dynamics of a "parking process" on a class of uniform random graphs as a measure-valued Markov process representing the empirical degree distribution of non-explored nodes. We then establish a functional law of large numbers for this process as the number of vertices grows to infinity, allowing us to assess the jamming constant of the considered random graphs, i.e. the size of the maximal independent set discovered by the exploration algorithm. This technique, which can be applied to any uniform random graph with a given, possibly unbounded, degree distribution, can be seen as a generalization, in the space of measures, of the differential equation method introduced by Wormald.
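For intuition, the exploration can be mimicked in a few lines of Python; the cycle graph and its classical random-sequential-adsorption jamming density are only a sanity check, not the paper's setting.

```python
import random

def jamming_fraction(adj, seed=0):
    """Greedy exploration ('parking'): repeatedly activate a uniformly chosen
    unexplored node and block its neighbours, until every node is explored.
    The active nodes form a maximal independent set."""
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)
    state = {}                                  # node -> "active" / "blocked"
    for v in order:
        if v not in state:
            state[v] = "active"
            for u in adj[v]:
                state.setdefault(u, "blocked")
    return sum(s == "active" for s in state.values()) / len(adj)

n = 10_000
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(jamming_fraction(cycle))   # classically close to (1 - e^-2)/2 ~ 0.432
```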

Paola Bermolen

[hal-04267859] Test allocation based on risk of infection from first and second order contact tracing

Under limited available resources, strategies for mitigating the propagation of an epidemic, such as random testing and contact tracing, become inefficient. Here, we propose to allocate the resources accurately by computing, over time, an individual risk of infection based on the partial observation of the epidemic spreading on a contact network; this risk is defined as the probability of getting infected from any possible transmission chain of length up to two originating from recently detected individuals. To evaluate the performance of our method and the effects of some key parameters, we perform comparative simulated experiments using data generated by an agent-based model.
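A minimal Python sketch of this risk computation could look as follows, assuming independent transmissions along chains and a dictionary of per-contact transmission probabilities (both simplifying assumptions; the paper defines the risk precisely).

```python
def infection_risk(adj_p, detected):
    """Per-node probability of infection along at least one transmission
    chain of length <= 2 from detected individuals, assuming the chains
    act independently.  adj_p[u][v] = transmission probability of (u, v)."""
    risk = {}
    for i in adj_p:
        if i in detected:
            continue
        no_infection = 1.0
        for d in detected:
            no_infection *= 1.0 - adj_p.get(d, {}).get(i, 0.0)   # d -> i
            for m, p_dm in adj_p.get(d, {}).items():             # d -> m -> i
                if m != i and m not in detected:
                    no_infection *= 1.0 - p_dm * adj_p.get(m, {}).get(i, 0.0)
        risk[i] = 1.0 - no_infection
    return risk

contacts = {"a": {"b": 0.3, "c": 0.1},
            "b": {"a": 0.3, "c": 0.2},
            "c": {"a": 0.1, "b": 0.2}}
print(infection_risk(contacts, detected={"a"}))  # risks for "b" and "c"
```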

Gabriela Bayolo Soler

[hal-04180133] Non parametric observation driven HMM

[...]

Hanna Bacave

[hal-04104489] Neutron spectrum unfolding using two architectures of convolutional neural networks

We deploy artificial neural networks to unfold neutron spectra from measured energy-integrated quantities. Neutron spectra are an important input for computing the absorbed dose and the kerma, serving radiation protection as well as nuclear safety. The architectures we build are inspired by convolutional neural networks. The first is made up of residual transposed-convolution blocks, while the second is a modified version of the U-net architecture. A large and balanced dataset is simulated following "realistic" physical constraints to train the architectures efficiently. Results show highly accurate predictions of neutron spectra ranging from thermal up to fast. The dataset processing, the attention paid to performance metrics, and the hyperparameter optimization underlie the robustness of the architectures.
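For concreteness, here is a minimal PyTorch sketch of the first flavour, residual blocks wrapped around transposed convolutions that upsample a few integrated readings into a binned spectrum; all layer sizes are illustrative guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.conv(x))     # residual connection

class UnfoldingNet(nn.Module):
    """Toy unfolder: energy-integrated readings -> discretized spectrum."""
    def __init__(self, n_inputs=8, n_bins=64, ch=16):
        super().__init__()
        self.ch = ch
        self.lift = nn.Linear(n_inputs, ch * 4)  # lift to a coarse signal
        stages = []
        for _ in range(4):                       # 4 -> 8 -> 16 -> 32 -> 64 bins
            stages += [nn.ConvTranspose1d(ch, ch, 4, stride=2, padding=1),
                       ResBlock(ch)]
        self.stages = nn.Sequential(*stages)
        self.head = nn.Conv1d(ch, 1, 1)
    def forward(self, x):
        h = self.lift(x).view(-1, self.ch, 4)
        return torch.relu(self.head(self.stages(h))).squeeze(1)  # nonneg flux

net = UnfoldingNet()
spectrum = net(torch.randn(2, 8))                # -> shape (2, 64)
```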

Maha Bouhadida

[tel-03975688] Theoretical contribution to the U-processes in Markov and dependent setting: asymptotic and bootstraps

The world is producing 2.5 quintillion bytes daily, known as big data. Volume, value, variety, velocity, and veracity define the five characteristics of big data, which represent a fundamental complexity for many machine learning algorithms, such as clustering, image recognition, and other modern learning techniques. With such data, many estimators no longer take the form of a sample mean (they are not linear); instead, they take the form of averages over m-tuples, known in probability and statistics as U-statistics. In this work, we treat collections of U-statistics, known as U-processes, for two types of dependent variables: Markovian data and locally stationary random variables. We have thus divided our work into two parts to address each type independently. In the first part, we deal with Markovian data. The approach relies on regenerative methods, which essentially involve dividing the sample into independent and identically distributed (i.i.d.) blocks of data, where each block corresponds to the path segments between two visits to an atom A, forming a renewal sequence. We derive the limit theory for Harris recurrent Markov chains over uniformly bounded and unbounded classes of functions. We show that the results can also be generalized to bootstrapped U-statistics. The bootstrap approach bypasses the problems faced with the asymptotic behavior due to the unknown parameters of the limiting distribution. Furthermore, the bootstrap technique we use in this thesis is the renewal bootstrap, where the bootstrap sample is formed by resampling the blocks. Since the original (non-bootstrapped) blocks are independent, most proofs reduce to the i.i.d. case. The main difficulties are related to the random size of the resampled blocks, which creates a problem with random stopping times. This problem is circumvented by replacing the random stopping times with their expectations. Also, since we resample from a random number of blocks, and the bootstrap equicontinuity can be verified by comparison with the initial process, the weak convergence of the bootstrap U-process must be treated very carefully. We successfully derive the results in the case of the k-Harris Markov chain. We extend all the above results to the case where the degree of the U-statistic grows with the sample size n, with the kernel varying in a class of functions. We provide the uniform limit theory for the renewal bootstrap for the infinite-degree U-process with the help of the decoupling technique combined with symmetrization techniques, in addition to the chaining inequality. Remaining in the Markovian setting, we extend the weighted bootstrap empirical processes to high-dimensional estimation. We consider an exchangeably weighted bootstrap of general function-indexed empirical U-processes. In the second part of this thesis, dependent data are represented by locally stationary random variables. Propelled by the increasing representation of data by functional or curve time series and the non-stationary behavior of the latter, we are interested in the conditional U-process of locally stationary functional time series. More precisely, we investigate the weak convergence of conditional U-processes in the locally stationary functional mixing data framework. We treat the weak convergence both when the class of functions is bounded and when it is unbounded, satisfying some moment conditions. Finally, we extend the asymptotic theory of conditional U-processes to the locally stationary functional random field $\{X_{s,A_n} : s \in R_n\}$ observed at irregularly spaced locations in $R_n = [0, A_n]^d \subset \mathbb{R}^d$, covering both the pure increasing domain and the mixed increasing domain settings. We treat the weak convergence both when the class of functions is bounded and when it is unbounded, satisfying some moment conditions. These results are established under fairly general structural conditions on the classes of functions and the underlying models.

Inass Soukarieh

[tel-03212765] Stochastic matching model on the general graphical structures

Motivated by a wide range of assemble-to-order systems and collaborative-economy applications, we introduce a stochastic matching model on hypergraphs and multigraphs, extending the model introduced by Mairesse and Moyal (2016). In this thesis, the stochastic matching model on general graph structures is defined as follows: we are given a general compatibility structure S = (V, S), consisting of a set of nodes V representing the classes of items and a set of edges S specifying the possible matches between classes. Items arrive at the system at random times, in an (assumed i.i.d.) sequence over the classes of V, and request to be matched according to their compatibility in S. Compatibility is by groups of two or more (hypergraphical case), or by groups of two with matches allowed between items of the same class (multigraphical case). Unmatched items are stored in the system, waiting for a future compatible item, and as soon as they are matched they leave the system together. Upon arrival, an item may find several possible matches; which items leave the system then depends on a matching policy to be specified. We study the stability of the stochastic matching model on hypergraphs for various hypergraphical topologies, and then the stability of the stochastic matching model on multigraphs, using the maximal subgraph and minimal blow-up to distinguish the stability zone.
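For intuition, here is a toy Python simulation of the model under a "match the longest" policy (one admissible choice of the policy mentioned above); the triangle compatibility graph and arrival law are hypothetical examples, not cases studied in the thesis.

```python
import random

def simulate_matching(classes, edges, arrivals):
    """Toy simulation of the stochastic matching model on a (multi)graph:
    items of i.i.d. classes arrive one by one; an arriving item is matched
    with the oldest stored compatible item ('match the longest') and the
    pair leaves together; otherwise the item is stored."""
    compatible = {c: set() for c in classes}
    for a, b in edges:                    # a == b allowed (multigraph loop)
        compatible[a].add(b)
        compatible[b].add(a)
    queue = []                            # stored items, oldest first
    for c in arrivals:
        for k, d in enumerate(queue):
            if d in compatible[c]:
                queue.pop(k)              # matched pair leaves together
                break
        else:
            queue.append(c)
    return queue

arrivals = random.Random(1).choices("abc", k=10_000)
left = simulate_matching("abc", [("a", "b"), ("b", "c"), ("a", "c")], arrivals)
print(len(left))   # on this complete (stable) graph the queue stays small
```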

Youssef Rahmé

[hal-03564379] Lebesgue Induction and Tonelli’s Theorem in Coq

Lebesgue integration is a well-known mathematical tool, used for instance in probability theory, real analysis, and numerical mathematics. Thus, its formalization in a proof assistant has to be designed to fit different goals and projects. Once the Lebesgue integral is formally defined and the first lemmas are proved, the question of the convenience of the formalization naturally arises. To check it, a useful extension is Tonelli's theorem, stating that the (double) integral of a nonnegative measurable function of two variables can be computed by iterated integrals, and allowing one to switch the order of integration. This article describes the formal definition and proof in Coq of product sigma-algebras, product measures and their uniqueness, the construction of iterated integrals, up to Tonelli's theorem. We also advertise the Lebesgue induction principle provided by an inductive type for nonnegative measurable functions.
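For reference, the formalized statement is the classical one: for $\sigma$-finite measure spaces $(X,\mu)$ and $(Y,\nu)$ and any nonnegative measurable $f$ on the product,

$$\int_{X\times Y} f \,\mathrm{d}(\mu\otimes\nu) \;=\; \int_X\Bigl(\int_Y f(x,y)\,\mathrm{d}\nu(y)\Bigr)\mathrm{d}\mu(x) \;=\; \int_Y\Bigl(\int_X f(x,y)\,\mathrm{d}\mu(x)\Bigr)\mathrm{d}\nu(y).$$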

Sylvie Boldo

[hal-03105815] Lebesgue integration. Detailed proofs to be formalized in Coq

To obtain the highest confidence in the correctness of numerical simulation programs implementing the finite element method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. Sobolev spaces are the mathematical framework in which most weak formulations of partial differential equations are stated, and where solutions are sought. These functional spaces are built on integration and measure theory. Hence, this chapter of functional analysis is a mandatory theoretical cornerstone for the definition of the finite element method. The purpose of this document is to provide the formal proof community with very detailed pen-and-paper proofs of the main results from integration and measure theory.

François Clément

[hal-03889276] A Coq Formalization of Lebesgue Induction Principle and Tonelli’s Theorem

Lebesgue integration is a well-known mathematical tool, used for instance in probability theory, real analysis, and numerical mathematics. Thus, its formalization in a proof assistant has to be designed to fit different goals and projects. Once the Lebesgue integral is formally defined and the first lemmas are proved, the question of the convenience of the formalization naturally arises. To check it, a useful extension is Tonelli's theorem, stating that the (double) integral of a nonnegative measurable function of two variables can be computed by iterated integrals, and allowing one to switch the order of integration. This article describes the formal definition and proof in Coq of product sigma-algebras, product measures and their uniqueness, the construction of iterated integrals, up to Tonelli's theorem. We also advertise the Lebesgue induction principle provided by an inductive type for nonnegative measurable functions.

Sylvie Boldo

[hal-03888607] Analysis of a one dimensional energy dissipating free boundary model with nonlinear boundary conditions. Existence of global weak solutions

This work is part of a general study on the long-term safety of the geological repository of nuclear waste. A diffusion equation with a moving boundary in one dimension is introduced and studied. The model describes some of the mechanisms involved in corrosion processes at the surface of carbon steel canisters in contact with a claystone formation. The main objective of the paper is to prove the existence of global weak solutions to the problem. For this, a semi-discrete-in-time minimizing movements scheme à la De Giorgi is introduced. First, the existence of solutions to the scheme is established, and then, using a priori estimates, it is proved that as the time step goes to zero these solutions converge, up to extraction, towards a weak solution to the free boundary model.

Benoît Merlet

[hal-03882839] Analysis of Lavrentiev-Finite Element Methods for Data Completion Problems

The variational finite element solution of Cauchy's problem, expressed in the Steklov-Poincaré framework and regularized by the Lavrentiev method, has been introduced and computationally assessed in [Inverse Problems in Science and Engineering, 18, 1063-1086 (2011)]. The present work concentrates on the numerical analysis of the semi-discrete problem. We perform the mathematical study of the error to rigorously establish the convergence of the global bias-variance error.

Faker Ben Belgacem

[hal-03858196] Uniqueness’ Failure for the Finite Element Cauchy-Poisson’s Problem

We focus on the ill-posed data completion problem and its finite element approximation, when recast via the Kohn-Vogelius variational duplication artifice and the Steklov-Poincaré condensation operators. We try to understand the useful hidden features of both the exact and discrete problems. When discretized with finite elements of degree one, the discrete and exact problems behave in diametrically opposite ways. Indeed, existence of the discrete solution is always guaranteed while its uniqueness may be lost. In contrast, the solution of the exact problem may not exist, but it is unique. We show how the existence of so-called "weak spurious modes" of the exact variational formulation is a source of instability and the reason why existence may fail. For the discrete problem, we find that the cause of non-uniqueness is actually the occurrence of "spurious modes". We track their fading effect asymptotically as the mesh size tends to zero. In order to restore uniqueness, we recall the discrete version of the Holmgren principle, introduced in [Azaïez et al, IPSE, 18, 2011], and we discuss the effect of the finite element mesh on uniqueness, using some basic material from graph theory.

F Ben Belgacem

[hal-02934256] The consistency and asymptotic normality of the kernel type expectile regression estimator for functional data

[...]

Mustapha Mohammedi

[tel-03774522] Semiparametric M-estimators and their applications to multiple change-point problems

In this dissertation we are concerned with semiparametric models. These models have success and impact in mathematical statistics due to their excellent scientific utility and intriguing theoretical complexity. In the first part of the thesis, we consider the problem of estimating a parameter θ, in Banach spaces, maximizing some criterion function which depends on an unknown nuisance parameter h, possibly infinite-dimensional. We show that the m out of n bootstrap, in a general setting, is weakly consistent under conditions similar to those required for weak convergence of non-smooth M-estimators. In this framework, delicate mathematical derivations are required to cope with estimators of the nuisance parameters inside non-smooth criterion functions. We then investigate an exchangeably weighted bootstrap for function-valued estimators defined as a zero point of a function-valued random criterion function. The main ingredient is the use of a differential identity that applies when the random criterion function is linear in terms of the empirical measure. A large number of bootstrap resampling schemes emerge as special cases of our setting. Examples of applications from the literature are given to illustrate the generality and usefulness of our results. The second part of the thesis is devoted to statistical models with multiple change-points. The main purpose of this part is to investigate the asymptotic properties of semiparametric M-estimators with non-smooth criterion functions of the parameters of multiple change-points models, for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters common to all segments. Consistency of the semiparametric M-estimators of the change-points is established and the rate of convergence is determined. The asymptotic normality of the semiparametric M-estimators of the parameters of the within-segment distributions is established under quite general conditions. We finally extend our study to the censored data framework. We investigate the performance of our methodologies for small samples through simulation studies.

Anouar Abdeldjaoued Ferfache

[tel-03752281] Contributions to the existence, uniqueness, and contraction of the solutions to some evolutionary partial differential equations

In this thesis, we are mainly interested in the theoretical and numerical study of certain equations that describe the dynamics of dislocation densities. Dislocations are microscopic defects in materials, which move under the effect of an external stress. In a first work, we prove a global-in-time existence result for a discontinuous solution to a diagonal hyperbolic system, which is not necessarily strictly hyperbolic, in one space dimension. In a second work, we broaden our scope by proving a similar result for a non-linear eikonal system, which is in fact a generalization of the hyperbolic system studied first. We also prove the existence and uniqueness of a continuous solution to the eikonal system. We then study this system numerically in a third work, proposing a finite difference scheme to approximate it, for which we prove convergence to the continuous problem, and we strengthen these results with numerical simulations. In a different direction, we are interested in the theory of differential contraction for evolutionary equations. By introducing a new distance, we construct a new family of contracting positive solutions to the evolutionary p-Laplacian equation.

Maryam Al Zohbi

[tel-03746986] Theoretical and numerical study of systems modeling the dynamics of dislocation densities

In this thesis, we are interested in the theoretical and numerical analysis of the dynamics of dislocation densities. Dislocations are linear defects that move through crystals when the latter are subjected to external stresses. In general, the dynamics of dislocation densities is described by a system of transport equations in which the velocity fields depend non-locally on the dislocation densities. Our work first focuses on the study of a one-dimensional (2 × 2) system of Hamilton-Jacobi type, derived from a two-dimensional system proposed by Groma and Balogh in 1999. For this model, we prove a global existence and uniqueness result. In addition, we carry out a numerical study of this problem, supplemented with increasing initial conditions, by proposing an implicit finite difference scheme whose convergence we prove. Then, inspired by the work carried out for the dynamics of dislocation densities, we develop a more general theory yielding a similar existence and uniqueness result for one-dimensional eikonal-type systems. Considering increasing initial conditions, we carry out a numerical study of this system. Under certain monotonicity conditions on the velocity, we propose an implicit finite difference scheme that computes the discrete solution and thereby simulates the dynamics of dislocations through this model.

Aya Oussaily

[hal-03714164] Study of an entropy dissipating finite volume scheme for a nonlocal cross-diffusion system

In this paper we analyse a finite volume scheme for a nonlocal version of the Shigesada-Kawazaki-Teramoto (SKT) cross-diffusion system. We prove the existence of solutions to the scheme, derive qualitative properties of the solutions, and prove its convergence. The proofs rely on a discrete entropy-dissipation inequality, discrete compactness arguments, and a novel adaptation of the so-called duality method at the discrete level. Finally, through numerical experiments, we investigate the influence of the nonlocality in the system on the convergence properties of the scheme, on its quality as an approximation of the local system, and on the development of diffusive instabilities.

Maxime Herda

[hal-03700112] Fast calibration of weak FARIMA models

In this paper, we investigate the asymptotic properties of Le Cam's one-step estimator for weak Fractionally AutoRegressive Integrated Moving-Average (FARIMA) models. For these models, the noise terms are uncorrelated but not necessarily independent, nor martingale differences. We show, under some regularity assumptions, that the one-step estimator is strongly consistent and asymptotically normal, with the same asymptotic variance as the least squares estimator. We show through simulations that the proposed estimator reduces computational time compared with the least squares estimator. An application providing remotely computed indicators for time series is proposed.

Samir Ben Hariz

[cea-03631609] Sharp interface capturing in compressible multi-material flows with a diffuse interface method

Compressible multi-material flows are omnipresent in scientific and industrial applications: from supernova explosions in space and high-speed flows in jet and rocket propulsion to underwater explosions and vapor explosions in post-accident situations in nuclear reactors, their applications cover almost all aspects of classical fluid physics. In the numerical simulation of these flows, interfaces play a crucial role. A poor numerical resolution of the interfaces can make it very difficult to account for physics such as material separation, the location of shocks and contact discontinuities, and the transfer of mass, momentum and heat between different materials/phases. Owing to this importance, sharp interface capturing remains a very active area of research in computational physics. To address this problem, in this paper we focus on the Interface Capturing (IC) strategy, making use of a newly developed Diffuse Interface Method (DIM) called Multidimensional Limiting Process-Upper Bound (MLP-UB). Our analysis shows that this method is easy to implement, easily extendable to multiple space dimensions, can deal with any number of material interfaces, and produces sharp shape-preserving interfaces, along with their accurate interaction with shocks and contact discontinuities. Numerical experiments show very good results even over rather coarse meshes.

Shambhavi Nandan

[tel-03583899] On thermo-acoustic and photo-acoustic Imaging of small absorbers

This thesis is divided into two parts. The first part is dedicated to the study of inverse problems for wave equations and their application to medical imaging. More precisely, we focus our work on the study of the photo-acoustic and thermo-acoustic tomography techniques. These are multi-wave imaging techniques based on the photo-acoustic effect, discovered in 1880 by Alexander Graham Bell. The inverse problem we are concerned with throughout this thesis is that of recovering small absorbers in a bounded domain $\Omega \subset \mathbb{R}^3$. We provide a direct reconstruction method based on the algebraic algorithm that was first developed in [...], without following the quantitative photo-acoustic tomography (qPAT) approach. This algorithm allows us to reconstruct the number of absorbers and their locations from a single Cauchy data set, in addition to some information on optical parameters, such as the conductivity and the absorption coefficient, that can serve as important diagnostic information for detecting tumors. The main difference between PAT and TAT lies in the type of optical pulse used. In PAT, high-frequency radiation is delivered to the biological tissue to be imaged, while in TAT low-frequency radiation is used, which makes some differences in the physical and mathematical settings of the problem. In this dissertation we study both mathematical models and propose reconstruction algorithms for the two inverse problems. The second part of this thesis is devoted to the study of non-autonomous semilinear elliptic equations. We study the existence of radial solutions in $\mathbb{R}^n$ with nonzero limiting behavior.

Hanin Al Jebawy

[cea-03541209] Molybdenum solubility in aluminium nitrate solutions

For over 60 years, research reactors (RR, or RTR for research testing reactors) have been used as neutron sources for research, radioisotope production ($^{99}$Mo/$^{99m}$Tc), nuclear medicine, materials characterization, etc. Currently, over 240 of these reactors are in operation in 56 countries. They are simpler than power reactors and operate at lower temperature (cooled to below 100°C). The fuel assemblies are typically plates or cylinders of uranium-aluminium alloy (U-Al) coated with pure aluminium. These fuels can be processed in the AREVA La Hague plant after batch dissolution in concentrated nitric acid and mixing with UOX fuel streams. The aim of this study is to accurately measure the solubility of molybdenum in nitric acid solutions containing high concentrations of aluminium. The higher the molybdenum solubility, the more flexible the reprocessing operations, especially when the spent fuels contain high amounts of molybdenum. To be most representative of the dissolution process, uranium-molybdenum alloy and molybdenum metal powder were dissolved in solutions of aluminium nitrate at the nominal dissolution temperature. The experiments showed complete dissolution of the metallic elements after 30 minutes of stirring, even when molybdenum metal was added in excess. After an induction period, a slow precipitation of molybdic acid occurs over about 15 hours. The data obtained show that the molybdenum solubility decreases with increasing aluminium concentration. The solubility law follows an exponential relation around 40 g/L of aluminium, with a high coefficient of determination. Molybdenum solubility is not impacted by the presence of gadolinium, nor by an increasing concentration of uranium.

Xavier Hérès

[tel-03530823] Properties of words and competing risk processes under semi-Markov hypothesis

Our thesis is dedicated, in large part, to solving certain problems in biology (biological sequences, and lifespans using the competing-risk framework) under semi-Markov hypotheses. In recent years, computing the properties of words in random sequences has become a topic of interest at the intersection of mathematics and biology. In the literature, a vast number of methods have tackled this problem under the assumption that sequences of symbols are modeled by Markov processes. Nevertheless, the Markovian hypothesis has some disadvantages. In Markov processes, the sojourn time is modeled by the exponential (geometric) distribution in continuous (discrete) time. By contrast, in semi-Markov processes the sojourn time in a state can be modeled by any probability law. Therefore, in order to propose a more general approach to computing the properties of words in a random sequence, in this PhD work we consider that biological sequences are modeled by discrete semi-Markov processes. We also compute the average number of times that the elements of a specific set of words appear in a sequence of letters. To achieve our goal, we use the strong law of large numbers and we provide the central limit theorem. As an application of our proposed model, we locate a particular enzyme in a bacteriophage DNA sequence. Competing-risk problems form another interesting topic in the lifespan domain. In general, competing-risk problems have been dealt with via statistical approaches. In this thesis, we present competing-risk models within a semi-Markov framework. We consider continuous- and discrete-time semi-Markov processes with a finite number of transient and absorbing states. Each absorbing state represents a failure mode (in reliability of a system) or a cause of death of an individual (in survival analysis). We express the probability that a failure occurs at a given time due to a unique cause. We give the joint distribution of the lifetime and the failure cause via the transition function of the semi-Markov process, in continuous and discrete time respectively. Some examples are given for illustration. We also present a method for solving continuous-time Markov renewal equations based on well-established algorithms for their discrete-time counterparts. The great advantage of this approach is that the infinite series of the renewal function in continuous time is replaced, in discrete time, by a finite series. Results for error estimation are also established. To illustrate this approach, we propose a numerical application concerning cyber-attacks, where the conditional transition functions are of Weibull type.
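As an illustration of the discrete-time advantage mentioned above, a scalar discrete renewal equation can be solved exactly by a finite forward recursion; the geometric kernel below is a hypothetical toy choice, not data from the thesis.

```python
import numpy as np

def solve_renewal(q, g):
    """Solve the discrete-time (scalar) renewal equation
        psi(n) = g(n) + sum_{k=1}^{n} q(k) * psi(n - k)
    by forward recursion -- a finite computation, in contrast with the
    infinite renewal series of the continuous-time setting."""
    N = len(g)
    psi = np.zeros(N)
    for n in range(N):
        psi[n] = g[n] + sum(q[k] * psi[n - k] for k in range(1, n + 1))
    return psi

# Toy example: geometric sojourn kernel q(k) = 0.3 * 0.7**(k-1), k >= 1.
N = 50
q = np.zeros(N)
q[1:] = 0.3 * 0.7 ** (np.arange(1, N) - 1)
print(solve_renewal(q, g=np.ones(N))[:5])
```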

Brenda Ivette Garcia Maya

[hal-03273118] A Hybrid High-Order method for incompressible flows of non-Newtonian fluids with power-like convective behaviour

In this work, we design and analyze a Hybrid High-Order (HHO) discretization method for incompressible flows of non-Newtonian fluids with power-like convective behaviour. We work under general assumptions on the viscosity and convection laws, that are associated with possibly different Sobolev exponents r ∈ (1, ∞) and s ∈ (1, ∞). After providing a novel weak formulation of the continuous problem, we study its well-posedness highlighting how a subtle interplay between the exponents r and s determines the existence and uniqueness of a solution. We next design an HHO scheme based on this weak formulation and perform a comprehensive stability and convergence analysis, including convergence for general data and error estimates for shear-thinning fluids and small data. The HHO scheme is validated on a complete panel of model problems.

Daniel Castanon Quiroz

[hal-03471095] A Coq Formalization of Lebesgue Integration of Nonnegative Functions

Integration, just as much as differentiation, is a fundamental calculus tool that is widely used in many scientific domains. Formalizing the mathematical concept of integration and the associated results in a formal proof assistant helps in providing the highest confidence on the correctness of numerical programs involving the use of integration, directly or indirectly. By its capability to extend the (Riemann) integral to a wide class of irregular functions, and to functions defined on more general spaces than the real line, the Lebesgue integral is perfectly suited for use in mathematical fields such as probability theory, numerical mathematics, and real analysis. In this article, we present the Coq formalization of $\sigma$-algebras, measures, simple functions, and integration of nonnegative measurable functions, up to the full formal proofs of the Beppo Levi (monotone convergence) theorem and Fatou's lemma. More than a plain formalization of the known literature, we present several design choices made to balance the harmony between mathematical readability and usability of Coq theorems. These results are a first milestone toward the formalization of $L^p$ spaces such as Banach spaces.

Sylvie Boldo

[hal-03194113] A Coq Formalization of Lebesgue Integration of Nonnegative Functions

Integration, just as much as differentiation, is a fundamental calculus tool that is widely used in many scientific domains. Formalizing the mathematical concept of integration and the associated results in a formal proof assistant helps in providing the highest confidence on the correctness of numerical programs involving the use of integration, directly or indirectly. By its capability to extend the (Riemann) integral to a wide class of irregular functions, and to functions defined on more general spaces than the real line, the Lebesgue integral is perfectly suited for use in mathematical fields such as probability theory, numerical mathematics, and real analysis. In this article, we present the Coq formalization of $\sigma$-algebras, measures, simple functions, and integration of nonnegative measurable functions, up to the full formal proofs of the Beppo Levi (monotone convergence) theorem and Fatou's lemma. More than a plain formalization of the known literature, we present several design choices made to balance the harmony between mathematical readability and usability of Coq theorems. These results are a first milestone toward the formalization of $L^p$ spaces such as Banach spaces.

Sylvie Boldo

[hal-03228076] Cuban history of CRF19 recombinant subtype of HIV-1

CRF19 is a recombinant form of HIV-1 subtypes D, A1 and G, which was first sampled in Cuba in 1999 but was already present there in the 1980s. CRF19 has been reported almost uniquely in Cuba, where it accounts for ∼25% of new HIV-positive patients and causes rapid progression to AIDS (∼3 years). We analyzed a large data set comprising ∼350 pol and env sequences sampled in Cuba over the last 15 years and ∼350 from the Los Alamos database. This data set contained CRF19 sequences (∼315) as well as A1, D and G sequences. We performed and combined analyses for the three A1, G and D regions, using fast maximum likelihood approaches, including: (1) phylogeny reconstruction, (2) spatio-temporal analysis of the virus spread, and ancestral character reconstruction for (3) transmission mode and (4) drug resistance mutations (DRMs). We verified these results with a Bayesian approach. This allowed us to acquire new insights into the origin and transmission patterns of CRF19. We showed that CRF19 recombined between 1966 and 1977, most likely in the Cuban community stationed in the Congo region. We further investigated the spread of CRF19 at the level of Cuban provinces, and discovered that the epidemic started in the 1970s, most probably in Villa Clara, that it was at first carried by heterosexual transmission, and that it then spread quickly in the 1980s within the "men having sex with men" (MSM) community, with multiple transmissions back to heterosexuals. The analysis of the transmission patterns of common DRMs found very few resistance transmission clusters. Our results show a very early introduction of CRF19 in Cuba, which could explain its local epidemiological success. Ignited by a major founder event, the epidemic then followed a pattern similar to other subtypes and CRFs in Cuba. The reason for the short time to AIDS remains to be understood and requires specific surveillance, in Cuba and elsewhere.

Anna Zhukova

[tel-03119538] Modeling, estimation and simulation in two statistical models: quantile regression and blind deconvolution

This thesis is dedicated to the estimation of two statistical models: the simultaneous quantile regression model and the blind deconvolution model. It therefore consists of two parts. In the first part, we are interested in estimating several quantiles simultaneously in a regression context via the Bayesian approach. Assuming that the error term has an asymmetric Laplace distribution and using the relation between two distinct quantiles of this distribution, we propose a simple, fully Bayesian method that satisfies the non-crossing property of quantiles. For implementation, we use a Metropolis-Hastings-within-Gibbs algorithm to sample the unknown parameters from their full conditional distributions. The performance and competitiveness of the method against other alternatives are shown on simulated examples. In the second part, we focus on recovering both the inverse filter and the noise level of a noisy blind deconvolution model in a parametric setting. After characterizing both the true noise level and the inverse filter, we provide a new estimation procedure that is simpler to implement than other existing methods. We also consider the estimation of the unknown discrete distribution of the input signal. We derive strong consistency and asymptotic normality for all our estimates. Including a comparison with another method, we perform a comprehensive simulation study that demonstrates empirically the computational performance of our estimation procedures.

Josephine Merhi Bleik

[hal-03112758] Hypotheses testing and posterior concentration rates for semi-Markov processes

In this paper, we adopt a nonparametric Bayesian approach and investigate the asymptotic behavior of the posterior distribution in continuous time and general state space semi-Markov processes. In particular, we obtain posterior concentration rates for semi-Markov kernels. For the purposes of this study, we construct robust statistical tests between Hellinger balls around semi-Markov kernels and present some specifications to particular cases, including discrete-time semi-Markov processes and finite state space Markov processes. The objective of this paper is to provide sufficient conditions on priors and semi-Markov kernels that enable us to establish posterior concentration rates.

I. Votsi

[hal-02635638] Characterization of the chemical composition of aerosols produced by laser cutting of corium simulants

In the context of the decommissioning of the Fukushima Daiichi reactors, several projects have been funded by the Japanese government to prepare the corium retrieval operations. Within this framework, a joint study conducted by ONET Technologies and the laboratories of CEA and IRSN demonstrated the feasibility of using the laser cutting technique and estimated the aerosol source term thus generated. Two corium simulants, synthesized and characterized by CEA-Cadarache, were subjected to laser cutting tests in air and under water in the DELIA facility of CEA Saclay, and the emitted aerosols were characterized by IRSN. The characterization of the emitted particles in terms of concentration and size distribution provided information for predicting, in particular, particle transport and deposition, but knowledge of the chemical composition by size class is necessary for better management of occupational and environmental risks. This article presents the results concerning the characterization of the chemical composition of the aerosol from a corium simulant, under laser cutting conditions in air, and the associated size distribution.

Emmanuel Porcheron

[hal-02274493] A posteriori estimates distinguishing the error components and adaptive stopping criteria for numerical approximations of parabolic variational inequalities

We consider in this paper a model parabolic variational inequality. This problem is discretized with conforming Lagrange finite elements of order $p ≥ 1$ in space and with the backward Euler scheme in time. The nonlinearity coming from the complementarity constraints is treated with any semismooth Newton algorithm and we take into account in our analysis an arbitrary iterative algebraic solver. In the case $p = 1$, when the system of nonlinear algebraic equations is solved exactly, we derive an a posteriori error estimate on both the energy error norm and a norm approximating the time derivative error. When $p ≥ 1$, we provide a fully computable and guaranteed a posteriori estimate in the energy error norm which is valid at each step of the linearization and algebraic solvers. Our estimate, based on equilibrated flux reconstructions, also distinguishes the discretization, linearization, and algebraic error components. We build an adaptive inexact semismooth Newton algorithm based on stopping the iterations of both solvers when the estimators of the corresponding error components do not affect significantly the overall estimate. Numerical experiments are performed with the semismooth Newton-min algorithm and the semismooth Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative algebraic solver to illustrate the strengths of our approach.
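For concreteness, here is a minimal Python sketch of the plain semismooth Newton-min iteration on a discrete complementarity system (an obstacle problem with hypothetical sizes and data); the paper's adaptive inexact variant additionally replaces the exact linear solve by an iterative solver such as GMRES, stopped by estimator-based criteria.

```python
import numpy as np

def newton_min(A, f, g, tol=1e-10, max_iter=50):
    """Semismooth Newton-min for the complementarity system
        min(u - g, A u - f) = 0   (componentwise).
    Where u - g <= A u - f the row u_i = g_i is enforced (active set),
    elsewhere the row (A u)_i = f_i is enforced."""
    n = len(f)
    u = g.copy()
    for _ in range(max_iter):
        F = np.minimum(u - g, A @ u - f)
        if np.linalg.norm(F) < tol:
            break
        active = (u - g) <= (A @ u - f)          # rows where min picks u - g
        J = np.where(active[:, None], np.eye(n), A)
        rhs = np.where(active, g, f)
        u = np.linalg.solve(J, rhs)
        # The adaptive inexact variant would solve this system iteratively
        # and stop once the algebraic error estimator is dominated by the
        # discretization estimator.
    return u

n = 50                                            # 1D Laplacian with obstacle
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2
x = np.linspace(0, 1, n + 2)[1:-1]
u = newton_min(A, f=-10 * np.ones(n), g=-0.05 - 0.1 * np.abs(x - 0.5))
print(u.min())                                    # solution stays above g
```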

Jad Dabaghi

[hal-01666845] Adaptive inexact semismooth Newton methods for the contact problem between two membranes

We propose an adaptive inexact version of a class of semismooth Newton methods that is aware of the continuous (variational) level. As a model problem, we study the system of variational inequalities describing the contact between two membranes. This problem is discretized with conforming finite elements of order $p \geq 1$, yielding a nonlinear algebraic system of variational inequalities. We consider any iterative semismooth linearization algorithm, like the Newton-min or the Newton--Fischer--Burmeister, which we complement by any iterative linear algebraic solver. We then derive an a posteriori estimate on the error between the exact solution at the continuous level and the approximate solution, valid at any step of the linearization and algebraic resolutions. Our estimate is based on flux reconstructions in discrete subspaces of $\mathbf{H}(\mathrm{div}, \Omega)$ and on potential reconstructions in discrete subspaces of $H^1(\Omega)$ satisfying the constraints. It distinguishes the discretization, linearization, and algebraic components of the error. Consequently, we can formulate adaptive stopping criteria for both solvers, giving rise to an adaptive version of the considered inexact semismooth Newton algorithm. Under these criteria, the efficiency of the leading estimates is also established, meaning that we prove them equivalent to the error up to a generic constant. Numerical experiments for the Newton-min algorithm in combination with the GMRES algebraic solver confirm the efficiency of the developed adaptive method.

Jad Dabaghi

[tel-02536500] Model reduction and data-driven learning of parametrized spatio-temporal solutions: application to PDE-ODE couplings

In this thesis, we are interested in learning an accurate and stable reduced model from data corresponding to the solution of a partial differential equation (PDE) and generated by a high-fidelity (HF) solver. To this end, we use the Dynamic Mode Decomposition (DMD) method together with the Proper Orthogonal Decomposition (POD) reduction method. The learned reduced model is easily interpretable, and an a posteriori spectral analysis of this model makes it possible to detect anomalies during the learning phase. Extensions to PDE-ODE couplings and to PDEs of order two in time are presented. Learning a reduced model in the case of a dynamical system controlled by switching, where the control rule is learned with an artificial neural network (ANN), is also treated. One drawback of POD reduction is that the low-dimensional representation is hard to interpret. We therefore propose the use of the Empirical Interpolation Method (EIM). The low-dimensional representation is then intelligible and consists of a restriction of the solution to selected points. This approach is subsequently extended to PDEs depending on a parameter, where the Kernel Ridge Regression (KRR) algorithm allows us to learn the solution manifold. We thus present the learning of a parametrized reduced model. Extensions to noisy data and to nonlinear evolution PDEs are presented as perspectives.
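As an illustration of the DMD building block used here, the following minimal numpy sketch (toy data, hypothetical sizes) learns a rank-r linear reduced model from snapshot pairs and exposes its spectrum for the kind of a posteriori analysis mentioned above.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: given snapshot pairs (columns of X mapped to columns of Y),
    learn a rank-r linear reduced model  z' = Atilde z  in POD coordinates."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    U, S, Vt = U[:, :r], S[:r], Vt[:r]          # POD truncation
    Atilde = U.T @ Y @ Vt.T / S                 # reduced operator
    return U, Atilde, np.linalg.eigvals(Atilde) # a posteriori spectrum

# Toy data: snapshots of a damped travelling wave (hypothetical example).
x = np.linspace(0, 2 * np.pi, 200)[:, None]
t = np.arange(100)[None, :]
snaps = np.exp(-0.01 * t) * np.sin(x - 0.1 * t)
U, Atilde, eigvals = dmd(snaps[:, :-1], snaps[:, 1:], r=4)
print(np.abs(eigvals))   # moduli <= 1 indicate a stable learned model
```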

Tarik Fahlaoui

[hal-01349456] Approaching a mountain territory: human occupation and pedo-sedimentary context of the slopes of the Petit-Saint-Bernard pass, from Prehistory to Antiquity

As part of a multi-year program, test-pit campaigns were carried out on both slopes of the Petit-Saint-Bernard pass (2188 m, western Alps), between 750 and 3000 m in altitude. The working method sets aside surface surveys in favor of numerous hand-dug test pits, located in selected topographic contexts and excavated down to the base of the Holocene fills. The results obtained document, over the long term, the evolution of pedo-sedimentary dynamics and the human frequentation of the different altitudinal belts. The significance of the collected archaeological data is discussed with respect to the state of knowledge in a comparison zone comprising the neighboring valleys of the western Alps, with respect to existing settlement models, and with respect to the taphonomic indications provided by the pedo-sedimentary study. A program of complementary analyses intended to clarify the context, the taphonomy and the functional status [...]

ano.nymous@ccsd.cnrs.fr.invalid (Pierre-Jérôme Rey), Pierre-Jérôme Rey

[tel-02470901] Analysis of an elasto-visco-plastic model describing dislocation dynamics

In this thesis, we are interested in the theoretical and numerical analysis of the dynamics of dislocation densities, where dislocations are crystalline defects appearing at the microscopic scale in metallic alloys. In particular, we study the Groma-Czikor-Zaiser (GCZ) model and the Groma-Balog (GB) model. The first is a system of parabolic-type equations, whereas the second is a system of nonlinear Hamilton-Jacobi equations. First, we prove an existence and uniqueness result for a regular solution of the GCZ model, using a comparison principle and a fixed-point argument. Next, we establish a global-in-time existence result for the GB model, based on the notion of discontinuous viscosity solutions, a new estimate of the total variation of the solution, and the finite speed of propagation of the governing equations. This result is also extended to the case of general systems of Hamilton-Jacobi equations. Finally, we propose a semi-explicit numerical scheme for the discretization of the GB model. Building on the theoretical study, we prove that the discrete solution converges toward the continuous solution, and we establish an error estimate between the continuous and numerical solutions. Simulations showing the robustness of the numerical scheme are also presented.

ano.nymous@ccsd.cnrs.fr.invalid (Vivian Rizik), Vivian Rizik

[hal-01919067] A posteriori error estimates for a compositional two-phase flow with nonlinear complementarity constraints

In this work, we develop an a-posteriori-steered algorithm for a compositional two-phase flow with exchange of components between the phases in porous media. As a model problem, we choose the two-phase liquid-gas flow with appearance and disappearance of the gas phase, formulated as a system of nonlinear evolution partial differential equations with nonlinear complementarity constraints. The discretization of our model is based on the backward Euler scheme in time and the finite volume scheme in space. The resulting nonlinear system is solved via an inexact semismooth Newton method. The key ingredients for the a posteriori analysis are the discretization, linearization, and algebraic flux reconstructions, which allow us to devise estimators for each error component. These enable us to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver whenever the corresponding error components no longer affect the overall error significantly. Numerical experiments are performed using the Newton-min algorithm as well as the Newton-Fischer-Burmeister algorithm, in combination with the GMRES iterative linear solver, to show the efficiency of the proposed adaptive method.
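
For orientation, a minimal sketch of the Fischer-Burmeister reformulation on which the Newton-Fischer-Burmeister variant is built; the arrays below are generic stand-ins for the paper's phase-appearance unknowns.

    # The Fischer-Burmeister C-function turns the complementarity constraints
    #   a >= 0,  b >= 0,  a*b = 0
    # into the single semismooth equation  phi(a, b) = 0.
    import numpy as np

    def fischer_burmeister(a, b):
        # phi(a, b) = a + b - sqrt(a^2 + b^2) vanishes iff a >= 0, b >= 0, a*b = 0
        return a + b - np.sqrt(a**2 + b**2)

    a = np.array([0.0, 2.0, 0.0])
    b = np.array([3.0, 0.0, 0.0])
    print(fischer_burmeister(a, b))                               # ~0 on complementary pairs
    print(fischer_burmeister(np.array([-1.0]), np.array([2.0])))  # nonzero: infeasible pair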

ano.nymous@ccsd.cnrs.fr.invalid (Ibtihel Ben Gharbia), Ibtihel Ben Gharbia

[cea-02360117] Estimating Stochastic Dynamical Systems Driven by a Continuous-Time Jump Markov Process

We discuss the use of a continuous-time jump Markov process as the driving process in stochastic differential systems. Results are given on the estimation of the infinitesimal generator of the jump Markov process, when considering sample paths on random time intervals. These results are then applied within the framework of stochastic dynamical systems modeling and estimation. Numerical examples are given to illustrate both consistency and asymptotic normality of the estimator of the infinitesimal generator of the driving process. We apply these results to fatigue crack growth modeling as an example of a complex dynamical system, with applications to reliability analysis.
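
A minimal sketch of the standard trajectory-based MLE of the infinitesimal generator (off-diagonal rate = jump count divided by holding time), applied to a simulated two-state process; the counting form is textbook, and the data are illustrative.

    # MLE of a jump Markov process generator from one observed trajectory:
    #   q_ij = (number of i -> j jumps) / (total time spent in i),  i != j.
    import numpy as np

    def generator_mle(states, times, n_states):
        # states[k] is occupied on the interval [times[k], times[k+1])
        N = np.zeros((n_states, n_states))   # jump counts
        T = np.zeros(n_states)               # holding times
        for k in range(len(states) - 1):
            T[states[k]] += times[k + 1] - times[k]
            N[states[k], states[k + 1]] += 1.0
        Q = N / np.where(T > 0, T, np.inf)[:, None]
        Q[np.diag_indices(n_states)] = -Q.sum(axis=1)
        return Q

    # simulate a two-state process with true rates q01 = 1.0, q10 = 0.5
    rng = np.random.default_rng(1)
    s, t, states, times = 0, 0.0, [0], [0.0]
    rates = [1.0, 0.5]
    while t < 500.0:
        t += rng.exponential(1.0 / rates[s])
        s = 1 - s
        states.append(s)
        times.append(t)
    print(generator_mle(states, times, 2))   # approaches [[-1, 1], [0.5, -0.5]]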

ano.nymous@ccsd.cnrs.fr.invalid (Julien Chiquet), Julien Chiquet

[hal-02182974] Characterization of palladium species after γ-irradiation of a TBP–alkane–Pd(NO3)2 system

The γ-irradiation of a biphasic system composed of tri-n-butylphosphate in hydrogenated tetrapropylene (TPH) in contact with palladium(II) nitrate in nitric acid aqueous solution led to the formation of two precipitates. A thorough characterization of these solids was performed by means of various analytical techniques, including X-Ray Diffraction (XRD), Thermal Gravimetric Analysis coupled with Differential Scanning Calorimetry (TGA-DSC), X-ray Photoelectron Spectroscopy (XPS), InfraRed (IR), Raman and Nuclear Magnetic Resonance (NMR) spectroscopy, and ElectroSpray Ionization Mass Spectrometry (ESI-MS). Investigations showed that the two precipitates exhibit quite similar structures. They are composed of at least two compounds: palladium cyanide and palladium species containing ammonium, phosphorus or carbonyl groups. Several mechanisms are proposed to explain the formation of Pd(CN)2.

ano.nymous@ccsd.cnrs.fr.invalid (Bénédicte Simon), Bénédicte Simon

[hal-02153384] Hypotheses testing and posterior concentration rates for semi-Markov processes

In this paper, we adopt a nonparametric Bayesian approach and investigate the asymptotic behavior of the posterior distribution in continuous-time, general state space semi-Markov processes. In particular, we obtain posterior concentration rates for semi-Markov kernels. For the purposes of this study, we construct robust statistical tests between Hellinger balls around semi-Markov kernels and present specializations to particular cases, including discrete-time semi-Markov processes and finite state space Markov processes. The objective of this paper is to provide sufficient conditions on priors and semi-Markov kernels that enable us to establish posterior concentration rates.

ano.nymous@ccsd.cnrs.fr.invalid (V S Barbu), V S Barbu

[tel-01084237] Contribution to the physical modelling of actinide quantification by electron probe microanalysis

Electron probe microanalysis (EPMA) makes it possible to quantify, with great accuracy, the elemental concentrations of samples of unknown composition. It can be used, for example, to quantify the actinides present in fresh or irradiated nuclear fuels, to assist in nuclear waste management, or to date certain rocks. Unfortunately, such quantitative analyses are not always feasible, owing to the unavailability of reference standards for some actinides. To overcome this difficulty, a so-called "standardless" analysis method can be employed using virtual standards. The latter are obtained from empirical formulas or from calculations based on theoretical models. However, these calculations require the knowledge of physical parameters that are generally poorly known, as is the case for X-ray production cross sections. Accurate knowledge of these cross sections is required in many applications, such as particle transport codes and Monte Carlo simulations. These computational codes are widely used in medicine, particularly in medical imaging and in electron beam treatments. In astronomy, these data are used in simulations to predict the compositions of stars and galactic clouds, as well as the formation of planetary systems. In this work, the production cross sections of the L and M lines of lead, thorium and uranium were measured by electron impact on self-supporting thin targets with thicknesses ranging from 0.2 to 8 nm. The experimental results were compared with theoretical predictions of ionization cross sections computed within the distorted-wave Born approximation (DWBA) and with the predictions of analytical formulas used in practical applications. The ionization cross sections were converted into X-ray production cross sections using atomic relaxation parameters taken from the literature. The theoretical results of the DWBA model are in excellent agreement with the experimental results, which confirms the predictions of this model and validates its use for the computation of virtual standards. The predictions of this model were implemented in the Monte Carlo code PENELOPE in order to compute the X-ray intensity produced by pure actinide standards. The calculations were performed for elements with atomic numbers 89 ≤ Z ≤ 99 and for accelerating voltages ranging from the ionization threshold up to 40 kV, in steps of 0.5 kV. For practical use, the intensities computed for the most intense L and M lines were gathered in a database. The predictions of the virtual standards thus obtained were compared with measurements performed on samples of known composition (U, UO2, ThO2, ThF4, PuO2…) and with data acquired during previous measurement campaigns. The quantification of actinides using these virtual standards showed good agreement with the expected results. This confirms the reliability of the developed virtual standards and demonstrates that actinide quantification by electron probe microanalysis is feasible without actinide standards and with a good level of confidence.

ano.nymous@ccsd.cnrs.fr.invalid (Aurélien Moy), Aurélien Moy

[cea-02023046] Aerosols released during the laser cutting of a Fukushima Daiichi debris simulant

One of the important challenges for the decommissioning of the damaged reactors of the Fukushima Daiichi Nuclear Power Plant is the safe retrieval of the fuel debris, or corium. In particular, it is essential to investigate the cutting conditions both in air and underwater at different water levels. Among cutting techniques, the laser technique is well suited to cutting a material such as corium, which has an irregular shape and a heterogeneous composition. A French consortium (ONET Technologies, CEA and IRSN) is being subsidized by the Japanese government to carry out R&D on the laser cutting of Fukushima Daiichi fuel debris and on dust collection technology. Debris simulants have been manufactured on the PLINIUS platform to represent molten core-concrete interaction, as estimated from Fukushima Daiichi calculations. In this simulant, uranium is replaced by hafnium and the major fission products are replaced by their natural isotopes. During laser cutting experiments in the DELIA facility, aerosols were collected using filters and impactors, and the collected aerosols were analyzed. Both chemical analyses (dissolution + ICP-MS and ICP-AES) and microscopic analyses (SEM-EDS) are presented and discussed. These data provide insights into the expected dust releases during cutting and can be converted into radioactivity estimates. They have also been successfully compared to thermodynamic calculations with the NUCLEA database.

ano.nymous@ccsd.cnrs.fr.invalid (Christophe Journeau), Christophe Journeau

[hal-01700663] A Lagrange multiplier method for a discrete fracture model for flow in porous media

In this work we present a novel discrete fracture model for single-phase Darcy flow in porous media with fractures of co-dimension one, which introduces an additional unknown at the fracture interface. Inspired by the fictitious domain method, this Lagrange multiplier couples the fracture and matrix domains and represents a local exchange of fluid. The multipliers naturally impose the equality of the pressures at the fracture interface. The model is thus appropriate for domains with fractures of permeability higher than that of the surrounding bulk domain. In particular, the novel approach allows for independent, regular meshing of the fracture and matrix domains and therefore avoids the generation of small elements. We show existence and uniqueness of the weak solution of the continuous primal formulation. Moreover, we discuss the discrete inf-sup condition of two different finite element formulations. Several numerical examples verify the accuracy and convergence of the proposed method.
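
For orientation, a minimal sketch of the resulting algebraic structure, with random stand-ins A (stiffness), B (interface coupling) and f for the finite element matrices: the multiplier block enforces the interface constraint, and the Schur complement gives a crude discrete inf-sup check.

    # Saddle-point system produced by a Lagrange multiplier coupling:
    #   [[A, B^T], [B, 0]] [u; lam] = [f; 0].
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 30, 5                                  # pressure dofs, multiplier dofs
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)                   # SPD stand-in for the stiffness matrix
    B = rng.standard_normal((m, n))               # stand-in for the trace/jump operator
    f = rng.standard_normal(n)

    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([f, np.zeros(m)]))
    u, lam = sol[:n], sol[n:]
    print(np.allclose(B @ u, 0.0))                # the multiplier enforces B u = 0
    # crude discrete inf-sup check: smallest singular value of B A^{-1} B^T stays > 0
    S = B @ np.linalg.solve(A, B.T)
    print(np.linalg.svd(S, compute_uv=False).min())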

ano.nymous@ccsd.cnrs.fr.invalid (Markus Köppel), Markus Köppel

[hal-01761591] A stabilized Lagrange multiplier finite-element method for flow in porous media with fractures

In this work we introduce a stabilized numerical method for a multi-dimensional, discrete-fracture model (DFM) for single-phase Darcy flow in fractured porous media. In the model, introduced in an earlier work, flow in the (n − 1)-dimensional fracture domain is coupled with that in the n-dimensional bulk or matrix domain by the use of Lagrange multipliers. Thus the model permits a finite element discretization in which the meshes in the fracture and matrix domains are independent, so that irregular meshing, and in particular the generation of small elements, can be avoided. In this paper we introduce into the numerical formulation, a saddle-point problem based on a primal variational formulation for flow in the matrix domain and in the fracture system, a consistent stabilizing term that penalizes discontinuities in the Lagrange multipliers. For this penalized scheme we show stability and prove convergence. With numerical experiments we analyze the performance of the method for various choices of the penalization parameter and compare with other numerical DFMs.

ano.nymous@ccsd.cnrs.fr.invalid (Markus Köppel), Markus Köppel

[hal-01800481] Diffusion Problems in Multi-layer Media with Nonlinear Interface Contact Resistance

The purpose is the finite element approximation of the heat diffusion problem in composite media, with nonlinear contact resistance at the interfaces. As already explained in [Journal of Scientific Computing, {\bf 63}, 478-501 (2015)], hybrid dual formulations are well fitted to complicated composite geometries and provide tractable approaches to express variationally the jumps of the temperature. The finite element spaces are standard. Interface contributions are added to the variational problem to account for the contact resistance. This is an important advantage for developers of computational codes. We undertake the analysis of the nonlinear heat problem for a wide range of contact resistances and we investigate its discretization by hybrid dual finite element methods. Numerical experiments are presented at the end to support the theoretical results.

ano.nymous@ccsd.cnrs.fr.invalid (F Ben Belgacem), F Ben Belgacem

[hal-01939854] A New Algorithm of Proper Generalized Decomposition for Parametric Symmetric Elliptic Problems

We introduce a new algorithm of proper generalized decomposition (PGD) for parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation, in the mean parametric norm associated with the elliptic operator, of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the proper orthogonal decomposition (POD) subspaces, except that in our case the norm is parameter-dependent. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step, and we prove that the partial sums converge to the continuous solution in the mean parametric elliptic norm. We show that the standard PGD for the considered parametric problem is strongly related to the deflation algorithm introduced in this paper. This opens the possibility of computing the PGD expansion by directly solving the optimization problems that yield the optimal subspaces.
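
A minimal sketch of the deflation idea on a toy parametric family, using plain Euclidean inner products in place of the parameter-dependent elliptic norm analyzed in the paper: greedily extract a rank-one mode by an alternating fixed point, subtract it, and repeat.

    # Greedy rank-one extraction with deflation on a parametric solution array
    # U[i, j] ~ u(x_i; mu_j); plain l2 products stand in for the paper's norm.
    import numpy as np

    def greedy_deflation(U, n_modes, n_inner=50):
        R, modes = U.copy(), []
        for _ in range(n_modes):
            w = R[:, 0].copy()                  # spatial mode, initial guess
            for _ in range(n_inner):            # alternating (fixed point) updates
                g = R.T @ w / (w @ w)           # parametric factor
                w = R @ g / (g @ g)             # spatial factor
            modes.append((w, g))
            R = R - np.outer(w, g)              # deflate the captured mode
        return modes, R

    # toy parametric family u(x; mu) = sin(pi x)/(1 + mu) + mu * x^2
    x = np.linspace(0, 1, 100)
    mus = np.linspace(0.1, 2.0, 40)
    U = np.sin(np.pi * x)[:, None] / (1 + mus)[None, :] + np.outer(x**2, mus)
    modes, residual = greedy_deflation(U, n_modes=2)
    print(np.linalg.norm(residual) / np.linalg.norm(U))   # the partial sums converge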

ano.nymous@ccsd.cnrs.fr.invalid (M. Azaïez), M. Azaïez

[hal-01906872] Convergence of a positive nonlinear control volume finite element scheme for an anisotropic seawater intrusion model with sharp interfaces

We consider a degenerate parabolic system modelling the flow of fresh and salt water in an anisotropic porous medium in the context of seawater intrusion. We propose and analyze a nonlinear Control Volume Finite Element scheme. This scheme ensures the nonnegativity of the discrete solution without any restriction on the mesh or on the anisotropy tensor. Moreover, it provides a control on the entropy. Based on these nonlinear stability results, we show that the scheme converges towards a weak solution of the problem. Numerical results are provided to illustrate the behavior of the model and of the scheme.

ano.nymous@ccsd.cnrs.fr.invalid (Ahmed Ait Hammou Oulhaj), Ahmed Ait Hammou Oulhaj

[hal-01581807] Formal proof of the Lax–Milgram theorem

Summary of the paper "A Coq formal proof of the Lax-Milgram Theorem", CPP 2017.

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

[hal-01391578] A Coq formal proof of the Lax–Milgram theorem

The Finite Element Method is a widely-used method to solve numerical problems arising for instance from physics or biology. To obtain the highest confidence in the correctness of numerical simulation programs implementing the Finite Element Method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. The Lax–Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states existence and uniqueness of the solution to the weak formulation of some boundary value problems. This article presents the full formal proof of the Lax–Milgram theorem in Coq. It requires many results from linear algebra, geometry, functional analysis, and Hilbert spaces.
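
For reference, the classical statement being formalized reads as follows: for $H$ a Hilbert space, $a$ a bounded bilinear form that is coercive ($a(v,v) \geq \alpha \|v\|_H^2$ for some $\alpha > 0$), and $f$ a continuous linear form on $H$,

$$\exists!\, u \in H, \quad \forall v \in H, \quad a(u,v) = f(v), \qquad \text{with} \quad \|u\|_H \leq \frac{1}{\alpha}\, \|f\|_{H'}.$$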

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

[hal-01279503] First-order indicators for the estimation of discrete fractures in porous media

Faults and geological barriers can drastically affect flow patterns in porous media. Such fractures can be modeled as interfaces that interact with the surrounding matrix. We propose a new technique for estimating the location and the hydrogeological properties of a small number of large fractures in a porous medium from given distributed pressure or flow data. At each iteration, the algorithm builds a short list of candidates by comparing fracture indicators. These indicators quantify, at first order, the decrease of a data misfit function, and they are cheap to compute. The best candidate is then selected by minimizing the objective function for each short-listed candidate. Optimally driven by the fit to the data, the approach has the great advantage of requiring neither remeshing nor shape derivation. The stability of the algorithm is shown on a series of numerical examples representative of typical situations.
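
A minimal sketch of the indicator-driven search loop, with a quadratic toy misfit standing in for the PDE-constrained data misfit of the paper; the candidate names, shortlist size, and parameters are illustrative.

    # Cheap first-order indicators rank candidate fractures; only the short-listed
    # candidates are refined by an actual misfit minimization.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(3)
    candidates = [f"fracture_{k}" for k in range(20)]
    # toy misfit J_c(t) = (t - t_star_c)^2 + J0_c over a candidate's transmissivity t
    t_star = rng.uniform(0.0, 5.0, 20)
    J0 = rng.uniform(1.0, 2.0, 20)
    def misfit(c, t):
        return (t - t_star[c])**2 + J0[c]

    # indicator: derivative of the misfit at t = 0, cheap for all candidates at once
    indicators = np.array([2 * (0.0 - t_star[c]) for c in range(20)])
    shortlist = np.argsort(indicators)[:3]        # most negative = steepest decrease

    # full minimization only for the short-listed candidates
    best = min(shortlist, key=lambda c: minimize_scalar(lambda t: misfit(c, t)).fun)
    print(candidates[best], minimize_scalar(lambda t: misfit(best, t)).fun)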

ano.nymous@ccsd.cnrs.fr.invalid (Hend Ben Ameur), Hend Ben Ameur

[hal-01344090] The Lax–Milgram theorem. A detailed proof to be formalized in Coq

To obtain the highest confidence in the correctness of numerical simulation programs implementing the finite element method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. The Lax-Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states existence and uniqueness of the solution to the weak formulation of some boundary value problems. The purpose of this document is to provide the formal proof community with a very detailed pen-and-paper proof of the Lax-Milgram theorem.

ano.nymous@ccsd.cnrs.fr.invalid (François Clément), François Clément

[hal-01070701] Implementation of an adaptive energy-efficient MAC protocol in OMNeT++/MiXiM

In recent years, many MAC protocols for wireless sensor networks have been proposed, and most of them are evaluated using the Matlab simulator and/or network simulators (OMNeT++, NS2, etc.). However, most of them have a static behavior, and few network simulations are available for adaptive protocols. In particular, in OMNeT++/MiXiM there are few energy-efficient MAC protocols for WSNs (B-MAC and L-MAC) and no adaptive ones. To this end, the TAD-MAC (Traffic Aware Dynamic MAC) protocol has been simulated in OMNeT++ with the MiXiM framework, and implementation details are given in this paper. The simulation results have been used to evaluate the performance of TAD-MAC through comparisons with the B-MAC and L-MAC protocols.

ano.nymous@ccsd.cnrs.fr.invalid (Van-Thiep Nguyen), Van-Thiep Nguyen

[hal-00839653] Well-conditioned boundary integral formulations for high-frequency elastic scattering problems in three dimensions

We construct and analyze a family of well-conditioned boundary integral equations for the Krylov iterative solution of three-dimensional elastic scattering problems by a bounded rigid obstacle. We develop a new potential theory using a rewriting of the Somigliana integral representation formula. From these results, we generalize to linear elasticity the well-known Brakhage-Werner and Combined Field Integral Equation formulations. We use a suitable approximation of the Dirichlet-to-Neumann (DtN) map as a regularizing operator in the proposed boundary integral equations. The construction of the approximate DtN map is inspired by the On-Surface Radiation Conditions method. We prove that the associated integral equations are uniquely solvable and possess very interesting spectral properties. Promising analytical and numerical results, expressed in terms of spherical harmonics for the elastic sphere, are provided.

ano.nymous@ccsd.cnrs.fr.invalid (Marion Darbas), Marion Darbas

[inria-00625293] Exact MLE and asymptotic properties for nonparametric semi-Markov models

This article concerns maximum-likelihood estimation for discrete-time homogeneous nonparametric semi-Markov models with finite state space. In particular, we present the exact maximum-likelihood estimator of the semi-Markov kernel which governs the evolution of the semi-Markov chain (SMC). We study its asymptotic properties in the following cases: (i) for one observed trajectory, when the length of the observation tends to infinity, and (ii) for parallel observations of independent copies of an SMC censored at a fixed time, when the number of copies tends to infinity. In both cases, we obtain strong consistency, asymptotic normality, and asymptotic efficiency for every finite-dimensional vector of this estimator. Finally, we obtain explicit forms for the covariance matrices of the asymptotic distributions.
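
A minimal sketch of this exact MLE from one trajectory of the embedded chain $(J_n)$ and the sojourn times $(X_n)$: $\hat{q}_{ij}(k) = N_{ij}(k)/N_i$, with $N_{ij}(k)$ the number of $i \to j$ transitions with sojourn $k$ and $N_i$ the number of visits to $i$; the short trajectory below is illustrative.

    # Exact MLE of a discrete-time semi-Markov kernel by counting.
    import numpy as np

    def semi_markov_kernel_mle(J, X, n_states, k_max):
        N = np.zeros((n_states, n_states, k_max + 1))
        for n in range(len(J) - 1):
            N[J[n], J[n + 1], X[n + 1]] += 1.0    # sojourn X_{n+1} spent in state J_n
        visits = N.sum(axis=(1, 2), keepdims=True)
        return N / np.where(visits > 0, visits, 1.0)

    J = [0, 1, 0, 2, 1, 0, 1]                      # embedded chain (illustrative)
    X = [0, 3, 1, 2, 2, 4, 1]                      # sojourn times
    q = semi_markov_kernel_mle(J, X, n_states=3, k_max=5)
    print(q[0].sum())                              # rows sum to 1 over (j, k) for visited states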

ano.nymous@ccsd.cnrs.fr.invalid (Samis Trevezas), Samis Trevezas

[hal-00731856] On the necessity of Nitsche term

The aim of this article is to explore the possibility of using a family of fixed finite element shape functions to solve a Dirichlet boundary value problem with an alternative variational formulation. The domain is embedded in a bounding box, and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or for moving interfaces. This is not a new idea, but we analyze here a special approach. The main difficulty of the approach is that the associated quadratic form is not coercive, and an inf-sup condition has to be checked. In dimension one, we prove that this formulation is well posed and we provide error estimates. Nevertheless, our proof, which relies on explicit computations, is limited to that case, and we give numerical evidence in dimension two that the formulation does not provide a reliable method. We first add a regularization through a Nitsche term and observe that some instabilities still remain. We then introduce and justify a geometrical regularization. A reliable method is obtained using both regularizations.

ano.nymous@ccsd.cnrs.fr.invalid (Gaël Dupire), Gaël Dupire

[hal-00731528] On the necessity of Nitsche term. Part II : An alternative approach

The aim of this article is to explore the possibility of using a family of fixed finite element shape functions that does not match the domain to solve a boundary value problem with Dirichlet boundary condition. The domain is embedded in a bounding box, and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or for moving interfaces. The Nitsche method has been intensively applied. However, the Nitsche term is weighted by the mesh size h, and the method is therefore a purely discrete point of view, with no interpretation in terms of a continuous variational approach associated with a boundary value problem. In this paper, we introduce an alternative to the Nitsche method which is associated with a continuous bilinear form. This extension has strong restrictions: it needs more regularity on the data than the usual method. We prove the well-posedness of our formulation and error estimates. We provide numerical comparisons with the Nitsche method.
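
For orientation, the classical symmetric Nitsche formulation of the model problem $-\Delta u = f$ in $\Omega$, $u = g$ on $\partial\Omega$, makes the weighting by the mesh size $h$ explicit (textbook form, with penalty parameter $\gamma > 0$):

$$a_h(u,v) = \int_\Omega \nabla u \cdot \nabla v - \int_{\partial\Omega} (\partial_n u)\, v - \int_{\partial\Omega} (\partial_n v)\, u + \frac{\gamma}{h} \int_{\partial\Omega} u\, v, \qquad \ell_h(v) = \int_\Omega f\, v - \int_{\partial\Omega} (\partial_n v)\, g + \frac{\gamma}{h} \int_{\partial\Omega} g\, v.$$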

ano.nymous@ccsd.cnrs.fr.invalid (Jean-Paul Boufflet), Jean-Paul Boufflet

[inria-00576524] Maximum likelihood estimation for general hidden semi-Markov processes with backward recurrence time dependence

This article concerns the asymptotic properties of the maximum likelihood estimator (MLE) for the general hidden semi-Markov model (HSMM) with backward recurrence time dependence. By transforming the general HSMM into a general hidden Markov model, we prove that, under some regularity conditions, the MLE is strongly consistent and asymptotically normal. We also provide useful expressions for the asymptotic covariance matrices, involving the MLE of the conditional sojourn times and the embedded Markov chain of the hidden semi-Markov chain.

ano.nymous@ccsd.cnrs.fr.invalid (Samis Trevezas), Samis Trevezas

[inria-00576514] An EM and a stochastic version of the EM algorithm for nonparametric Hidden semi-Markov models

Hidden semi-Markov models (HSMMs) have been introduced to overcome the constraint of a geometric sojourn time distribution for the different hidden states in classical hidden Markov models. Several variations of HSMMs have been proposed that model the sojourn times by a parametric or a nonparametric family of distributions. In this article, we concentrate on the nonparametric case, where the duration distributions are attached to transitions, and not to states as in most of the published papers on HSMMs. It is worth noticing that here we treat the underlying hidden semi-Markov chain in its general probabilistic structure. In that case, Barbu and Limnios (2008) proposed an Expectation-Maximization (EM) algorithm in order to estimate the semi-Markov kernel and the emission probabilities that characterize the dynamics of the model. In this paper, we consider an improved version of Barbu and Limnios' EM algorithm which is faster than the original one. Moreover, we propose a stochastic version of the EM algorithm that achieves estimates comparable to those of the EM algorithm in less execution time. Some numerical examples are provided to illustrate the efficient performance of the proposed algorithms.

ano.nymous@ccsd.cnrs.fr.invalid (Sonia Malefaki), Sonia Malefaki

[inria-00468804] Variance Estimation in the Central Limit Theorem for Markov chains

This article concerns variance estimation in the central limit theorem for finite recurrent Markov chains. The associated variance is calculated in terms of the transition matrix of the Markov chain. We prove the equivalence of different matrix forms representing this variance. The maximum likelihood estimator for this variance is constructed, and it is proved to be strongly consistent and asymptotically normal. The main part of our analysis consists in presenting closed matrix forms for this variance. Additionally, we prove the asymptotic equivalence between the empirical estimator and the MLE of the stationary distribution.
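
A minimal sketch of the closed form behind such matrix expressions, assuming the textbook variance $\sigma^2(f) = \langle \tilde f, (2Z - I - \Pi)\tilde f \rangle_\pi$ with fundamental matrix $Z = (I - P + \Pi)^{-1}$ and $\Pi = \mathbf{1}\pi^{\top}$; the paper's concern is the MLE of this quantity, which the sketch does not address.

    # CLT variance for an ergodic finite Markov chain via the fundamental matrix.
    import numpy as np

    def clt_variance(P, f):
        n = P.shape[0]
        # stationary distribution: left eigenvector of P for eigenvalue 1
        w, V = np.linalg.eig(P.T)
        pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
        pi /= pi.sum()
        Pi = np.outer(np.ones(n), pi)
        Z = np.linalg.inv(np.eye(n) - P + Pi)     # fundamental matrix
        fc = f - pi @ f                           # center f
        return pi @ (fc * ((2 * Z - np.eye(n) - Pi) @ fc))

    P = np.array([[0.9, 0.1], [0.2, 0.8]])
    f = np.array([0.0, 1.0])
    print(clt_variance(P, f))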

ano.nymous@ccsd.cnrs.fr.invalid (Samis Trevezas), Samis Trevezas