Publications on HAL

In the present paper, we are mainly concerned with kernel-type estimators for the moment generating function. More precisely, we establish a central limit theorem, together with a characterization of the bias and the variance, for nonparametric recursive kernel-type estimators of the moment generating function under mild conditions. Finally, we investigate the performance of the methodology for small samples through a short simulation study.
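As a toy illustration of the recursive-averaging idea (not the kernel-type estimator analyzed in the paper, which involves smoothing and recursive bandwidths), the plain empirical moment generating function admits a one-pass recursive update; a minimal Python sketch:

```python
import numpy as np

def empirical_mgf(sample, t):
    """Plain empirical estimator M_n(t) = (1/n) * sum_i exp(t * X_i)."""
    return np.mean(np.exp(t * np.asarray(sample, dtype=float)))

def recursive_mgf(sample, t):
    """Same quantity computed recursively, one observation at a time:
    M_n = (1 - 1/n) * M_{n-1} + (1/n) * exp(t * X_n)."""
    m = 0.0
    for n, x in enumerate(sample, start=1):
        m = (1.0 - 1.0 / n) * m + np.exp(t * x) / n
    return m
```

Both functions return the same value; the recursive form only illustrates how the estimate can be updated online as new observations arrive.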

Salim Bouzebda

In this dissertation, we are interested in nonparametric regression estimation models. More precisely, we are concerned with a class of conditional U-statistics estimators. Conditional U-statistics can be viewed as a generalization of the Nadaraya-Watson estimator. The latter uses a smoothing kernel function to “average” response variable values within a predictor range. Stute generalized the Nadaraya-Watson estimator, first by replacing the simple weighted averages in the numerator and denominator with U-statistics, and then by predicting the conditional expectation of the U-statistic kernel function given a collection of predictor random variables. This generalization is prosperous and influential in mathematical statistics due to its outstanding scientific utility and fascinating theoretical complexity. However, as with any other kernel estimation technique, the question of choosing a suitable bandwidth to balance the bias-variance trade-off remains insufficiently addressed in the literature on conditional U-statistics when the explanatory variables are functional. In the first part, we introduce the k-nearest-neighbors (k-NN) estimator of conditional U-statistics depending on an infinite-dimensional covariate. A sharp uniform-in-the-number-of-neighbors (UINN) limit law for the proposed estimator is presented. Such a result allows the number of neighbors to vary within a complete range over which the estimator is consistent; consequently, it represents an interesting practical guideline for selecting the optimal number of neighbors in nonparametric functional data analysis. In addition, uniform consistency is established over ϕ ∈ F for a suitably restricted class F, in both the bounded and unbounded cases, satisfying some moment conditions and some mild conditions on the model. As a by-product of our proofs, we state consistency results for the k-NN conditional U-statistics under random censoring that are uniform in the number of neighbors.
The second part of the thesis deals with a general nonparametric statistical curve estimation setting, including the Stute estimator as a particular case. The class of “delta sequence estimators” is defined and treated here; this class also includes the orthogonal series and histogram methods. We partially extend these results to the setting of functional data. The major part of the thesis is motivated by machine learning problems, including, among many others, discrimination problems, metric learning, and multipartite ranking.
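To fix ideas, Stute's conditional U-statistic with a degree-two kernel φ reduces to the Nadaraya-Watson estimator when φ depends on a single argument. A minimal finite-dimensional Python sketch (the Gaussian smoothing kernel and fixed bandwidth h are illustrative choices; the thesis works with functional covariates and k-NN bandwidths):

```python
import numpy as np

def gauss_kernel(u):
    """Gaussian smoothing kernel (an illustrative choice)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def nadaraya_watson(x, X, Y, h):
    """Classical Nadaraya-Watson estimator of E[Y | X = x]."""
    w = gauss_kernel((x - X) / h)
    return np.sum(w * Y) / np.sum(w)

def conditional_u_stat(x1, x2, X, Y, h, phi):
    """Stute-type estimator of E[phi(Y_i, Y_j) | X_i = x1, X_j = x2],
    a U-statistic-weighted average over distinct index pairs."""
    n = len(X)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            w = gauss_kernel((x1 - X[i]) / h) * gauss_kernel((x2 - X[j]) / h)
            num += w * phi(Y[i], Y[j])
            den += w
    return num / den
```

With a constant response the estimator returns that constant, which is a quick sanity check on the weighting.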

Amel Nezzal

A cross-diffusion system with Lotka--Volterra reaction terms in a bounded domain with no-flux boundary conditions is analyzed. The system is a nonlocal regularization of a generalized Busenberg--Travis model, which describes segregating population species with local averaging. The partial velocities are the solutions of an elliptic regularization of Darcy's law, which can be interpreted as a Brinkman's law. The following results are proved: the existence of global weak solutions; localization limit; boundedness and uniqueness of weak solutions (in one space dimension); exponential decay of the solutions. Moreover, the weak--strong uniqueness property for the limiting system is shown.

Ansgar Jüngel

We consider a repairable system modeled by a semi-Markov process (SMP), including a geometric renewal process for system degradation upon repair, and replacement strategies for non-repairable failure or upon N repairs. Pérez-Ocón and Torres-Castro first studied this system (Pérez-Ocón and Torres-Castro in Appl Stoch Model Bus Ind 18(2):157-170, 2002) and proposed an availability calculation using the Laplace transform. In our work, we consider an extended state space with up and down times treated separately. This allows us to leverage the standard theory for SMPs to obtain all reliability-related measures, such as reliability, availability (point and steady-state), mean times, and the rate of occurrence of failures of the system with a general initial law. We proceed with a convolution algebra, which allows us to obtain final closed-form formulas for the above measures. Finally, numerical examples are given to illustrate the methodology.

Jingqi Zhang

This paper focuses on the low-dimensional representation of multivariate functions. We study a recursive POD representation, based upon the use of the power iteration algorithm to recursively expand the modes retained in the previous step. We obtain general error estimates for the truncated expansion and prove that the recursive POD representation provides a quasi-optimal approximation in the $L^2$ norm. We also prove an exponential rate of convergence when applied to the solution of the reaction-diffusion partial differential equation. Relevant numerical experiments show that the recursive POD is computationally more accurate than the Proper Generalized Decomposition for multivariate functions. We also recover the theoretical exponential convergence rate for the solution of the reaction-diffusion equation.
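A minimal sketch of the principle in the discrete case, assuming snapshots arranged in a matrix: the dominant mode is extracted by power iteration and the expansion is grown recursively by deflating the residual (an illustration only, not the paper's algorithm for multivariate functions):

```python
import numpy as np

def dominant_mode(A, iters=200):
    """Power iteration on A A^T to get the leading singular triplet of A."""
    u = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(iters):
        u = A @ (A.T @ u)      # one step of the power iteration
        u /= np.linalg.norm(u)
    s = np.linalg.norm(A.T @ u)  # leading singular value
    v = (A.T @ u) / s            # corresponding right singular vector
    return u, s, v

def recursive_pod(A, rank):
    """Recursively extract `rank` modes, deflating the residual each time."""
    modes, R = [], A.copy()
    for _ in range(rank):
        u, s, v = dominant_mode(R)
        modes.append((u, s, v))
        R = R - s * np.outer(u, v)  # remove the captured rank-1 mode
    return modes, R
```

On a matrix of exact rank two, the residual after extracting two modes vanishes up to round-off, which reflects the quasi-optimality of the truncated expansion in this simple setting.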

M. Azaïez

The problem of estimating the spatio-functional expectile regression for a given spatial mixing structure $(X_i, Y_i) \in \mathcal{F} \times \mathbb{R}$, where $i \in \mathbb{Z}^N$, $N \geq 1$, and $\mathcal{F}$ is a metric space, is investigated. We propose an M-estimation procedure to construct the Spatial Local Linear (SLL) estimator of the expectile regression function. The main contribution of this study is the establishment of the asymptotic properties of the SLL expectile regression estimator. Precisely, we establish almost-complete convergence with rate. This result is proven under some mild conditions on the model in the mixing framework. The implementation of the SLL estimator is evaluated through an empirical investigation. An application to COVID-19 data is performed, allowing this work to highlight the substantial superiority of the SLL-expectile over the SLL-quantile in risk exploration.
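For intuition, the τ-expectile of a scalar sample is the minimizer of an asymmetric least-squares criterion and can be computed by a simple fixed-point iteration; a minimal Python sketch (the paper's SLL estimator localizes this idea with functional covariates, which is not reproduced here):

```python
import numpy as np

def expectile(y, tau, iters=100):
    """Scalar tau-expectile: minimizer of
    sum_i |tau - 1{y_i < m}| * (y_i - m)^2,
    computed as a fixed point of asymmetrically weighted means."""
    y = np.asarray(y, dtype=float)
    m = y.mean()  # start from the 0.5-expectile (the mean)
    for _ in range(iters):
        w = np.where(y < m, 1.0 - tau, tau)  # asymmetric weights
        m = np.sum(w * y) / np.sum(w)
    return m
```

For τ = 0.5 the weights are symmetric and the expectile coincides with the sample mean; larger τ shifts it toward the upper tail, which is what makes expectiles useful in risk exploration.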

Ali Laksaci

Establishing convergence rates for distribution-free functional data analysis is challenging and requires advanced tools from functional analysis. This paper aims to bring several contributions to the existing functional data analysis literature. First, we prove that Kolmogorov entropy is a fundamental tool in characterizing the convergence rate of local linear estimation. Precisely, we use this tool to derive the uniform convergence rate of the local linear estimator of the conditional cumulative distribution function and of the local linear estimator of the conditional quantile function. Second, a central limit theorem for the proposed estimators is established. These results are proved under general assumptions that allow the incomplete functional time series case to be covered. Specifically, we model the correlation using the ergodicity assumption and assume that the response variable is subject to missingness at random. Finally, we conduct Monte Carlo simulations to assess the finite-sample performance of the proposed estimators.

Ouahiba Litimein

In this paper, we design a posteriori estimates for finite element approximations of nonlinear elliptic problems satisfying strong-monotonicity and Lipschitz-continuity properties. These estimates include, and build on, any iterative linearization method that satisfies a few clearly identified assumptions; this encompasses the Picard, Newton, and Zarantonello linearizations. The estimates give a guaranteed upper bound on an augmented energy difference (reliability with constant one), as well as a lower bound (efficiency up to a generic constant). We prove that for the Zarantonello linearization, this generic constant only depends on the space dimension, the mesh shape regularity, and possibly the approximation polynomial degree in four or more space dimensions, making the estimates robust with respect to the strength of the nonlinearity. For the other linearizations, there is only a computable dependence on the local variation of the linearization operators. We also derive similar estimates for the usual energy difference that depend locally on the nonlinearity and improve the established bound. Numerical experiments illustrate and validate the theoretical results, for both smooth and singular solutions.

André Harnist

[...]

Hanna Bacave

We propose a way to account for inspection errors in a particular framework. We consider a situation where the lifetime of a system depends essentially on a particular part. Deterioration of this part is regarded as an unacceptable state for the safety of the system, and a major renewal is then deemed necessary. Thus, the statistical analysis of the deterioration time distribution of this part is of primary interest for the preventive maintenance of the system. In this context, we faced the following problem: in the early life of the system, unwarranted renewals of the part are decided upon, caused by overly cautious behaviour. Such unnecessary renewals make the statistical analysis of deterioration time data difficult and can induce an underestimation of the mean life of the part. To overcome this difficulty, we propose to regard the problem as an incomplete data model and present its estimation under the maximum likelihood methodology. Numerical experiments show that this approach eliminates the pessimistic bias in the estimation of the mean life of the part. We also present a Bayesian analysis of the problem, which can be useful in a small-sample setting.

Gilles Celeux

The main goal of this research is to develop a data-driven reduced-order model (ROM) strategy from high-fidelity (HF) simulation data of a full-order model (FOM), in order to predict at lower computational cost the time evolution of solutions of fluid-structure interaction (FSI) problems. For some FSI applications, like tire/water interaction, the FOM solid model (often chosen as quasi-static) can take far more computational time than the HF fluid one. In this context, for the sake of performance, one could derive a reduced-order model for the structure only and couple a partitioned HF fluid solver with a ROM solid one. In this paper, we present a data-driven partitioned ROM on a study case involving a simplified 1D-1D FSI problem representing an axisymmetric elastic model of an arterial vessel coupled with an incompressible fluid flow. We derive a purely data-driven solid ROM for FOM fluid-ROM structure partitioned coupling and present early results.

Azzeddine Tiba

Hidden Markov models (HMMs) are used in many different fields to study the dynamics of a process that cannot be directly observed. However, in some cases, the dependency structure of an HMM is too simple to describe the dynamics of the hidden process. In particular, in some applications in finance or in ecology, the transition probabilities of the hidden Markov chain can also depend on the current observation. In this work, we are interested in extending the classical HMM to this situation. We define a new model, referred to as the Observation-Driven Hidden Markov Model (OD-HMM). We present a complete study of the general non-parametric OD-HMM with discrete and finite state spaces (for both hidden and observed variables). We study its identifiability, then the consistency of the maximum likelihood estimators, and we derive the associated forward-backward equations for the E-step of the EM algorithm. The quality of the procedure is tested on simulated data sets. Finally, we illustrate the use of the model in an application to the dynamics of annual plants. This work sets the theoretical and practical foundations for a new framework that could be further extended, on the one hand to the parametric context to simplify estimation, and on the other hand to hidden semi-Markov models for more realism.
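The likelihood of such a model can still be computed by a scaled forward recursion; a minimal Python sketch, under one possible convention (assumed here for illustration) in which the transition matrix applied at time t is indexed by the previous observation:

```python
import numpy as np

def od_hmm_loglik(obs, pi, A, B):
    """Scaled forward recursion for an observation-driven HMM.

    pi : initial hidden-state law, shape (K,)
    B  : emission probabilities B[k, y], shape (K, M)
    A  : transition tensor A[y][j, k], the hidden-chain transition
         matrix used when the previous observation is y (one possible
         OD-HMM convention; illustrative only).
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c  # normalize to avoid underflow
    for t in range(1, len(obs)):
        alpha = (alpha @ A[obs[t - 1]]) * B[:, obs[t]]
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik
```

When all the matrices A[y] coincide, the recursion reduces to the classical HMM forward algorithm, which is a convenient consistency check.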

Hanna Bacave

During a severe accident in a nuclear reactor, extreme temperatures may be reached (T > 2500 K). In these conditions, the nuclear fuel may react with the Zircaloy cladding and then with the steel vessel, forming a mixture of solid and liquid phases called in-vessel corium. In the worst scenario, this mixture may penetrate the vessel and reach the concrete underneath the reactor. In order to develop the TAF-ID thermodynamic database (www.oecd-nea.org/science/taf-id) on nuclear fuels and to predict the high-temperature behaviour of the corium + concrete system, new high-temperature thermodynamic data are needed. The LM2T at the CEA Saclay centre has started an experimental campaign of phase equilibria measurements at high temperature (up to 2600 K) on corium sub-systems of interest. In particular, a heat treatment at 2500 K has been performed on two prototypic ex-vessel corium samples (within the U-Zr-Al-Ca-Si-O system) with different amounts of CaO and SiO$_2$. The results show that, depending on the SiO$_2$ content, the final configuration of the samples can be significantly different. The sample with the higher CaO content showed a dendritic structure representative of a single quenched liquid phase, whilst the sample richer in SiO$_2$ exhibited a microstructure suggesting the presence of a liquid miscibility gap. Furthermore, a new laser heating setup has been conceived. This technique allows measurements at very high temperature (T > 3000 K) while limiting interactions between the sample and the surroundings.

Andrea Quaini

Lebesgue integration is a well-known mathematical tool, used for instance in probability theory, real analysis, and numerical mathematics. Thus, its formalization in a proof assistant is to be designed to fit different goals and projects. Once the Lebesgue integral is formally defined and the first lemmas are proved, the question of the convenience of the formalization naturally arises. To check it, a useful extension is Tonelli's theorem, stating that the (double) integral of a nonnegative measurable function of two variables can be computed by iterated integrals, and allowing one to switch the order of integration. This article describes the formal definition and proof in Coq of product sigma-algebras, product measures and their uniqueness, and the construction of iterated integrals, up to Tonelli's theorem. We also advertise the Lebesgue induction principle provided by an inductive type for nonnegative measurable functions.
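In standard notation, the theorem formalized here states that for a nonnegative measurable function $f$ on the product of two ($\sigma$-finite) measure spaces $(X,\mu)$ and $(Y,\nu)$:

```latex
\int_{X \times Y} f \,\mathrm{d}(\mu \otimes \nu)
  = \int_X \left( \int_Y f(x,y) \,\mathrm{d}\nu(y) \right) \mathrm{d}\mu(x)
  = \int_Y \left( \int_X f(x,y) \,\mathrm{d}\mu(x) \right) \mathrm{d}\nu(y).
```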

Sylvie Boldo

To obtain the highest confidence in the correctness of numerical simulation programs implementing the finite element method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. Sobolev spaces are the mathematical framework in which most weak formulations of partial differential equations are stated and where solutions are sought. These functional spaces are built on integration and measure theory. Hence, this chapter of functional analysis is a mandatory theoretical cornerstone for the definition of the finite element method. The purpose of this document is to provide the formal proof community with very detailed pen-and-paper proofs of the main results from integration and measure theory.

François Clément


Compressible multi-material flows are omnipresent in scientific and industrial applications: from supernova explosions in space and high-speed flows in jet and rocket propulsion, to underwater explosions and vapor explosions in post-accident situations in nuclear reactors, their applications cover almost all aspects of classical fluid physics. In numerical simulations of these flows, interfaces play a crucial role. A poor numerical resolution of the interfaces could make it very difficult to account for physics such as material separation, the location of shocks and contact discontinuities, and the transfer of mass, momentum, and heat between different materials/phases. Owing to such importance, sharp interface capturing remains a very active area of research in computational physics. To address this problem, in this paper we focus on the interface-capturing (IC) strategy and make use of a newly developed diffuse interface method (DIM) called Multidimensional Limiting Process-Upper Bound (MLP-UB). Our analysis shows that this method is easy to implement, easily extendable to multiple space dimensions, and able to deal with any number of material interfaces, and that it produces sharp, shape-preserving interfaces, along with their accurate interaction with shocks and contact discontinuities. Numerical experiments show very good results even over rather coarse meshes.

Shambhavi Nandan

In this paper, we consider the problem of identifying a single moving point source for a three-dimensional wave equation from boundary measurements. Precisely, we show that the knowledge of the field generated by the source at six different points of the boundary over a finite time interval is sufficient to determine uniquely its trajectory. We also derive a Lipschitz stability estimate for the inversion.

Hanin Al Jebawy

For over 60 years, research reactors (RR, or RTR for research testing reactors) have been used as neutron sources for research, radioisotope production ($^{99}$Mo/$^{99m}$Tc), nuclear medicine, materials characterization, etc. Currently, over 240 of these reactors are in operation in 56 countries. They are simpler than power reactors and operate at lower temperature (cooled to below 100 °C). The fuel assemblies are typically plates or cylinders of uranium-aluminium alloy (U-Al) coated with pure aluminium. These fuels can be processed in the AREVA La Hague plant after batch dissolution in concentrated nitric acid and mixing with UOX fuel streams. The aim of this study is to accurately measure the solubility of molybdenum in nitric acid solutions containing high concentrations of aluminium. The higher the molybdenum solubility, the more flexible the reprocessing operations, especially when the spent fuels contain high amounts of molybdenum. To be most representative of the dissolution process, uranium-molybdenum alloy and molybdenum metal powder were dissolved in solutions of aluminium nitrate at the nominal dissolution temperature. The experiments showed complete dissolution of the metallic elements after 30 minutes of stirring, even when molybdenum metal was added in excess. After an induction period, a slow precipitation of molybdic acid occurs over about 15 hours. The data obtained show that the molybdenum solubility decreases with increasing aluminium concentration, following an exponential law around 40 g/L of aluminium with a high coefficient of determination. Molybdenum solubility is not impacted by the presence of gadolinium or by an increasing concentration of uranium.

Xavier Hérès

In this work, we design and analyze a Hybrid High-Order (HHO) discretization method for incompressible flows of non-Newtonian fluids with power-like convective behaviour. We work under general assumptions on the viscosity and convection laws, that are associated with possibly different Sobolev exponents r ∈ (1, ∞) and s ∈ (1, ∞). After providing a novel weak formulation of the continuous problem, we study its well-posedness highlighting how a subtle interplay between the exponents r and s determines the existence and uniqueness of a solution. We next design an HHO scheme based on this weak formulation and perform a comprehensive stability and convergence analysis, including convergence for general data and error estimates for shear-thinning fluids and small data. The HHO scheme is validated on a complete panel of model problems.

Daniel Castanon Quiroz

Integration, just as much as differentiation, is a fundamental calculus tool that is widely used in many scientific domains. Formalizing the mathematical concept of integration and the associated results in a formal proof assistant helps in providing the highest confidence on the correctness of numerical programs involving the use of integration, directly or indirectly. By its capability to extend the (Riemann) integral to a wide class of irregular functions, and to functions defined on more general spaces than the real line, the Lebesgue integral is perfectly suited for use in mathematical fields such as probability theory, numerical mathematics, and real analysis. In this article, we present the Coq formalization of $\sigma$-algebras, measures, simple functions, and integration of nonnegative measurable functions, up to the full formal proofs of the Beppo Levi (monotone convergence) theorem and Fatou's lemma. More than a plain formalization of the known literature, we present several design choices made to balance the harmony between mathematical readability and usability of Coq theorems. These results are a first milestone toward the formalization of $L^p$~spaces such as Banach spaces.
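In standard notation, the Beppo Levi (monotone convergence) theorem formalized here states that if $(f_n)$ is a nondecreasing sequence of nonnegative measurable functions converging pointwise to $f$, then

```latex
\lim_{n \to \infty} \int f_n \,\mathrm{d}\mu
  = \int \lim_{n \to \infty} f_n \,\mathrm{d}\mu
  = \int f \,\mathrm{d}\mu .
```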

Sylvie Boldo


Recent work in the Boundary Element Method (BEM) community has been devoted to the derivation of fast techniques to perform the matrix-vector product needed in iterative solvers. Fast BEMs are now very mature. However, it has been shown that the number of iterations can significantly hinder the overall efficiency of fast BEMs, and the derivation of robust preconditioners is now inevitable to increase the size of the problems that can be considered. Analytical preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of discretization. The main contribution of this paper is to propose new analytical preconditioners to treat Neumann exterior scattering problems in 2D and 3D elasticity. These preconditioners are local approximations of the adjoint Neumann-to-Dirichlet map. We propose three approximations of different orders. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). An analytical spectral study confirms the expected behavior of the preconditioners, i.e., a better eigenvalue clustering, especially in the elliptic part, contrary to the standard first-kind CFIE. We provide various 2D numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be independent of the density of discretization points per wavelength, which is not the case for the standard CFIE. In addition, it is less sensitive to the frequency.

Stéphanie Chaillat

An innovative data-driven model-order reduction technique is proposed to model dilute micrometric or nanometric suspensions of microcapsules, i.e., microdrops protected by a thin hyperelastic membrane, which are used in healthcare as innovative drug vehicles. We consider a microcapsule flowing in a similar-size microfluidic channel and systematically vary the governing parameters, namely the capillary number, the ratio of viscous to elastic forces, and the confinement ratio, the ratio of capsule to tube size. The resulting space-time-parameter problem is solved using two global POD reduced bases, determined in the offline stage for the space and parameter variables, respectively. A suitable low-order spatial reduced basis is then computed in the online stage for any new parameter instance. The time evolution of the capsule dynamics is captured by identifying the nonlinear low-order manifold of the reduced variables; for that, a point cloud of reduced data is computed and a diffuse approximation method is used. Numerical comparisons between the full-order fluid-structure interaction model and the reduced-order one confirm both the accuracy and the stability of the reduction technique over the whole admissible parameter domain. We believe that such an approach can be applied to a broad range of coupled problems, especially those involving quasi-static models of structural mechanics.

Toufik Boubehziz

Concise formulae are given for the cumulant matrices of a real-valued (zero-mean) random vector up to order 6. In addition to the usual matrix operations, they involve only the Kronecker product, the vec operator, and the commutation matrix. Orders 5 and 6 are provided here for the first time; the same method as in the paper can be applied to compute higher orders. Immediate consequences of these formulae are 1) upper bounds on the rank of the cumulant matrices and 2) the expression of the sixth-order moment matrix of a Gaussian vector. Due to their conciseness, the proposed formulae also have a computational advantage compared to the repeated use of the Leonov-Shiryaev formula.
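The closed formulae themselves are in the paper; as a small illustration of the objects involved, the low-order sample cumulant matrices of a centered vector can be formed with the Kronecker product (orders 2 and 3 shown, where cumulants and central moments coincide):

```python
import numpy as np

def cumulant_matrices(X):
    """Sample cumulant matrices of a random vector up to order 3.

    X : data matrix of shape (n, d), one observation per row.
    Returns C2 = E[x x^T] (covariance) and
            C3 = E[(x kron x) x^T] (third-order cumulant matrix),
    both computed on the centered data; at orders 2 and 3 the
    cumulants equal the central moments.
    """
    Xc = X - X.mean(axis=0)          # center the data
    n = Xc.shape[0]
    C2 = Xc.T @ Xc / n
    C3 = sum(np.outer(np.kron(x, x), x) for x in Xc) / n
    return C2, C3
```

In dimension $d$, C3 has shape $(d^2, d)$; the paper's formulae bound the ranks of such matrices and extend the construction to orders 4-6 via the commutation matrix.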

Hanany Ould-Baba

For a system, a priori identifiability is a theoretical property that depends only on the model and guarantees that its parameters can be uniquely determined from observations. This paper provides a survey of the various and numerous definitions of a priori identifiability given in the literature, for both deterministic continuous- and discrete-time models. A classification is made by distinguishing analytical and algebraic definitions, as well as local and global ones. Moreover, this paper provides an overview of the distinct methods to test parameter identifiability. They are classified into the so-called output equality approaches, local state isomorphism approaches, and differential algebra approaches. A few examples are detailed to illustrate the methods and complete this survey.
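A classical toy example of the notion (not taken from the survey, added here for illustration): consider the model

```latex
\dot{x}(t) = -\theta_1 \, x(t), \qquad y(t) = \theta_2 \, x(t), \qquad x(0) = x_0 ,
```

whose output is $y(t) = \theta_2 x_0 e^{-\theta_1 t}$. The parameter $\theta_1$ is globally identifiable from the decay rate of the output, while $\theta_2$ and $x_0$ appear only through their product and are therefore not individually identifiable unless one of them is known.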

Floriane Anstett-Collin

In the context of the decommissioning of the Fukushima Daiichi reactors, several projects have been funded by the Japanese government to prepare for the corium retrieval operations. Within this framework, a joint study conducted by ONET Technologies and the laboratories of CEA and IRSN demonstrated the feasibility of using the laser cutting technique and estimated the aerosol source term thus generated. Two corium simulants, synthesized and characterized by CEA-Cadarache, were subjected to laser cutting tests in air and under water in the DELIA facility at CEA Saclay, and the emitted aerosols were characterized by IRSN. The characterization of the emitted particles in terms of concentration and size distribution provided information for predicting, in particular, particle transport and deposition, but knowledge of the chemical composition by size class is necessary for better management of occupational and environmental risks. This article presents the results concerning the characterization of the chemical composition of the aerosol from a corium simulant, under laser cutting conditions in air, together with the associated size distribution.

Emmanuel Porcheron

We consider in this paper a model parabolic variational inequality. This problem is discretized with conforming Lagrange finite elements of order $p ≥ 1$ in space and with the backward Euler scheme in time. The nonlinearity coming from the complementarity constraints is treated with any semismooth Newton algorithm and we take into account in our analysis an arbitrary iterative algebraic solver. In the case $p = 1$, when the system of nonlinear algebraic equations is solved exactly, we derive an a posteriori error estimate on both the energy error norm and a norm approximating the time derivative error. When $p ≥ 1$, we provide a fully computable and guaranteed a posteriori estimate in the energy error norm which is valid at each step of the linearization and algebraic solvers. Our estimate, based on equilibrated flux reconstructions, also distinguishes the discretization, linearization, and algebraic error components. We build an adaptive inexact semismooth Newton algorithm based on stopping the iterations of both solvers when the estimators of the corresponding error components do not affect significantly the overall estimate. Numerical experiments are performed with the semismooth Newton-min algorithm and the semismooth Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative algebraic solver to illustrate the strengths of our approach.
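As a small finite-dimensional illustration of the semismooth Newton-min idea (on a linear complementarity problem, with a direct linear solve in place of the inexact GMRES iterations considered in the paper):

```python
import numpy as np

def newton_min(A, f, u0, iters=50, tol=1e-12):
    """Semismooth Newton-min for the complementarity problem
    0 <= u,  A u - f >= 0,  u * (A u - f) = 0,
    reformulated as C(u) = min(u, A u - f) = 0 componentwise."""
    u = np.asarray(u0, dtype=float).copy()
    n = len(f)
    for _ in range(iters):
        r = A @ u - f
        act = u <= r                       # active set: enforce u_i = 0
        C = np.where(act, u, r)            # semismooth residual min(u, r)
        if np.linalg.norm(C) < tol:
            break
        # Generalized Jacobian: identity rows on the active set,
        # rows of A elsewhere.
        J = np.where(act[:, None], np.eye(n), A)
        u = u - np.linalg.solve(J, C)
    return u
```

For an M-matrix A the iteration typically identifies the correct active set in a few steps and then converges in one exact Newton step, which mirrors the fast linearization convergence exploited by the adaptive stopping criteria.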

Jad Dabaghi

We propose an adaptive inexact version of a class of semismooth Newton methods that is aware of the continuous (variational) level. As a model problem, we study the system of variational inequalities describing the contact between two membranes. This problem is discretized with conforming finite elements of order $p \geq 1$, yielding a nonlinear algebraic system of variational inequalities. We consider any iterative semismooth linearization algorithm, such as Newton-min or Newton-Fischer-Burmeister, which we complement by any iterative linear algebraic solver. We then derive an a posteriori estimate of the error between the exact solution at the continuous level and the approximate solution, valid at any step of the linearization and algebraic resolutions. Our estimate is based on flux reconstructions in discrete subspaces of $\mathbf{H}(\mathrm{div}, \Omega)$ and on potential reconstructions in discrete subspaces of $H^1(\Omega)$ satisfying the constraints. It distinguishes the discretization, linearization, and algebraic components of the error. Consequently, we can formulate adaptive stopping criteria for both solvers, giving rise to an adaptive version of the considered inexact semismooth Newton algorithm. Under these criteria, the efficiency of the leading estimates is also established, meaning that we prove them equivalent to the error up to a generic constant. Numerical experiments for the Newton-min algorithm in combination with the GMRES algebraic solver confirm the efficiency of the developed adaptive method.
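The adaptive stopping criteria can be sketched as two nested loops: the inner (algebraic) solver stops once its error estimate is dominated by the linearization estimate, and the outer (linearization) loop stops once its estimate is dominated by the discretization estimate. A toy skeleton with plain residual norms standing in for the flux-reconstruction estimators and a Richardson iteration standing in for GMRES (all names and thresholds illustrative):

```python
import numpy as np

def adaptive_inexact_newton(F, J, x0, eta_disc, gamma_lin=0.1, gamma_alg=0.1, maxit=50):
    """Skeleton of adaptive stopping criteria: each solver iterates only while
    its (toy) error estimator still dominates the overall estimate."""
    x = x0.astype(float).copy()
    for _ in range(maxit):
        r = F(x)
        eta_lin = np.linalg.norm(r)          # stand-in linearization estimator
        if eta_lin <= gamma_lin * eta_disc:  # linearization error negligible
            break
        A, rhs = J(x), -r
        d = np.zeros_like(x)
        for _ in range(1000):                # inexact linear solve (Richardson;
            res = rhs - A @ d                # assumes eigenvalues of A in (0, 4))
            if np.linalg.norm(res) <= gamma_alg * eta_lin:  # algebraic estimator
                break
            d = d + 0.5 * res
        x = x + d
    return x

# Linear toy problem: F(x) = A x - b with s.p.d. A, exact solution (1, 1).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])
x_star = adaptive_inexact_newton(lambda x: A @ x - b, lambda x: A,
                                 np.zeros(2), eta_disc=1e-8)
```

Each inner solve is stopped as soon as the algebraic residual is ten times smaller than the nonlinear residual, so no accuracy is wasted on over-solving the linear system.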

ano.nymous@ccsd.cnrs.fr.invalid (Jad Dabaghi), Jad Dabaghi

As part of a multi-year program, survey campaigns were carried out on both sides of the Petit-Saint-Bernard pass (2188 m, western Alps), between 750 and 3000 m in altitude. The working method set aside surface prospection in favor of numerous manual test pits, located in selected topographic contexts and dug down to the base of the Holocene fills. The results document, over the long term, the evolution of the pedo-sedimentary dynamics and the human use of the different altitude belts. The significance of the collected archaeological data is discussed with respect to the state of knowledge in a comparison zone covering the neighboring valleys of the western Alps, to existing settlement models, and to the taphonomic indications provided by the pedo-sedimentary study. A program of complementary analyses intended to clarify the context, taphonomy and functional status

ano.nymous@ccsd.cnrs.fr.invalid (Pierre-Jérôme Rey), Pierre-Jérôme Rey

This paper introduces a new approach for forecasting solar radiation series at a given station at very short time scales. We build a multivariate model using only a few stations (three), separated by irregular distances ranging from 26 km to 56 km. The proposed model is a spatio-temporal vector autoregressive (VAR) model specifically designed for the analysis of spatially sparse spatio-temporal data. It differs from classic linear models in its use of spatial and temporal parameters, the available predictors being the lagged values at each station. A spatial structure of stations is defined by the sequential introduction of predictors into the model. Moreover, an iterative strategy selects the necessary stations, removing uninformative predictors, and also selects the optimal order p. We study the performance of this model; the error metric, the relative root mean squared error (rRMSE), is reported at different short time scales. Finally, we compare the results of our model with the simple and well-known persistence model and with those found in the literature.
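As a simplified illustration of the model class (not the paper's sparse spatio-temporal selection procedure), a VAR(p) on the stations' lagged values can be fitted by ordinary least squares; all function names below are hypothetical:

```python
import numpy as np

def fit_var(data, p):
    """Least-squares fit of a VAR(p); data is (T, k), one column per station.
    Returns the intercept c and the coefficient matrices A_1..A_p."""
    T, k = data.shape
    X = np.array([np.concatenate([[1.0]] + [data[t - i] for i in range(1, p + 1)])
                  for t in range(p, T)])
    Y = data[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B[0], B[1:].T.reshape(k, p, k)   # A[:, i-1, :] is A_i

def forecast(data, c, A):
    """One-step-ahead forecast y_{t+1} = c + sum_i A_i y_{t+1-i}."""
    k, p, _ = A.shape
    return c + sum(A[:, i, :] @ data[-1 - i] for i in range(p))
```

The paper's model would additionally prune the coefficient matrices station by station and select p by an iterative strategy.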

ano.nymous@ccsd.cnrs.fr.invalid (Maïna André), Maïna André

In this work, we develop an a-posteriori-steered algorithm for a compositional two-phase flow with exchange of components between the phases in porous media. As a model problem, we choose the two-phase liquid-gas flow with appearance and disappearance of the gas phase, formulated as a system of nonlinear evolutive partial differential equations with nonlinear complementarity constraints. The discretization of our model is based on the backward Euler scheme in time and the finite volume scheme in space. The resulting nonlinear system is solved via an inexact semismooth Newton method. The key ingredient of the a posteriori analysis is the discretization, linearization, and algebraic flux reconstructions, which allow us to devise estimators for each error component. These make it possible to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver whenever the corresponding error components no longer affect the overall error significantly. Numerical experiments are performed using the Newton-min algorithm as well as the Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative linear solver to show the efficiency of the proposed adaptive method.
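The Fischer-Burmeister variant of the semismooth Newton method replaces the min-function by the C-function φ(a, b) = a + b − sqrt(a² + b²), which vanishes exactly when a ≥ 0, b ≥ 0 and ab = 0. A toy dense sketch (not the finite volume system of the paper; names illustrative):

```python
import numpy as np

def newton_fb(F, J, x0, tol=1e-12, maxit=100):
    """Semismooth Newton on Phi(x) = phi(x, F(x)) with the
    Fischer-Burmeister function phi(a, b) = a + b - sqrt(a^2 + b^2)."""
    x = x0.astype(float).copy()
    for _ in range(maxit):
        Fx = F(x)
        r = np.sqrt(x * x + Fx * Fx)
        Phi = x + Fx - r
        if np.linalg.norm(Phi) < tol:
            break
        r = np.where(r == 0, 1.0, r)         # avoid dividing by zero at the kink
        Da, Db = 1.0 - x / r, 1.0 - Fx / r   # entries of the generalized Jacobian
        G = np.diag(Da) + Db[:, None] * J(x)
        x = x - np.linalg.solve(G, Phi)
    return x

# Same toy complementarity problem as for Newton-min: F(x) = x - b, solution max(b, 0).
b = np.array([1.0, -2.0])
x_star = newton_fb(lambda x: x - b, lambda x: np.eye(2), np.array([0.5, 0.5]))
```

Unlike the min-function, the Fischer-Burmeister reformulation is differentiable away from the origin (a, b) = (0, 0), which often smooths the iteration history.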

ano.nymous@ccsd.cnrs.fr.invalid (Ibtihel Ben Gharbia), Ibtihel Ben Gharbia

We discuss the use of a continuous-time jump Markov process as the driving process in stochastic differential systems. Results are given on the estimation of the infinitesimal generator of the jump Markov process, when considering sample paths on random time intervals. These results are then applied within the framework of stochastic dynamical systems modeling and estimation. Numerical examples are given to illustrate both consistency and asymptotic normality of the estimator of the infinitesimal generator of the driving process. We apply these results to fatigue crack growth modeling as an example of a complex dynamical system, with applications to reliability analysis.
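For a path of a jump Markov process observed continuously on a time window, the maximum likelihood estimator of the infinitesimal generator has a simple closed form: each off-diagonal entry is the number of observed i → j jumps divided by the total holding time in state i. A minimal sketch of that classical estimator (the paper's setting, with random observation intervals, is more general):

```python
import numpy as np

def estimate_generator(states, times, n_states):
    """MLE of the generator Q from one observed trajectory.
    states[k] is occupied on [times[k], times[k+1]); len(times) = len(states) + 1."""
    N = np.zeros((n_states, n_states))   # jump counts
    R = np.zeros(n_states)               # holding times
    for k, s in enumerate(states):
        R[s] += times[k + 1] - times[k]
    for k in range(len(states) - 1):
        N[states[k], states[k + 1]] += 1
    Q = N / np.where(R > 0, R, 1.0)[:, None]
    return Q - np.diag(Q.sum(axis=1))    # rows of a generator sum to zero
```

Consistency and asymptotic normality of this estimator, as studied in the paper, follow as the observation window grows.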

ano.nymous@ccsd.cnrs.fr.invalid (Julien Chiquet), Julien Chiquet

The γ-irradiation of a biphasic system composed of tri-n-butylphosphate in hydrogenated tetrapropylene (TPH) in contact with palladium(II) nitrate in nitric acid aqueous solution led to the formation of two precipitates. A thorough characterization of these solids was performed by means of various analytical techniques, including X-ray diffraction (XRD), thermal gravimetric analysis coupled with differential scanning calorimetry (TGA-DSC), X-ray photoelectron spectroscopy (XPS), infrared (IR), Raman and nuclear magnetic resonance (NMR) spectroscopy, and electrospray ionization mass spectrometry (ESI-MS). Investigations showed that the two precipitates exhibit quite similar structures. They are composed of at least two compounds: palladium cyanide and palladium species containing ammonium, phosphorus or carbonyl groups. Several mechanisms are proposed to explain the formation of Pd(CN)2.

ano.nymous@ccsd.cnrs.fr.invalid (Bénédicte Simon), Bénédicte Simon

Electron probe microanalysis (EPMA) makes it possible to quantify, with great accuracy, the elemental concentrations of samples of unknown composition. It allows, for example, quantifying the actinides present in fresh or irradiated nuclear fuels, supporting the management of nuclear waste, or dating certain rocks. Unfortunately, these quantitative analyses are not always feasible because reference standards are unavailable for some actinides. To overcome this difficulty, a so-called "standardless" analysis method can be employed by means of virtual standards. The latter are obtained from empirical formulas or from calculations based on theoretical models. However, these calculations require the knowledge of physical parameters that are generally poorly known, as is the case for X-ray production cross sections. Accurate knowledge of these cross sections is required in many applications, such as particle transport codes and Monte Carlo simulations. These computational codes are widely used in medicine, particularly in medical imaging and in electron-beam treatments. In astronomy, these data are used in simulations to predict the composition of stars and galactic clouds as well as the formation of planetary systems. In this work, the production cross sections of the L and M lines of lead, thorium and uranium were measured by electron impact on self-supporting thin targets with thicknesses ranging from 0.2 to 8 nm. The experimental results were compared with theoretical predictions of ionization cross sections calculated within the distorted-wave Born approximation (DWBA) and with the predictions of analytical formulas used in practical applications.
The ionization cross sections were converted into X-ray production cross sections using atomic relaxation parameters taken from the literature. The theoretical results of the DWBA model are in excellent agreement with the experimental results. This confirms the predictions of this model and validates its use for the calculation of virtual standards. The predictions of this model were implemented in the Monte Carlo code PENELOPE in order to calculate the X-ray intensity produced by pure actinide standards. The calculations were performed for elements with atomic numbers 89 ≤ Z ≤ 99 and for accelerating voltages ranging from the ionization threshold up to 40 kV, in steps of 0.5 kV. For practical use, the intensities calculated for the most intense L and M lines were gathered in a database. The predictions of the virtual standards thus obtained were compared with measurements performed on samples of known composition (U, UO2, ThO2, ThF4, PuO2…) and with data acquired during previous measurement campaigns. The quantification of actinides using these virtual standards showed good agreement with the expected results. This confirms the reliability of the virtual standards developed and demonstrates that the quantification of actinides by electron probe microanalysis is achievable without actinide standards and with a good level of confidence.

ano.nymous@ccsd.cnrs.fr.invalid (Aurélien Moy), Aurélien Moy

One of the important challenges for the decommissioning of the damaged reactors of the Fukushima Daiichi Nuclear Power Plant is the safe retrieval of the fuel debris, or corium. It is especially essential to investigate the cutting conditions in air and under water at different water levels. Among cutting techniques, the laser is well suited to cutting a material such as corium, which has an irregular shape and heterogeneous composition. A French consortium (ONET Technologies, CEA and IRSN) is being subsidized by the Japanese government to carry out R&D related to the laser cutting of Fukushima Daiichi fuel debris and to dust collection technology. Debris simulants have been manufactured on the PLINIUS platform to represent molten core concrete interaction as estimated from Fukushima Daiichi calculations. In these simulants, uranium is replaced by hafnium and the major fission products are replaced by their natural isotopes. During laser cutting experiments in the DELIA facility, aerosols were collected using filters and impactors and then analyzed. Both chemical analyses (dissolution + ICP-MS and ICP-AES) and microscopic analyses (SEM-EDS) are presented and discussed. These data provide insights into the expected dust releases during cutting and can be converted into radioactivity estimates. They have also been successfully compared to thermodynamic calculations with the NUCLEA database.

ano.nymous@ccsd.cnrs.fr.invalid (Christophe Journeau), Christophe Journeau

In this work we present a novel discrete fracture model for single-phase Darcy flow in porous media with fractures of co-dimension one, which introduces an additional unknown at the fracture interface. Inspired by the fictitious domain method, this Lagrange multiplier couples the fracture and matrix domains and represents a local exchange of fluid. The multipliers naturally impose the equality of the pressures at the fracture interface. The model is thus appropriate for domains with fractures of higher permeability than that of the surrounding bulk domain. In particular, the novel approach allows for independent, regular meshing of the fracture and matrix domains and therefore avoids the generation of small elements. We show existence and uniqueness of the weak solution of the continuous primal formulation. Moreover, we discuss the discrete inf-sup condition of two different finite element formulations. Several numerical examples verify the accuracy and convergence of the proposed method.

ano.nymous@ccsd.cnrs.fr.invalid (Markus Köppel), Markus Köppel

In this work we introduce a stabilized numerical method for a multi-dimensional discrete fracture model (DFM) for single-phase Darcy flow in fractured porous media. In the model, introduced in an earlier work, flow in the (n − 1)-dimensional fracture domain is coupled with that in the n-dimensional bulk or matrix domain by the use of Lagrange multipliers. The model thus permits a finite element discretization in which the meshes in the fracture and matrix domains are independent, so that irregular meshing, and in particular the generation of small elements, can be avoided. The numerical formulation is a saddle-point problem based on a primal, variational formulation for flow in the matrix domain and in the fracture system. In this paper we add to it a consistent stabilizing term which penalizes discontinuities in the Lagrange multipliers. For this penalized scheme we show stability and prove convergence. With numerical experiments we analyze the performance of the method for various choices of the penalization parameter and compare with other numerical DFMs.
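Algebraically, the multiplier coupling yields a saddle-point system of the form [A Bᵀ; B 0]. A generic toy solve of that structure (not the paper's DFM discretization; names illustrative):

```python
import numpy as np

def solve_saddle_point(A, B, b, g):
    """Solve the KKT system [[A, B^T], [B, 0]] [x; lam] = [b; g] that arises
    when the constraint B x = g is enforced with Lagrange multipliers lam."""
    n, m = A.shape[0], B.shape[0]
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([b, g]))
    return sol[:n], sol[n:]

# Toy: minimize 1/2 x^T A x - b^T x subject to x_1 = 0.
A = 2.0 * np.eye(3)
B = np.array([[1.0, 0.0, 0.0]])
x, lam = solve_saddle_point(A, B, np.array([2.0, 4.0, 6.0]), np.array([0.0]))
```

Well-posedness of such discrete systems hinges on a discrete inf-sup (LBB) condition on B, which is precisely what a stabilizing penalization of the multipliers can restore when the multiplier space is too rich.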

ano.nymous@ccsd.cnrs.fr.invalid (Markus Köppel), Markus Köppel

The purpose is a finite element approximation of the heat diffusion problem in composite media, with non-linear contact resistance at the interfaces. As already explained in [Journal of Scientific Computing, {\bf 63}, 478-501 (2015)], hybrid dual formulations are well suited to complicated composite geometries and provide tractable approaches to express the temperature jumps variationally. The finite element spaces are standard. Interface contributions are added to the variational problem to account for the contact resistance, which is an important advantage for developers of computing codes. We carry out the analysis of the non-linear heat problem for a large range of contact resistances and investigate its discretization by hybrid dual finite element methods. Numerical experiments are presented at the end to support the theoretical results.

ano.nymous@ccsd.cnrs.fr.invalid (F Ben Belgacem), F Ben Belgacem

We introduce a new algorithm of proper generalized decomposition (PGD) for parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation---in the mean parametric norm associated to the elliptic operator---of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the proper orthogonal decomposition (POD) subspaces, except that in our case the norm is parameter-dependent. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step, and we prove that the partial sums converge to the continuous solution in the mean parametric elliptic norm. We show that the standard PGD for the considered parametric problem is strongly related to the deflation algorithm introduced in this paper. This opens the possibility of computing the PGD expansion by directly solving the optimization problems that yield the optimal subspaces.
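In the parameter-independent case the optimal subspaces reduce to classical POD, which is computable from an SVD of a snapshot matrix (Eckart-Young). A minimal illustration of that special case (the parameter-dependent norm of the paper admits no such direct spectral characterization):

```python
import numpy as np

def pod_basis(S, m):
    """Orthonormal basis of the best m-dimensional subspace, in the Euclidean
    norm, for the snapshots stored as the columns of S."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :m]

def project(S, U):
    """Orthogonal projection of the snapshots onto span(U)."""
    return U @ (U.T @ S)

# A rank-2 snapshot matrix is reproduced exactly by its 2-dimensional POD space.
S = (np.outer([1.0, 0.0, 0.0], [1.0, 2.0, 3.0])
     + np.outer([0.0, 1.0, 0.0], [4.0, 5.0, 6.0]))
U2 = pod_basis(S, 2)
```

The deflation algorithm of the paper builds its approximating series subspace by subspace instead of from one global spectral problem.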

ano.nymous@ccsd.cnrs.fr.invalid (M. Azaïez), M. Azaïez

We introduce in this paper a technique for the reduced-order approximation of parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation, in the mean with respect to the parameter and in the quadratic norm associated to the elliptic operator, of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the Proper Orthogonal Decomposition (POD) subspaces, except that in our case the norm is parameter-dependent, so that the optimal subspaces cannot be characterized by means of a spectral problem. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step. We prove that the partial sums converge to the continuous solution in the mean quadratic elliptic norm.

ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaiez), Mejdi Azaiez

The fast multipole method is an efficient technique to accelerate the solution of large-scale 3D scattering problems with boundary integral equations. However, the fast multipole accelerated boundary element method (FM-BEM) is intrinsically based on an iterative solver, and it has been shown that the number of iterations can significantly hinder the overall efficiency of the FM-BEM. The derivation of robust preconditioners is therefore essential to increase the size of the problems that can be considered. The main constraint in the context of the FM-BEM is that the complete system is never assembled, in order to reduce computational times and memory requirements. Analytic preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of the discretization. The main contribution of this paper is to combine an approximate adjoint Dirichlet-to-Neumann (DtN) map, used as an analytic preconditioner, with an FM-BEM solver to treat Dirichlet exterior scattering problems in 3D elasticity. The approximations of the adjoint DtN map are derived using tools proposed in [40]. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). We provide various numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be completely independent of the number of degrees of freedom and of the frequency for convex obstacles.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

The main purpose of this paper is to investigate the strong approximation of the $p$-fold integrated empirical process, $p$ being a fixed positive integer. More precisely, we obtain the exact rate of the approximations by a sequence of weighted Brownian bridges and a weighted Kiefer process. Our arguments are based in part on results of Komlós, Major and Tusnády (1975). Applications include the two-sample testing procedures together with the change-point problems. We also consider the strong approximation of integrated empirical processes when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial sum process representation of integrated empirical processes.
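For orientation, the one-fold integrated uniform empirical process can be computed on a grid by integrating the empirical process α_n(t) = √n (F_n(t) − t); a small sketch with trapezoidal quadrature (illustrative only; the paper treats the general p-fold case):

```python
import numpy as np

def integrated_empirical_process(sample, t_grid):
    """Empirical process alpha_n(t) = sqrt(n)(F_n(t) - t) of a [0, 1]-valued
    sample, and its running integral on t_grid (trapezoidal rule)."""
    n = len(sample)
    Fn = np.searchsorted(np.sort(sample), t_grid, side="right") / n
    alpha = np.sqrt(n) * (Fn - t_grid)
    steps = (alpha[1:] + alpha[:-1]) / 2 * np.diff(t_grid)
    return alpha, np.concatenate([[0.0], np.cumsum(steps)])
```

The strong approximation results above describe how closely such trajectories track weighted Brownian bridges as n grows.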

ano.nymous@ccsd.cnrs.fr.invalid (Sergio Alvarez-Andrade), Sergio Alvarez-Andrade

We derive rates of contraction of posterior distributions on non-parametric models resulting from sieve priors. The aim of the study was to provide general conditions to get posterior rates when the parameter space has a general structure, and rate adaptation when the parameter is, for example, a Sobolev class. The conditions employed, although standard in the literature, are combined in a different way. The results are applied to density, regression, nonlinear autoregression and Gaussian white noise models. In the latter we have also considered a loss function which is different from the usual l2 norm, namely the pointwise loss. In this case it is possible to prove that the adaptive Bayesian approach for the l2 loss is strongly suboptimal and we provide a lower bound on the rate.

ano.nymous@ccsd.cnrs.fr.invalid (Julyan Arbel), Julyan Arbel

It has been proven that the knowledge of an accurate approximation of the Dirichlet-to-Neumann (DtN) map is useful for a large range of applications in wave scattering problems. We are concerned in this paper with the construction of an approximate local DtN operator for time-harmonic elastic waves. The main contributions are the following. First, we derive exact operators using Fourier analysis in the case of an elastic half-space. These results are then extended to a general three-dimensional smooth closed surface by using a local tangent plane approximation. Next, a regularization step improves the accuracy of the approximate DtN operators and a localization process is proposed. Finally, a first application is presented in the context of the On-Surface Radiation Conditions method. The efficiency of the approach is investigated for various obstacle geometries at high frequencies.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

The main purpose of this paper is to investigate the strong approximation of the integrated empirical process. More precisely, we obtain the exact rate of the approximations by a sequence of weighted Brownian bridges and a weighted Kiefer process. Our arguments are based in part on the results of Komlós et al. (1975). Applications include the two-sample testing procedures together with the change-point problems. We also consider the strong approximation of the integrated empirical process when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial sum process representation of the integrated empirical process. Reference: Komlós, J., Major, P. and Tusnády, G. (1975). An approximation of partial sums of independent RV's and the sample DF. I. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 32, 111-131.

ano.nymous@ccsd.cnrs.fr.invalid (Sergio Alvarez-Andrade), Sergio Alvarez-Andrade

In recent years, many MAC protocols for wireless sensor networks have been proposed, and most of them are evaluated using the Matlab simulator and/or network simulators (OMNeT++, NS2, etc.). However, most of them have a static behavior, and few network simulations are available for adaptive protocols. In particular, OMNeT++/MiXiM provides few energy-efficient MAC protocols for WSNs (B-MAC and L-MAC) and no adaptive ones. To this end, the TAD-MAC (Traffic Aware Dynamic MAC) protocol has been simulated in OMNeT++ with the MiXiM framework, and implementation details are given in this paper. The simulation results are used to evaluate the performance of TAD-MAC through comparisons with the B-MAC and L-MAC protocols.

ano.nymous@ccsd.cnrs.fr.invalid (Van-Thiep Nguyen), Van-Thiep Nguyen

The Karhunen-Loève decomposition (KLD), or proper orthogonal decomposition (POD), of bivariate functions is revisited in this work. We first investigate the truncation error for regular functions, trying to improve and sharpen the bounds found in the literature. It turns out, however, that (KL)-series expansions are more sensitive to how well the fields to be approximated can be represented by a small sum of products of functions of separated variables. We consider this very issue for some interesting fields that are solutions of partial differential equations, such as the transient heat problem and Poisson's equation. The main tool for stating approximation bounds is linear algebra. We show how the singular value decomposition underlying the (KL)-expansion is connected to the spectrum of certain Gram matrices. Deriving estimates on the truncation error is thus strongly tied to the spectral properties of these Gram matrices, which are structured matrices with low displacement rank.
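The connection invoked above can be checked numerically: the squared singular values of a discretized field F are the eigenvalues of the Gram matrix F Fᵀ, so truncation-error estimates for the (KL)-expansion translate into spectral estimates for the Gram matrix. A minimal sketch:

```python
import numpy as np

def kl_singular_values(F):
    """Singular values of F, and the same values recovered as the square
    roots of the eigenvalues of the Gram matrix F F^T."""
    s = np.linalg.svd(F, compute_uv=False)
    lam = np.linalg.eigvalsh(F @ F.T)[::-1]          # descending order
    return s, np.sqrt(np.clip(lam, 0.0, None))
```

The decay of these values governs how few separated-variable products suffice for a given accuracy.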

ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez

The inverse problem under investigation consists of boundary data completion in a deoxygenation-reaeration model in stream waters. The one-dimensional transport model we deal with is based on the one introduced by Streeter and Phelps, augmented by Taylor dispersion terms. The missing boundary condition is the load and/or the flux of the biochemical oxygen demand indicator at the outfall point. The counterpart is the availability of two boundary conditions on the dissolved oxygen tracer at the same point. The major consequence of these non-standard boundary conditions is that the dispersive transport equations for the two oxygen tracers are strongly coupled, and the resulting system becomes ill-posed. The main purpose is a finite element space discretization of the variational problem put under a non-symmetric mixed form. Combining analytical calculations, numerical computations and theoretical justifications, we try to elucidate the characteristics related to the ill-posedness of this dynamical data completion problem and to understand its mathematical structure.

ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem

We consider an equation that models the diffusion of temperature in a graphite foam containing salt capsules. The transition conditions on the temperature between the graphite and the salt must be handled correctly. We carry out the analysis of this model and prove that it is well-posed. We then propose a finite element discretization and perform the a priori analysis of the discrete problem. Some numerical experiments confirm the interest of this approach.

ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem

We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most physical models used by engineers derive from various additions and corrections to the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor's dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. Its particularity is that both Neumann and Dirichlet boundary conditions are available for the DO tracer while the BOD density is free of any condition. Indeed, for real-life concerns, measurements of the dissolved oxygen are easy to obtain and to store, whereas collecting data on the biochemical oxygen demand is a sensitive task that turns out to be a long-time process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem worth studying for its own interest, but it can also be a mandatory step in other applications, such as identifying the location of pollution sources. The non-standard boundary conditions generate two difficulties on mathematical and computational grounds: they set up a severe coupling between both equations, and they cause ill-posedness of the data reconstruction problem. Existence and stability fail. Identifiability is therefore the only positive result one can seek; it is the central purpose of the paper. We end with some computational experiments to assess the capability of the mixed finite element method in recovering the missing data (on the biochemical oxygen demand).
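The 1925 Streeter-Phelps system referred to above, without the Taylor dispersion added in the paper, reduces to two ODEs with a classical closed-form solution (standard textbook formulas, given here only to fix ideas):

```python
import numpy as np

def streeter_phelps(t, L0, D0, k1, k2):
    """Classical Streeter-Phelps solution: BOD obeys L' = -k1 L and the
    oxygen deficit obeys D' = k1 L - k2 D, with initial load L0 and
    initial deficit D0 (assumes k1 != k2)."""
    L = L0 * np.exp(-k1 * t)
    D = (k1 * L0 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
         + D0 * np.exp(-k2 * t))
    return L, D
```

Adding dispersion turns these ODEs into the coupled reaction-dispersion PDEs whose non-standard boundary conditions cause the ill-posedness discussed above.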

ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez

The direct electrochemical reduction of UO2 solid pellets was carried out in LiF-CaF2 (+ 2 mass % Li2O) at 850 °C. An inert gold anode was used instead of the usual reactive sacrificial carbon anode; in this case, oxidation of the oxide ions present in the melt yields O2 gas evolution at the anode. Electrochemical characterizations of UO2 pellets were performed by linear sweep voltammetry at 10 mV/s, and reduction waves associated with the direct reduction of the oxide were observed at a potential 150 mV more positive than the solvent reduction. Subsequently, galvanostatic electrolysis runs were carried out and the products were characterized by SEM-EDX, EPMA/WDS and XRD. In one of the runs, the uranium oxide was partially reduced and three phases were observed: unreduced UO2 in the centre, pure metallic uranium in the external layer, and an intermediate phase representing the initial stage of reduction taking place at the grain boundaries. In another run, the UO2 sample was fully reduced. Due to oxygen removal, the U matrix had a typical coral-like structure, characteristic of the pattern observed after the electroreduction of solid oxides.

ano.nymous@ccsd.cnrs.fr.invalid (Mathieu Gibilaro), Mathieu Gibilaro

This paper considers two different methods for analyzing the identifiability of nonlinear controlled dynamical systems. The corresponding identifiability definitions are not equivalent: one is based on the construction of an input-output ideal, the other on the similarity transformation theorem. Our aim is to develop algorithms which give identifiability results for both approaches. Differential algebra theory makes such a project possible; in order to state these algorithms, new results in differential algebra must be proved. The algorithms are then implemented in a symbolic computation language.

ano.nymous@ccsd.cnrs.fr.invalid (Lilianne Denis-Vidal), Lilianne Denis-Vidal

In this paper, we study the uniqueness of solutions for diagonal hyperbolic systems in one space dimension. We present two uniqueness results. The first one is a global existence and uniqueness result of a continuous solution for strictly hyperbolic systems. The second one is a global existence and uniqueness result of a Lipschitz solution for hyperbolic systems not necessarily strictly hyperbolic. An application of these two results is shown in the case of the one-dimensional isentropic gas dynamics.

ano.nymous@ccsd.cnrs.fr.invalid (Ahmad El Hajj), Ahmad El Hajj

Enhancing the safety of high-temperature reactors (HTRs) is based on the quality of the fuel particles, requiring good knowledge of the microstructure of the four-layer particles designed to retain the fission products during irradiation and under accidental conditions. This paper focuses on the intensive research work performed to characterize the micro- and nanostructure of each unirradiated layer (silicon carbide and pyrocarbon coatings). The analytic expertise developed in the 1970s has been recovered and innovative advanced characterization methods have been developed to improve the process parameters and to ensure the production reproducibility of coatings.

ano.nymous@ccsd.cnrs.fr.invalid (D. Helary), D. Helary

Electron back-scattering diffraction (EBSD) can be successfully performed on SiC coatings for HTR fuel particles. EBSD grain maps obtained from thick and thin unirradiated samples are presented, along with pole figures showing textures and a chart showing the distribution of grain aspect ratios. This information is of great interest, and contributes to improving the process parameters and ensuring the reproducibility of coatings.

ano.nymous@ccsd.cnrs.fr.invalid (D. Helary), D. Helary

We present a first version of a software dedicated to an application of a classical nonlinear control theory problem to the study of compartmental models in biology. The software is being developed over a new free computer algebra library dedicated to differential and algebraic elimination.

ano.nymous@ccsd.cnrs.fr.invalid (François Boulier), François Boulier

Recently, several authors have considered finite mixture models with semi-/non-parametric component distributions. Identifiability of such model parameters is generally not obvious, and when it holds, inference methods tend to be specific to the mixture model under consideration. In this paper we propose a generalization of the EM algorithm to semiparametric mixture models. Our approach is methodological and can be applied to a wide class of semiparametric mixture models. The behavior of the proposed EM-type estimators is studied numerically through several Monte Carlo experiments and by comparison with alternative methods existing in the literature. In addition to these numerical experiments, we provide applications to real data showing that our estimation method behaves well and is fast and easy to implement.
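In the fully parametric special case of two Gaussian components with a known common variance, the EM iteration that the paper generalizes takes the familiar closed form sketched below (a toy sketch, not the semiparametric algorithm):

```python
import numpy as np

def em_two_gaussians(x, mu_init, sigma=1.0, pi=0.5, n_iter=50):
    """EM for a two-component Gaussian mixture with fixed common variance:
    alternate posterior responsibilities (E-step) and weighted means (M-step)."""
    mu = np.array(mu_init, dtype=float)
    for _ in range(n_iter):
        w = np.array([pi, 1.0 - pi]) * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        r = w / w.sum(axis=1, keepdims=True)   # E-step: responsibilities
        pi = r[:, 0].mean()                    # M-step: mixing proportion
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)  # M-step: means
    return pi, mu

# Two well-separated clusters around 0 and 10.
x = np.array([-0.1, 0.0, 0.1, 9.9, 10.0, 10.1])
pi_hat, mu_hat = em_two_gaussians(x, mu_init=[1.0, 9.0])
```

The semiparametric version of the paper replaces the Gaussian component densities by nonparametric estimates updated within the same alternating scheme.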

ano.nymous@ccsd.cnrs.fr.invalid (Laurent Bordes), Laurent Bordes