Time series anomaly detection (TSAD) focuses on identifying whether observations in streaming data deviate significantly from normal patterns. With the prevalence of connected devices, anomaly detection on time series has become paramount, as it enables real-time monitoring and early detection of irregular behaviors across various application domains. In this work, we introduce PatchTrAD, a Patch-based Transformer model for time series anomaly detection. Our approach leverages a Transformer encoder along with the use of patches under a reconstruction-based framework for anomaly detection. Empirical evaluations on multiple benchmark datasets show that PatchTrAD is on par, in terms of detection performance, with state-of-the-art deep learning models for anomaly detection while being time efficient during inference.
Samy-Melwan Vilhes
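To make the reconstruction-based setup concrete, here is a minimal sketch of a patch-based Transformer anomaly scorer. It illustrates the general technique only, not the authors' PatchTrAD implementation; every module name and hyperparameter (patch_len, d_model, and so on) is an assumption.

```python
import torch
import torch.nn as nn

class PatchReconstructor(nn.Module):
    """Toy patch-based Transformer encoder for reconstruction-based TSAD (illustrative)."""
    def __init__(self, patch_len=16, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)   # one token per patch
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)    # token -> reconstructed patch

    def forward(self, x):                            # x: (batch, seq_len)
        patches = x.unfold(1, self.patch_len, self.patch_len)  # (batch, n_patches, patch_len)
        recon = self.head(self.encoder(self.embed(patches)))
        return recon, patches

def anomaly_score(model, x):
    """Mean squared reconstruction error per window; higher means more anomalous."""
    recon, patches = model(x)
    return ((recon - patches) ** 2).mean(dim=(1, 2))

scores = anomaly_score(PatchReconstructor(), torch.randn(8, 128))  # 8 windows of length 128
```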
Sparsified learning is ubiquitous in many machine learning tasks. It aims to regularize the objective function by adding a penalization term that encodes the constraints imposed on the learned parameters. This paper considers the problem of learning heavy-tailed locally stationary processes (LSPs). We develop a flexible and robust sparse learning framework capable of handling heavy-tailed data with locally stationary behavior and propose concentration inequalities. We further provide non-asymptotic oracle inequalities for different types of sparsity, including $\ell_1$-norm and total variation penalization for the least squares loss.
Yingjie Wang
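For reference, the two penalties mentioned above take the following standard forms in a least-squares objective (generic notation, assumed rather than taken from the paper):

$$\hat{\theta} \in \arg\min_{\theta}\; \frac{1}{n}\sum_{t=1}^{n}\big(Y_t - \langle \theta, X_t\rangle\big)^2 + \lambda\,\mathrm{pen}(\theta), \qquad \mathrm{pen}(\theta) = \|\theta\|_1 \quad\text{or}\quad \mathrm{pen}(\theta) = \sum_{j=1}^{p-1}\big|\theta_{j+1}-\theta_j\big|.$$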
Adversarial learning baselines for domain adaptation (DA) approaches to semantic segmentation remain underexplored in the semi-supervised setting. These baselines involve the available labeled target samples only in the supervision loss. In this work, we propose to enhance their usefulness for both semantic segmentation and the single domain-classifier neural networks. We design new training objective losses for the cases where labeled target data behave either as source samples or as genuine target samples. The underlying rationale is that treating the set of labeled target samples as part of the source domain helps reduce the domain discrepancy and, hence, improves the contribution of the adversarial loss. To support our approach, we consider a complementary method that mixes source and labeled target data, then applies the same adaptation process. We further propose an unsupervised, entropy-based selection procedure to optimize the choice of labeled target samples for adaptation. We illustrate our findings through extensive experiments on the GTA5, SYNTHIA, and Cityscapes benchmarks. The empirical evaluation highlights the competitive performance of our proposed approach.
Marwa Kechaou
Locally stationary processes (LSPs) provide a robust framework for modeling time-varying phenomena, allowing for smooth variations in statistical properties such as mean and variance over time. In this paper, we address the estimation of the conditional probability distribution of LSPs using Nadaraya-Watson (NW) type estimators. The NW estimator approximates the conditional distribution of a target variable given covariates through kernel smoothing techniques. We establish the convergence rate of the NW conditional probability estimator for LSPs in the univariate setting under the Wasserstein distance and extend this analysis to the multivariate case using the sliced Wasserstein distance. Theoretical results are supported by numerical experiments on both synthetic and real-world datasets, demonstrating the practical usefulness of the proposed estimators.
Jan Nino G. Tinio
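For orientation, a standard kernel-weighted form of the NW conditional distribution estimator, here with an extra kernel in rescaled time as is typical for locally stationary data (the notation is ours, not necessarily the paper's):

$$\widehat{F}(y \mid x, u) \;=\; \frac{\sum_{t=1}^{T} K_1\!\Big(\frac{u - t/T}{h_1}\Big)\, K_2\!\Big(\frac{x - X_{t,T}}{h_2}\Big)\, \mathbf{1}\{Y_{t,T} \le y\}}{\sum_{t=1}^{T} K_1\!\Big(\frac{u - t/T}{h_1}\Big)\, K_2\!\Big(\frac{x - X_{t,T}}{h_2}\Big)}.$$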
Functional time series (FTS) extend traditional methodologies to accommodate data observed as functions or curves. A significant challenge in FTS consists of accurately capturing the time-dependence structure, especially in the presence of time-varying covariates. When analyzing time series with time-varying statistical properties, locally stationary time series (LSTS) provide a robust framework that allows smooth changes in mean and variance over time. This work investigates the Nadaraya-Watson (NW) estimation procedure for the conditional distribution of locally stationary functional time series (LSFTS), where the covariates reside in a semi-metric space. Under small-ball probability and mixing conditions, we establish convergence rates of the NW estimator for LSFTS with respect to the Wasserstein distance. The finite-sample performance of the model and the estimation method is illustrated through extensive numerical experiments on both simulated and real functional data.
Jan Nino G. Tinio
Under limited available resources, epidemic-mitigation strategies such as random testing and contact tracing become inefficient. Here, we propose to allocate the resources accurately by computing over time an individual risk of infection based on the partial observation of the epidemic spreading on a contact network; this risk is defined as the probability of getting infected from any possible transmission chain of length up to two, originating from recently detected individuals. To evaluate the performance of our method and the effects of some key parameters, we perform comparative simulated experiments using data generated by an agent-based model.
Gabriela Bayolo Soler
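Assuming independent per-edge transmission probabilities $p_{a \to b}$, one plausible way to write such a risk for individual $i$, accumulated over chains of length one and two from the set $\mathcal{D}$ of recently detected individuals, is the following sketch (the paper's exact definition may differ):

$$r_i \;=\; 1 \;-\; \prod_{j \in \mathcal{D}} \big(1 - p_{j\to i}\big) \prod_{j \in \mathcal{D}}\ \prod_{k \neq i,\; k \notin \mathcal{D}} \big(1 - p_{j\to k}\, p_{k\to i}\big).$$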
[...]
J. Rebelo Kornmeier
We introduce binacox, a prognostic method to deal with the problem of detecting multiple cut-points per feature in a multivariate setting where a large number of continuous features are available. The method is based on the Cox model and combines one-hot encoding with the binarsity penalty, which uses total-variation regularization together with an extra linear constraint, and enables feature selection. Original nonasymptotic oracle inequalities for prediction (in terms of Kullback-Leibler divergence) and estimation with a fast rate of convergence are established. The statistical performance of the method is examined in an extensive Monte Carlo simulation study, and then illustrated on three publicly available genetic cancer data sets. On these high-dimensional data sets, our proposed method outperforms state-of-the-art survival models regarding risk prediction in terms of the C-index, with a computing time orders of magnitude faster. In addition, it provides powerful interpretability from a clinical perspective by automatically pinpointing significant cut-points in relevant variables.
Simon Bussy
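For context, the binarsity penalty applies a weighted total-variation term within each block of one-hot-encoded coordinates, together with a linear (sum-to-zero) constraint per block; schematically (notation assumed):

$$\mathrm{bina}(\theta) \;=\; \sum_{j=1}^{p} \Bigg( \sum_{k=2}^{d_j} w_{j,k}\,\big|\theta_{j,k} - \theta_{j,k-1}\big| \;+\; \delta_j\big(\theta_{j,\bullet}\big) \Bigg), \qquad \delta_j(u) = \begin{cases} 0 & \text{if } \sum_{k=1}^{d_j} u_k = 0,\\ +\infty & \text{otherwise}, \end{cases}$$

so that within-block jumps of the coefficient vector pinpoint the detected cut-points.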
We analyze an optimization problem for the conductivity of a composite material arising in a heat-conduction energy storage problem. The model is described by the heat equation, which specifies the heat exchange between two types of materials with different conductive properties, with Dirichlet-Neumann boundary conditions on the external part of the domain and, on the interface, a condition characterized by the resistance coefficient between the highly conductive material and the less conductive one. The main purpose of the paper is to compute a shape gradient of an optimization functional in order to accurately determine the optimal location of the conductive material using a classical shape optimization strategy. We also present some numerical experiments to illustrate the efficiency of the proposed method.
Mejdi Azaiez
Hidden Markov models (HMMs) are used in many different fields to study the dynamics of a process that cannot be directly observed. However, in some cases, the dependency structure of an HMM is too simple to describe the dynamics of the hidden process. In particular, in some applications in finance or ecology, the transition probabilities of the hidden Markov chain may also depend on the current observation. In this work we are interested in extending the classical HMM to this situation. We define a new model, referred to as the Observation-Driven Hidden Markov Model (OD-HMM). We present a complete study of the general non-parametric OD-HMM with discrete and finite state spaces (hidden and observed variables). We study its identifiability. Then we study the consistency of the maximum likelihood estimators. We derive the associated forward-backward equations for the E-step of the EM algorithm. The quality of the procedure is tested on simulated data sets. Finally, we illustrate the use of the model in an application to the study of annual plant dynamics. This work sets the theoretical and practical foundations for a new framework that could be further extended, on one hand to the non-parametric context to simplify estimation, and on the other hand to hidden semi-Markov models for more realism.
Hanna Bacave
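One way to formalize the observation-driven transitions described above (our notation, not necessarily the paper's): with hidden chain $(X_t)$ and observations $(Y_t)$,

$$\Pr\big(X_{t+1} = x' \,\big|\, X_t = x,\, Y_t = y\big) \;=\; q_y(x, x'),$$

so that each observed value $y$ selects its own transition matrix $q_y$, whereas a classical HMM uses a single matrix independent of $y$.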
To obtain the highest confidence in the correctness of numerical simulation programs for the resolution of Partial Differential Equations (PDEs), one has to formalize the mathematical notions and results needed to establish the soundness of the approach. The finite element method is one of the most popular tools for the numerical resolution of a wide range of PDEs. The purpose of this document is to provide the formal proof community with very detailed pen-and-paper proofs for the construction of the Lagrange finite elements of any degree on simplices in positive dimension.
François Clément
This paper investigates statistical inference for weak FARIMA models in the frequency domain. We estimate the asymptotic covariance matrix of the classical Whittle estimator to achieve full inference, thereby addressing an open question posed by Shao, X. (2010). However, computing this matrix numerically is costly. To mitigate this issue, we propose an alternative approach that circumvents trispectrum estimation at the cost of a slower convergence rate. Additionally, we introduce a fast alternative to the Whittle estimator based on a one-step procedure. This method refines an initial Whittle estimator computed on a subsample using a single Fisher scoring step. The resulting estimator retains the same asymptotic properties as the Whittle estimator computed on the full sample while significantly reducing computational time.
Samir Ben-Hariz
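The one-step idea above can be summarized as follows: starting from a Whittle estimator $\hat{\theta}_{n'}$ computed on a subsample of size $n' \ll n$, a single Fisher scoring step on the full-sample Whittle objective gives (schematically, with $S_n$ the score and $\mathcal{I}_n$ the information matrix of the Whittle likelihood; notation assumed):

$$\hat{\theta}_{\mathrm{OS}} \;=\; \hat{\theta}_{n'} \;+\; \mathcal{I}_n\big(\hat{\theta}_{n'}\big)^{-1}\, S_n\big(\hat{\theta}_{n'}\big).$$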
In this paper, we design a posteriori estimates for finite element approximations of nonlinear elliptic problems satisfying strong-monotonicity and Lipschitz-continuity properties. These estimates include, and build on, any iterative linearization method that satisfies a few clearly identified assumptions; this encompasses the Picard, Newton, and Zarantonello linearizations. The estimates give a guaranteed upper bound on an augmented energy difference (reliability with constant one), as well as a lower bound (efficiency up to a generic constant). We prove that for the Zarantonello linearization, this generic constant only depends on the space dimension, the mesh shape regularity, and possibly the approximation polynomial degree in four or more space dimensions, making the estimates robust with respect to the strength of the nonlinearity. For the other linearizations, there is only a computable dependence on the local variation of the linearization operators. We also derive similar estimates for the usual energy difference that depend locally on the nonlinearity and improve the established bound. Numerical experiments illustrate and validate the theoretical results, for both smooth and singular solutions.
André Harnist
U-statistics are a fundamental class of statistics arising in the modeling of quantities of interest defined by responses from multiple subjects. They generalize the empirical mean of a random variable $X$ to sums over every $k$-tuple of distinct observations of $X$. This paper examines a setting for nonparametric statistical curve estimation based on an infinite-dimensional covariate, including Stute's estimator as a special case. In this functional context, the class of “delta sequence estimators” is defined and discussed; it includes both the orthogonal series method and the histogram method. We establish uniform almost-complete convergence rates for these estimators under broad conditions. Moreover, in the same context, we show the uniform almost-complete convergence of the nonparametric inverse probability of censoring weighted (I.P.C.W.) estimators of the regression function under random censorship, which is of interest in its own right. Potential applications include discrimination problems, metric learning, and time series prediction from a continuous set of past values.
Salim Bouzebda
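For reference, the generalization mentioned above is the classical U-statistic of order $k$ with kernel $h$:

$$U_n \;=\; \binom{n}{k}^{-1} \sum_{1 \le i_1 < \cdots < i_k \le n} h\big(X_{i_1}, \ldots, X_{i_k}\big),$$

which reduces to the empirical mean when $k = 1$ and $h(x) = x$.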
[...]
Hanna Bacave
The main goal of this research is to develop a data-driven reduced-order model (ROM) strategy from high-fidelity (HF) simulation data of a full-order model (FOM), in order to predict at lower computational cost the time evolution of solutions of fluid-structure interaction (FSI) problems. For some FSI applications, such as tire/water interaction, the FOM solid model (often chosen as quasi-static) can take far more computational time than the HF fluid one. In this context, for the sake of performance, one may derive a reduced-order model for the structure only and couple a partitioned HF fluid solver with a ROM solid one. In this paper, we present a data-driven partitioned ROM on a study case involving a simplified 1D-1D FSI problem representing an axisymmetric elastic model of an arterial vessel coupled with an incompressible fluid flow. We derive a purely data-driven solid ROM for partitioned FOM fluid-ROM structure coupling and present early results.
Azzeddine Tiba
We deploy artificial neural networks to unfold neutron spectra from measured energy-integrated quantities. Neutron spectra are an important parameter for computing the absorbed dose and the kerma, serving radiation protection as well as nuclear safety. The architectures are inspired by convolutional neural networks: the first is made up of residual transposed-convolution blocks, while the second is a modified version of the U-net architecture. A large and balanced dataset is simulated following "realistic" physical constraints to train the architectures efficiently. Results show highly accurate prediction of neutron spectra ranging from the thermal up to the fast region. The dataset processing, the attention paid to performance metrics, and the hyperparameter optimization underpin the architectures' robustness.
Maha Bouhadida
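As an illustration of the kind of building block named in the first architecture, here is a minimal residual block around a 1D transposed convolution. This is a generic sketch, not the paper's architecture; the channel counts and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class ResidualUpBlock(nn.Module):
    """Upsampling residual block: a transposed convolution doubles the length,
    and a plain-upsampling skip connection is added back in."""
    def __init__(self, channels):
        super().__init__()
        self.up = nn.ConvTranspose1d(channels, channels, kernel_size=4, stride=2, padding=1)
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()
        self.skip = nn.Upsample(scale_factor=2)  # matches the doubled length

    def forward(self, x):                        # x: (batch, channels, length)
        y = self.act(self.up(x))
        return self.act(self.conv(y) + self.skip(x))

out = ResidualUpBlock(32)(torch.randn(4, 32, 50))  # -> shape (4, 32, 100)
```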
Lebesgue integration is a well-known mathematical tool, used for instance in probability theory, real analysis, and numerical mathematics. Thus, its formalization in a proof assistant is to be designed to fit different goals and projects. Once the Lebesgue integral is formally defined and the first lemmas are proved, the question of the convenience of the formalization naturally arises. To check it, a useful extension is Tonelli's theorem, stating that the (double) integral of a nonnegative measurable function of two variables can be computed by iterated integrals, and allowing to switch the order of integration. This article describes the formal definition and proof in Coq of product sigma-algebras, product measures and their uniqueness, the construction of iterated integrals, up to Tonelli's theorem. We also advertise the Lebesgue induction principle provided by an inductive type for nonnegative measurable functions.
Sylvie Boldo
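In usual mathematical notation, the formalized Tonelli statement reads: for a nonnegative measurable $f$ on a product space,

$$\int_{X \times Y} f \,\mathrm{d}(\mu \otimes \nu) \;=\; \int_X \!\Big(\int_Y f(x,y)\,\mathrm{d}\nu(y)\Big)\mathrm{d}\mu(x) \;=\; \int_Y \!\Big(\int_X f(x,y)\,\mathrm{d}\mu(x)\Big)\mathrm{d}\nu(y).$$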
The variational finite element solution of Cauchy's problem, expressed in the Steklov-Poincaré framework and regularized by the Lavrentiev method, has been introduced and computationally assessed in [Inverse Problems in Science and Engineering, 18, 1063-1086 (2011)]. The present work concentrates on the numerical analysis of the semi-discrete problem. We perform the mathematical study of the error to rigorously establish the convergence of the global bias-variance error.
Faker Ben Belgacem
We focus on the ill-posed data completion problem and its finite element approximation, when recast via the Kohn-Vogelius variational duplication artifice and the Steklov-Poincaré condensation operators. We try to understand the useful hidden features of both the exact and discrete problems. When discretized with finite elements of degree one, the discrete and exact problems behave in diametrically opposite ways. Indeed, existence of the discrete solution is always guaranteed while its uniqueness may be lost. In contrast, the solution of the exact problem may not exist, but it is unique. We show how the existence of so-called "weak spurious modes" of the exact variational formulation is a source of instability and the reason why existence may fail. For the discrete problem, we find that the cause of non-uniqueness is actually the occurrence of "spurious modes". We track their fading effect asymptotically as the mesh size tends to zero. In order to restore uniqueness, we recall the discrete version of the Holmgren principle, introduced in [Azaïez et al., IPSE, 18, 2011], and we discuss the effect of the finite element mesh on uniqueness, using some basic material from graph theory.
F Ben Belgacem
[...]
Mustapha Mohammedi
Integration, just as much as differentiation, is a fundamental calculus tool that is widely used in many scientific domains. Formalizing the mathematical concept of integration and the associated results in a formal proof assistant helps in providing the highest confidence on the correctness of numerical programs involving the use of integration, directly or indirectly. By its capability to extend the (Riemann) integral to a wide class of irregular functions, and to functions defined on more general spaces than the real line, the Lebesgue integral is perfectly suited for use in mathematical fields such as probability theory, numerical mathematics, and real analysis. In this article, we present the Coq formalization of $\sigma$-algebras, measures, simple functions, and integration of nonnegative measurable functions, up to the full formal proofs of the Beppo Levi (monotone convergence) theorem and Fatou's lemma. More than a plain formalization of the known literature, we present several design choices made to balance the harmony between mathematical readability and usability of Coq theorems. These results are a first milestone toward the formalization of $L^p$~spaces such as Banach spaces.
Sylvie Boldo
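In usual notation, the Beppo Levi (monotone convergence) theorem proved here states that for measurable $f_n \ge 0$,

$$f_n \uparrow f \ \text{pointwise} \quad\Longrightarrow\quad \int f_n \,\mathrm{d}\mu \;\uparrow\; \int f \,\mathrm{d}\mu.$$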
Recent works in the boundary element method (BEM) community have been devoted to the derivation of fast techniques to perform the matrix-vector products needed by iterative solvers. Fast BEMs are now very mature. However, it has been shown that the number of iterations can significantly hinder the overall efficiency of fast BEMs. The derivation of robust preconditioners is now inevitable to increase the size of the problems that can be considered. Analytical preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of discretization. The main contribution of this paper is to propose new analytical preconditioners to treat Neumann exterior scattering problems in 2D and 3D elasticity. These preconditioners are local approximations of the adjoint Neumann-to-Dirichlet map. We propose three approximations of different orders. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). An analytical spectral study confirms the expected behavior of the preconditioners, i.e., a better eigenvalue clustering, especially in the elliptic part, in contrast to the standard first-kind CFIE. We provide various 2D numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be independent of the density of discretization points per wavelength, which is not the case for the standard CFIE. In addition, it is less sensitive to the frequency.
Stéphanie Chaillat
For a system, a priori identifiability is a theoretical property that depends only on the model and guarantees that its parameters can be uniquely determined from observations. This paper surveys the various and numerous definitions of a priori identifiability given in the literature, for both deterministic continuous-time and discrete-time models. A classification is proposed, distinguishing analytical from algebraic definitions as well as local from global ones. Moreover, this paper provides an overview of the distinct methods to test parameter identifiability, classified into the so-called output equality approaches, local state isomorphism approaches, and differential algebra approaches. A few examples are detailed to illustrate the methods and complete this survey.
Floriane Anstett-Collin
We extend the general stochastic matching model on graphs introduced in [13], to matching models on multigraphs, that is, graphs with self-loops. The evolution of the model can be described by a discrete time Markov chain whose positive recurrence is investigated. Necessary and sufficient stability conditions are provided, together with the explicit form of the stationary probability in the case where the matching policy is 'First Come, First Matched'.
Jocelyn Begeot
In the context of the decommissioning of the Fukushima Daiichi reactors, several projects have been funded by the Japanese government to prepare the corium retrieval operations. Within this framework, a joint study conducted by ONET Technologies and the laboratories of CEA and IRSN demonstrated the feasibility of using the laser cutting technique and estimated the aerosol source term thus generated. Two corium simulants, synthesized and characterized by CEA-Cadarache, underwent laser cutting tests in air and underwater in the DELIA facility of CEA Saclay, and the emitted aerosols were characterized by IRSN. The characterization of the emitted particles in terms of concentration and size distribution provided information for predicting, in particular, particle transport and deposition, but knowledge of the chemical composition by size class is necessary for a better management of occupational and environmental risks. This article presents the results concerning the characterization of the chemical composition of the aerosol from a corium simulant, under laser cutting conditions in air, and the associated size distribution.
Emmanuel Porcheron
We consider in this paper a model parabolic variational inequality. This problem is discretized with conforming Lagrange finite elements of order $p \geq 1$ in space and with the backward Euler scheme in time. The nonlinearity coming from the complementarity constraints is treated with any semismooth Newton algorithm and we take into account in our analysis an arbitrary iterative algebraic solver. In the case $p = 1$, when the system of nonlinear algebraic equations is solved exactly, we derive an a posteriori error estimate on both the energy error norm and a norm approximating the time derivative error. When $p \geq 1$, we provide a fully computable and guaranteed a posteriori estimate in the energy error norm which is valid at each step of the linearization and algebraic solvers. Our estimate, based on equilibrated flux reconstructions, also distinguishes the discretization, linearization, and algebraic error components. We build an adaptive inexact semismooth Newton algorithm based on stopping the iterations of both solvers when the estimators of the corresponding error components do not affect significantly the overall estimate. Numerical experiments are performed with the semismooth Newton-min algorithm and the semismooth Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative algebraic solver to illustrate the strengths of our approach.
Jad Dabaghi
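Both semismooth variants mentioned above rely on reformulating the complementarity conditions through a C-function; the two standard choices are

$$\phi_{\min}(a,b) = \min(a,b), \qquad \phi_{\mathrm{FB}}(a,b) = \sqrt{a^2 + b^2} - a - b, \qquad \phi(a,b) = 0 \;\iff\; a \ge 0,\ b \ge 0,\ ab = 0,$$

so the constraints become nonsmooth equations amenable to semismooth Newton iterations.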
As part of a multi-year program, test-pit survey campaigns were carried out on both sides of the Petit-Saint-Bernard pass (2188 m, western Alps), between 750 and 3000 m in altitude. The working method forgoes surface surveys in favor of numerous hand-dug test pits, located in selected topographic contexts and dug down to the base of the Holocene fills. The results obtained document, over the long term, the evolution of soil-sediment dynamics and the human use of the different altitudinal belts. The significance of the archaeological data collected is discussed with respect to the state of knowledge in a comparison zone comprising the neighboring valleys of the western Alps, to existing settlement models, and to the taphonomic indications provided by the soil-sediment study. A program of complementary analyses intended to clarify the context, taphonomy and functional status [...]
Pierre-Jérôme Rey
In this work, we develop an a-posteriori-steered algorithm for compositional two-phase flow with exchange of components between the phases in porous media. As a model problem, we choose two-phase liquid-gas flow with appearance and disappearance of the gas phase, formulated as a system of nonlinear evolutive partial differential equations with nonlinear complementarity constraints. The discretization of our model is based on the backward Euler scheme in time and the finite volume scheme in space. The resulting nonlinear system is solved via an inexact semismooth Newton method. The key ingredients for the a posteriori analysis are the discretization, linearization, and algebraic flux reconstructions, which allow us to devise estimators for each error component. These make it possible to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver whenever the corresponding error components no longer affect the overall error significantly. Numerical experiments are performed using the Newton-min algorithm as well as the Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative linear solver to show the efficiency of the proposed adaptive method.
Ibtihel Ben Gharbia
We introduce the binacox, a prognostic method to deal with the problem of detecting multiple cut-points per feature in a multivariate setting where a large number of continuous features are available. The method is based on the Cox model and combines one-hot encoding with the binarsity penalty, which uses total-variation regularization together with an extra linear constraint, and enables feature selection. Original nonasymptotic oracle inequalities for prediction (in terms of Kullback-Leibler divergence) and estimation with a fast rate of convergence are established. The statistical performance of the method is examined in an extensive Monte Carlo simulation study, and then illustrated on three publicly available genetic cancer datasets. On these high-dimensional datasets, our proposed method significantly outperforms state-of-the-art survival models regarding risk prediction in terms of the C-index, with a computing time orders of magnitude faster. In addition, it provides powerful interpretability from a clinical perspective by automatically pinpointing significant cut-points in relevant variables.
Simon Bussy
The γ-irradiation of a biphasic system composed of tri-n-butylphosphate in hydrogenated tetrapropylene (TPH) in contact with palladium(II) nitrate in nitric acid aqueous solution led to the formation of two precipitates. A thorough characterization of these solids was performed by means of various analytical techniques including X-Ray Diffraction (XRD), Thermal Gravimetric Analysis coupled with a Differential Scanning Calorimeter (TGA-DSC), X-ray Photoelectron Spectroscopy (XPS), InfraRed (IR), Raman and Nuclear Magnetic Resonance (NMR) Spectroscopy, and ElectroSpray Ionization Mass Spectrometry (ESI-MS). Investigations showed that the two precipitates exhibit quite similar structures. They are composed of at least two compounds: palladium cyanide and palladium species containing ammonium, phosphorus or carbonyl groups. Several mechanisms are proposed to explain the formation of Pd(CN)2.
Bénédicte Simon
For each pair $\varepsilon = (\varepsilon_1, \varepsilon_2)$ of positive parameters, we define a perforated domain $\Omega_\varepsilon$ by making a small hole of size $\varepsilon_1 \varepsilon_2$ in an open regular subset $\Omega$ of $\mathbb{R}^n$ ($n \geq 3$). The hole is situated at distance $\varepsilon_1$ from the outer boundary $\partial\Omega$ of the domain. Then, when $\varepsilon \to (0,0)$, both the size of the hole and its distance from $\partial\Omega$ tend to zero, but the size shrinks faster than the distance. In such a perforated domain $\Omega_\varepsilon$ we consider a Dirichlet problem for the Laplace equation and denote by $u_\varepsilon$ its solution. Our aim is to represent the map that takes $\varepsilon$ to $u_\varepsilon$ in terms of real analytic functions of $\varepsilon$ defined in a neighborhood of $(0,0)$. In contrast with previous results valid only for restrictions of $u_\varepsilon$ to suitable subsets of $\Omega_\varepsilon$, we prove a global representation formula that holds on the whole of $\Omega_\varepsilon$. Such a formula allows us to rigorously justify multi-scale expansions, which we subsequently construct.
Virginie Bonnaillie-Noël
Electron probe microanalysis (EPMA) makes it possible to quantify, with great accuracy, the elemental concentrations of samples of unknown composition. It can, for example, quantify the actinides present in fresh or irradiated nuclear fuels, support the management of nuclear waste, or date certain rocks. Unfortunately, these quantitative analyses are not always feasible, owing to the unavailability of reference standards for certain actinides. To overcome this difficulty, a so-called "standardless" analysis method can be employed, using virtual standards. The latter are obtained from empirical formulas or from calculations based on theoretical models. However, these calculations require the knowledge of physical parameters that are generally poorly known, as is the case for X-ray production cross sections. Accurate knowledge of these cross sections is required in many applications, such as particle transport codes and Monte Carlo simulations. These computational codes are widely used in medicine, particularly in medical imaging and in electron-beam treatments. In astronomy, these data are used in simulations to predict the composition of stars and galactic clouds as well as the formation of planetary systems.

In this work, the L- and M-line production cross sections of lead, thorium and uranium were measured by electron impact on self-supporting thin targets with thicknesses ranging from 0.2 to 8 nm. The experimental results were compared with the theoretical predictions of ionization cross sections calculated within the distorted-wave Born approximation (DWBA) and with the predictions of analytical formulas used in practical applications. The ionization cross sections were converted into X-ray production cross sections using atomic relaxation parameters taken from the literature. The theoretical results of the DWBA model are in excellent agreement with the experimental results. This confirms the predictions of this model and validates its use for the calculation of virtual standards.

The predictions of this model were integrated into the Monte Carlo code PENELOPE in order to calculate the X-ray intensity produced by pure actinide standards. The calculations were carried out for the elements with atomic number 89 ≤ Z ≤ 99 and for accelerating voltages ranging from the ionization threshold up to 40 kV, in steps of 0.5 kV. For practical use, the intensities calculated for the most intense L and M lines were gathered in a database.

The predictions of the virtual standards thus obtained were compared with measurements performed on samples of known composition (U, UO2, ThO2, ThF4, PuO2…) and with data acquired during previous measurement campaigns. The quantification of actinides using these virtual standards showed good agreement with the expected results. This confirms the reliability of the virtual standards developed and demonstrates that the quantification of actinides by electron probe microanalysis is feasible without actinide standards and with a good level of confidence.
Aurélien Moy
One of the important challenges for the decommissioning of the damaged reactors of the Fukushima Daiichi Nuclear Power Plant is the safe retrieval of the fuel debris, or corium. It is especially important to investigate the cutting conditions in air and underwater at different water levels. Among cutting techniques, the laser technique is well adapted to cutting a material such as corium, which has an irregular shape and heterogeneous composition. A French consortium (ONET Technologies, CEA and IRSN) is being subsidized by the Japanese government to implement R&D related to the laser cutting of Fukushima Daiichi fuel debris and to dust collection technology. Debris simulants have been manufactured in the PLINIUS platform to represent Molten Core Concrete Interaction as estimated from Fukushima Daiichi calculations. In these simulants, uranium is replaced by hafnium and the major fission products have been replaced by their natural isotopes. During laser cutting experiments in the DELIA facility, aerosols have been collected using filters and impactors, and then analyzed. Both chemical analyses (dissolution + ICP MS and ICP AES) and microscopic analyses (SEM EDS) will be presented and discussed. These data provide insights on the expected dust releases during cutting and can be converted to provide radioactivity estimates. They have also been successfully compared to thermodynamic calculations with the NUCLEA database.
Christophe Journeau
The purpose is a finite element approximation of the heat diffusion problem in composite media, with non-linear contact resistance at the interfaces. As already explained in [Journal of Scientific Computing, 63, 478-501 (2015)], hybrid dual formulations are well suited to complicated composite geometries and provide tractable approaches to express the temperature jumps variationally. The finite element spaces are standard. Interface contributions are added to the variational problem to account for the contact resistance, which is an important advantage for developers of computing codes. We undertake the analysis of the non-linear heat problem for a large range of contact resistances and investigate its discretization by hybrid dual finite element methods. Numerical experiments are presented at the end to support the theoretical results.
F Ben Belgacem
We introduce a new algorithm of proper generalized decomposition (PGD) for parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation---in the mean parametric norm associated to the elliptic operator---of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the proper orthogonal decomposition (POD) subspaces, except that in our case the norm is parameter-dependent. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step, and we prove that the partial sums converge to the continuous solution in the mean parametric elliptic norm. We show that the standard PGD for the considered parametric problem is strongly related to the deflation algorithm introduced in this paper. This opens the possibility of computing the PGD expansion by directly solving the optimization problems that yield the optimal subspaces.
M. Azaïez
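Schematically, the deflation described above builds the approximation as a telescoping sum: starting from $u_0 = 0$, each step adds the best correction on an optimal low-dimensional subspace (our notation, summarizing the abstract):

$$u_m \;=\; u_{m-1} + w_m, \qquad w_m \;\in\; \arg\min_{w} \ \mathbb{E}_\mu\Big[ \big\| u(\mu) - u_{m-1}(\mu) - w(\mu) \big\|_{A(\mu)}^2 \Big],$$

where $\|\cdot\|_{A(\mu)}$ is the parameter-dependent energy norm and the minimum runs over functions valued in a subspace of the prescribed dimension.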
We consider a degenerate parabolic system modelling the flow of fresh and salt water in an anisotropic porous medium in the context of seawater intrusion. We propose and analyze a nonlinear Control Volume Finite Element scheme. This scheme ensures the nonnegativity of the discrete solution without any restriction on the mesh or on the anisotropy tensor. Moreover, it provides a control on the entropy. Based on these nonlinear stability results, we show that the scheme converges towards a weak solution to the problem. Numerical results are provided to illustrate the behavior of the model and of the scheme.
Ahmed Ait Hammou Oulhaj
Summary of the paper "A Coq formal proof of the Lax-Milgram Theorem", CPP 2017.
Sylvie Boldo
We introduce in this paper a technique for the reduced-order approximation of parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation, in the mean across the parameter set, of the error between the exact solution and the Galerkin solution calculated on the subspace, in the quadratic norm associated to the elliptic operator. This is analogous to the best approximation property of the Proper Orthogonal Decomposition (POD) subspaces, except that in our case the norm is parameter-dependent, so the optimal subspaces cannot be characterized by means of a spectral problem. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step. We prove that the partial sums converge to the continuous solutions in the mean quadratic elliptic norm.
Mejdi Azaiez
The fast multipole method is an efficient technique to accelerate the solution of large-scale 3D scattering problems with boundary integral equations. However, the fast multipole accelerated boundary element method (FM-BEM) is intrinsically based on an iterative solver, and it has been shown that the number of iterations can significantly hinder the overall efficiency of the FM-BEM. The derivation of robust preconditioners for the FM-BEM is now inevitable to increase the size of the problems that can be considered. The main constraint in the context of the FM-BEM is that the complete system is not assembled, in order to reduce computational times and memory requirements. Analytic preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of discretization. The main contribution of this paper is to combine an approximate adjoint Dirichlet-to-Neumann (DtN) map as an analytic preconditioner with a FM-BEM solver to treat Dirichlet exterior scattering problems in 3D elasticity. The approximations of the adjoint DtN map are derived using tools proposed in [40]. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). We provide various numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be completely independent of the number of degrees of freedom and of the frequency for convex obstacles.
Stéphanie Chaillat
The objective of this work is to account for the influence of surface defects on the behavior of structures up to failure, without a fine description of the geometry of the perturbations. The proposed approach relies mainly on two tools: a refined asymptotic analysis of the Navier equations and the use of strong-discontinuity models. A strategy for coupling the two approaches, allowing the analysis of the structural behavior up to failure, is also presented.
Delphine Brancherie
Faults and geological barriers can drastically affect the flow patterns in porous media. Such fractures can be modeled as interfaces that interact with the surrounding matrix. We propose a new technique for the estimation of the location and hydrogeological properties of a small number of large fractures in a porous medium from given distributed pressure or flow data. At each iteration, the algorithm builds a short list of candidates by comparing fracture indicators. These indicators quantify at the first order the decrease of a data misfit function; they are cheap to compute. Then, the best candidate is picked up by minimization of the objective function for each candidate. Optimally driven by the fit to the data, the approach has the great advantage of not requiring remeshing, nor shape derivation. The stability of the algorithm is shown on a series of numerical examples representative of typical situations.
Hend Ben Ameur
The aim of this contribution is the use of a preconditioned Richardson iterative method as a regularization of the data completion problem. The problem is known to be exponentially ill-posed, which makes its numerical treatment a hard task. The approach we present relies on the Steklov-Poincaré variational framework introduced in [Inverse Problems, vol. 21, 2005]. The resulting algorithm turns out to be equivalent to the Kozlov-Maz'ya-Fomin method in [Comp. Math. Phys., vol. 31, 1991]. We conduct a comprehensive analysis of suitable stopping rules and provide some optimal estimates under the General Source Condition on the exact solution. Some numerical examples are finally discussed to highlight the performance of the method.
Duc Thang Du
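For orientation, a generic preconditioned Richardson iteration reads (schematic notation, with the iteration count playing the role of the regularization parameter):

$$u_{k+1} \;=\; u_k + \tau\, P^{-1}\big(f - A u_k\big),$$

and the analysis above concerns when to stop this iteration so that the total error remains near-optimal under the General Source Condition.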
We derive rates of contraction of posterior distributions on nonparametric models resulting from sieve priors. The aim of the study is to provide general conditions to obtain posterior rates when the parameter space has a general structure, and rate adaptation when the parameter is, for example, in a Sobolev class. The conditions employed, although standard in the literature, are combined in a different way. The results are applied to density, regression, nonlinear autoregression and Gaussian white noise models. In the latter, we have also considered a loss function which is different from the usual $\ell_2$ norm, namely the pointwise loss. In this case it is possible to prove that the adaptive Bayesian approach for the $\ell_2$ loss is strongly suboptimal, and we provide a lower bound on the rate.
Julyan Arbel
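Recall the generic form of a sieve prior (standard notation): a mixture over finite-dimensional approximating sets,

$$\Pi \;=\; \sum_{k \ge 1} \pi_k\, \Pi_k,$$

where $\Pi_k$ charges a $k$-dimensional sieve and $(\pi_k)$ are prior weights on the dimension.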
It has been proven that the knowledge of an accurate approximation of the Dirichlet-to-Neumann (DtN) map is useful for a large range of applications in wave scattering problems. We are concerned in this paper with the construction of an approximate local DtN operator for time-harmonic elastic waves. The main contributions are the following. First, we derive exact operators using Fourier analysis in the case of an elastic half-space. These results are then extended to a general three-dimensional smooth closed surface by using a local tangent plane approximation. Next, a regularization step improves the accuracy of the approximate DtN operators and a localization process is proposed. Finally, a first application is presented in the context of the On-Surface Radiation Conditions method. The efficiency of the approach is investigated for various obstacle geometries at high frequencies.
Stéphanie Chaillat
In recent years, many MAC protocols for wireless sensor networks have been proposed, and most of them are evaluated using the Matlab simulator and/or network simulators (OMNeT++, NS2, etc.). However, most of them have a static behavior, and few network simulations are available for adaptive protocols. In particular, in OMNeT++/MiXiM, there are few energy-efficient MAC protocols for WSNs (B-MAC and L-MAC) and no adaptive ones. To this end, the TAD-MAC (Traffic Aware Dynamic MAC) protocol has been simulated in OMNeT++ with the MiXiM framework, and implementation details are given in this paper. The simulation results have been used to evaluate the performance of TAD-MAC through comparisons with the B-MAC and L-MAC protocols.
Van-Thiep Nguyen
We consider the question of giving an upper bound for the first nontrivial eigenvalue of the Wentzell-Laplace operator of a domain $\Omega$, involving only geometric information. We provide such an upper bound by generalizing Brock's inequality concerning Steklov eigenvalues, and we conjecture that balls maximize the Wentzell eigenvalue in a suitable class of domains, which would improve our bound. To support this conjecture, we prove that balls are critical domains for the Wentzell eigenvalue in any dimension, and that they are local maximizers in dimensions 2 and 3, using an order-two sensitivity analysis. We also provide some numerical evidence.
Marc Dambrine
The Karhunen-Loève decomposition (KLD), or proper orthogonal decomposition (POD), of bivariate functions is revisited in this work. We first investigate the truncation error for regular functions, trying to improve and sharpen bounds found in the literature. It turns out, however, that (KL-)series expansions are more sensitive to how well the fields to be approximated can be represented by a small sum of products of separated-variable functions. We consider this very issue for some fields of interest that are solutions of partial differential equations, such as the transient heat problem and Poisson's equation. The main tool for stating approximation bounds is linear algebra. We show how the singular value decomposition underlying the (KL-)expansion is connected to the spectrum of certain Gram matrices, so that deriving estimates on the truncation error is strongly tied to the spectral properties of these Gram matrices, which are structured matrices with low displacement ranks.
Mejdi Azaïez
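The link between KL truncation and singular values can be checked numerically in a few lines: the Frobenius error of the best rank-$r$ approximation equals the tail of the squared singular values. A small self-contained illustration (the field below is an arbitrary example, not one from the paper):

```python
import numpy as np

# Sample a smooth, non-separable bivariate field on a tensor grid.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 1.0, 200)
F = 1.0 / (1.0 + np.add.outer(x, y))

# Best rank-r (KL/POD) truncation via the SVD.
U, s, Vt = np.linalg.svd(F, full_matrices=False)
r = 5
F_r = (U[:, :r] * s[:r]) @ Vt[:r, :]

# Truncation error equals the tail of the squared singular values (up to round-off).
print(np.linalg.norm(F - F_r, "fro"), np.sqrt(np.sum(s[r:] ** 2)))
```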
The inverse problem under investigation consists of boundary data completion in a deoxygenation-reaeration model in stream waters. The one-dimensional transport model we deal with is based on the one introduced by Streeter and Phelps, augmented by Taylor dispersion terms. The missing boundary condition is the load and/or the flux of the biochemical oxygen demand indicator at the outfall point. The counterpart is the availability of two boundary conditions on the dissolved oxygen tracer at the same point. The major consequence of these non-standard boundary conditions is that the dispersive transport equations for the two oxygen tracers are strongly coupled, and the resulting system becomes ill-posed. The main purpose is a finite element space-discretization of the variational problem put under a non-symmetric mixed form. Combining analytical calculations, numerical computations and theoretical justifications, we try to elucidate the characteristics related to the ill-posedness of this dynamical data completion problem and to understand its mathematical structure.
Faker Ben Belgacem
We consider an equation that models the diffusion of temperature in a graphite foam containing salt capsules. The transition conditions on the temperature between the graphite and the salt must be handled correctly. We carry out the analysis of this model and prove that it is well-posed. We then propose a finite element discretization and perform the a priori analysis of the discrete problem. Some numerical experiments confirm the interest of this approach.
Faker Ben Belgacem
We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most physical models used by engineers derive from various additions and corrections to the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor's dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. The particularity is that both Neumann and Dirichlet boundary conditions are available on the DO tracer, while the BOD density is free of any condition. Indeed, for real-life concerns, measurements of dissolved oxygen are easy to obtain and to store, whereas collecting data on the biochemical oxygen demand is a sensitive task and turns out to be a long process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem plainly worth studying in its own right, but it can also be a mandatory step in other applications, such as the identification of the location of pollution sources. The non-standard boundary conditions generate two difficulties on mathematical and computational grounds: they set up a severe coupling between the two equations, and they cause the ill-posedness of the data reconstruction problem. Existence and stability fail; identifiability is therefore the only positive result one can seek, and it is the central purpose of the paper. We end with some computational experiments to assess the capability of the mixed finite element method in recovering the missing data (on the biochemical oxygen demand).
Mejdi Azaïez
We are interested in an inverse problem of recovering the position of a pollutant or contaminant source in stream water. Advection, dispersive transport and reaction of the solute are commonly modeled by a linear or nonlinear parabolic equation. In former works, it was established that a point-wise source is fully identifiable from measurements recorded by a couple of sensors placed one upstream and the other downstream of the pollution source. The observability question we try to solve here is related to the redundancy of sensors when additional information is available on the point-wise source. It may occur, in hydrological engineering, that the intensity of the pollutant is known in advance. In this case, we pursue an identifiability result for a moving source location using a single observation. The chief mathematical tools to prove identifiability are the unique continuation theorem together with an appropriate maximum principle for the parabolic equation under investigation.
Faker Ben Belgacem
This article concerns maximum-likelihood estimation for discrete time homogeneous nonparametric semi-Markov models with finite state space. In particular, we present the exact maximum-likelihood estimator of the semi-Markov kernel which governs the evolution of the semi-Markov chain (SMC). We study its asymptotic properties in the following cases: (i) for one observed trajectory, when the length of the observation tends to infinity, and (ii) for parallel observations of independent copies of an SMC censored at a fixed time, when the number of copies tends to infinity. In both cases, we obtain strong consistency, asymptotic normality, and asymptotic efficiency for every finite dimensional vector of this estimator. Finally, we obtain explicit forms for the covariance matrices of the asymptotic distributions.
Samis Trevezas
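In the fully nonparametric finite-state case, the MLE of the semi-Markov kernel has a natural empirical form (our notation):

$$\hat{q}_{ij}(k) \;=\; \frac{N_{ij}(k)}{N_i},$$

where $N_{ij}(k)$ counts the observed transitions from state $i$ to state $j$ with sojourn time $k$, and $N_i$ is the number of visits to state $i$.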
This paper addresses a complex multi-physics phenomenon involving cardiac electrophysiology and hemodynamics. The purpose is to model and simulate a phenomenon that has been observed in MRI machines: in the presence of a strong magnetic field, the T-wave of the electrocardiogram (ECG) gets bigger, which may perturb ECG-gated imaging. This is due to a magnetohydrodynamic (MHD) effect occurring in the aorta. We reproduce this experimental observation through computer simulations on a realistic anatomy, and with a three-compartment model: inductionless magnetohydrodynamic equations in the aorta, bidomain equations in the heart, and electrical diffusion in the rest of the body. These compartments are strongly coupled and solved using finite elements. Several benchmark tests are proposed to assess the numerical solutions and the validity of some modeling assumptions. Then, ECGs are simulated for a wide range of magnetic field intensities (from 0 to 20 Tesla).
ano.nymous@ccsd.cnrs.fr.invalid (Vincent Martin), Vincent Martin
Ventcel boundary conditions are second-order differential conditions that appear in asymptotic models. Like Robin boundary conditions, they lead to well-posed variational problems under a sign condition on the coefficient. This condition is satisfied in the physical situations usually considered. Nevertheless, situations where it is violated have appeared in several recent works where absorbing boundary conditions or equivalent boundary conditions on rough surfaces are sought for numerical purposes. The well-posedness of such problems was recently investigated: up to a countable set of parameters, existence and uniqueness of the solution of the Ventcel boundary value problem hold without the sign condition. However, the values to be avoided depend on the domain where the boundary value problem is set. In this work, we address the question of the persistence of the solvability of the boundary value problem under domain deformation.
ano.nymous@ccsd.cnrs.fr.invalid (Marc Dambrine), Marc Dambrine
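For readers unfamiliar with these conditions, a representative model problem reads as follows (a schematic form; sign conventions and coefficient names are ours):

\begin{aligned}
  -\Delta u + u &= f && \text{in } \Omega,\\
  \partial_n u - \beta\,\Delta_\Gamma u + \alpha\,u &= g && \text{on } \Gamma = \partial\Omega,
\end{aligned}

where $\Delta_\Gamma$ is the Laplace-Beltrami operator on the boundary. The associated bilinear form contains the boundary term $\beta \int_\Gamma \nabla_\Gamma u \cdot \nabla_\Gamma v + \alpha \int_\Gamma u v$ and is coercive on the natural energy space when $\beta > 0$ and $\alpha \geq 0$; the works discussed concern the delicate case where this sign condition fails.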
We develop the shape derivative analysis of solutions to the problem of scattering of time-harmonic electromagnetic waves by a bounded penetrable obstacle. Since boundary integral equations are a classical tool to solve electromagnetic scattering problems, we study the shape differentiability properties of the standard electromagnetic boundary integral operators. The latter are typically bounded on the space of tangential vector fields of mixed regularity $TH^{-\frac{1}{2}}(\mathrm{div}_{\Gamma},\Gamma)$. Using Helmholtz decomposition, we can base their analysis on the study of pseudo-differential integral operators in standard Sobolev spaces, but we then have to study the Gâteaux differentiability of surface differential operators. We prove that the electromagnetic boundary integral operators are infinitely differentiable without loss of regularity. We also give a characterization of the first shape derivative of the solution of the dielectric scattering problem as a solution of a new electromagnetic scattering problem.
ano.nymous@ccsd.cnrs.fr.invalid (Martin Costabel), Martin Costabel
In this paper we study the shape differentiability properties of a class of boundary integral operators and of potentials with weakly singular pseudo-homogeneous kernels acting between classical Sobolev spaces, with respect to smooth deformations of the boundary. We prove that the boundary integral operators are infinitely differentiable without loss of regularity. The potential operators are infinitely shape differentiable away from the boundary, whereas their derivatives lose regularity near the boundary. We study the shape differentiability of surface differential operators. The shape differentiability properties of the usual strongly singular or hypersingular boundary integral operators of interest in acoustic, elastodynamic or electromagnetic potential theory can then be established by expressing them in terms of integral operators with weakly singular kernels and of surface differential operators.
ano.nymous@ccsd.cnrs.fr.invalid (Martin Costabel), Martin Costabel
The interface problem describing the scattering of time-harmonic electromagnetic waves by a dielectric body is often formulated as a pair of coupled boundary integral equations for the electric and magnetic current densities on the interface Γ. In this paper, following an idea developed by Kleinman and Martin for acoustic scattering problems, we consider methods for solving the dielectric scattering problem using a single integral equation over Γ for a single unknown density. It is known that such boundary integral formulations of the Maxwell equations are not uniquely solvable when the exterior wave number is an eigenvalue of an associated interior Maxwell boundary value problem. We obtain four different families of integral equations for which we can show that, by choosing some parameters in an appropriate way, they become uniquely solvable for all real frequencies. We analyze the well-posedness of the integral equations in the space of finite energy on smooth and non-smooth boundaries.
ano.nymous@ccsd.cnrs.fr.invalid (Martin Costabel), Martin Costabel
We consider a model for fluid flow in a porous medium with a fracture. In this model, the fracture is represented as an interface between subdomains, where specific equations have to be solved. In this article we analyse the discrete problem, assuming that the fracture mesh and the subdomain meshes are completely independent, but that the geometry of the fracture is respected. We show that despite this non-conformity, first order convergence is preserved with the lowest order Raviart-Thomas(-Nedelec) mixed finite elements. Numerical simulations confirm this result.
ano.nymous@ccsd.cnrs.fr.invalid (Najla Frih), Najla Frih
We develop the shape derivative analysis of solutions to the problem of scattering of time-harmonic electromagnetic waves by a bounded penetrable obstacle. Since boundary integral equations are a classical tool to solve electromagnetic scattering problems, we study the shape differentiability properties of the standard electromagnetic boundary integral operators. Using Helmholtz decomposition, we can base their analysis on the study of scalar integral operators in standard Sobolev spaces, but we then have to study the Gâteaux differentiability of surface differential operators. We prove that the electromagnetic boundary integral operators are infinitely differentiable without loss of regularity and that the solutions of the scattering problem are infinitely shape differentiable away from the boundary of the obstacle, whereas their derivatives lose regularity on the boundary. We also give a characterization of the first shape derivative as a solution of a new electromagnetic scattering problem.
ano.nymous@ccsd.cnrs.fr.invalid (Martin Costabel), Martin Costabel
We are interested in the optimal control problem of the heat equation where the quadratic cost functional involves a final observation and the control variable is a Dirichlet boundary condition. We first prove that this problem is well-posed. Next, we check its equivalence with a fixed point problem for a space-time mixed system of parabolic equations. Finally, we introduce a Robin penalization on the Dirichlet boundary control for the mixed problem and analyze the convergence when the penalty parameter tends to zero.
ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem
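Concretely, the Robin penalization referred to replaces the Dirichlet control $u = v$ on the control boundary $\Sigma$ by a penalized flux condition of the standard form (notation ours, with penalty parameter $\varepsilon > 0$):

\partial_n u_\varepsilon + \frac{1}{\varepsilon}\, u_\varepsilon = \frac{1}{\varepsilon}\, v \quad \text{on } \Sigma,

so that, formally, $u_\varepsilon \to v$ on $\Sigma$ as $\varepsilon \to 0$ and the Dirichlet condition is recovered in the limit; the convergence analyzed in the paper is precisely of this type.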
We consider the flow of a viscous incompressible fluid in a rigid homogeneous porous medium provided with boundary conditions on the pressure around a circular well. When the boundary pressure presents high variations, the permeability of the medium depends on the pressure, so that the model is nonlinear. We propose a spectral discretization of the resulting system of equations which takes into account the axisymmetry of the domain and of the flow. We prove optimal error estimates and present some numerical experiments which confirm the interest of the discretization.
ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez
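In generic form, the nonlinear system in question is a Darcy law with pressure-dependent permeability (a schematic form; the precise dependence $K(p)$ used in the paper is not restated here):

\mathbf{u} = -K(p)\,\nabla p, \qquad \operatorname{div}\,\mathbf{u} = 0 \quad \text{in } \Omega,

supplemented with the pressure boundary conditions imposed around the well.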
Enhancing the safety of high-temperature reactors (HTRs) is based on the quality of the fuel particles, requiring good knowledge of the microstructure of the four-layer particles designed to retain the fission products during irradiation and under accidental conditions. This paper focuses on the intensive research work performed to characterize the micro- and nanostructure of each unirradiated layer (silicon carbide and pyrocarbon coatings). The analytic expertise developed in the 1970s has been recovered and innovative advanced characterization methods have been developed to improve the process parameters and to ensure the production reproducibility of coatings.
ano.nymous@ccsd.cnrs.fr.invalid (D. Helary), D. Helary
Electron back-scattering diffraction (EBSD) can be successfully performed on SiC coatings for HTR fuel particles. EBSD grain maps obtained from thick and thin unirradiated samples are presented, along with pole figures showing textures and a chart showing the distribution of grain aspect ratios. This information is of great interest and contributes to improving the process parameters and ensuring the reproducibility of coatings.
ano.nymous@ccsd.cnrs.fr.invalid (D. Helary), D. Helary
We propose a model for a medical device, called a stent, designed for the treatment of cerebral aneurysms. The stent consists of a grid, immersed in the blood flow and located at the inlet of the aneurysm. It aims at promoting clotting within the aneurysm. The blood flow is modelled by the incompressible Navier-Stokes equations and the stent by a dissipative surface term. We propose a stabilized finite element method for this model and analyse its convergence in the case of the Stokes equations. We present numerical results for academic test cases, and on a realistic aneurysm geometry obtained from medical imaging.
ano.nymous@ccsd.cnrs.fr.invalid (Miguel Angel Fernández), Miguel Angel Fernández
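One common way to realize such a dissipative surface term is to add a penalization of the fluid velocity on the stent surface $\Sigma$ to the variational formulation (a schematic form, with a resistance parameter $\sigma \geq 0$ modeling the grid porosity; notation ours, not necessarily that of the paper):

a(\mathbf{u}, \mathbf{v}) \;\longleftarrow\; a(\mathbf{u}, \mathbf{v}) + \int_\Sigma \sigma\, \mathbf{u} \cdot \mathbf{v} \, \mathrm{d}s,

which dissipates kinetic energy across the grid and, for large $\sigma$, approaches a no-flow condition on $\Sigma$.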
In this work, we consider singular perturbations of the boundary of a smooth domain. We describe the asymptotic behavior of the solution uε of a second order elliptic equation posed in the perturbed domain with respect to the size parameter ε of the deformation. We are also interested in the variations of the energy functional. We propose a numerical method for the approximation of uε based on a multiscale superposition of the unperturbed solution u0 and a profile defined in a model domain. We conclude with numerical results.
ano.nymous@ccsd.cnrs.fr.invalid (Marc Dambrine), Marc Dambrine
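In symbols, the multiscale superposition referred to takes the schematic form (the precise scaling and remainder estimates depend on the nature of the perturbation):

u_\varepsilon(x) \;\approx\; u_0(x) + V\!\left(\frac{x - x_0}{\varepsilon}\right),

where $x_0$ is the center of the boundary perturbation and the profile $V$ is computed once in an unbounded model domain, so that only the unperturbed solution $u_0$ and $V$ need to be approximated numerically.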
Topological optimization of networks is a complex multi-constraint and multi-criterion optimization problem in many real-world fields (telecommunications, electricity distribution, etc.). This paper describes a heuristic algorithm using Binary Decision Diagrams (BDDs) to solve the reliable communication network design problem (RCND) \cite{ga1}. The aim is to design a communication network topology with minimal cost that satisfies a given reliability constraint.
ano.nymous@ccsd.cnrs.fr.invalid (Gary Hardy), Gary Hardy
In this paper, we present a network decomposition method using Binary Decision Diagrams (BDDs), a state-of-the-art data structure for encoding and manipulating Boolean functions, for computing the reliability of networks such as computer, communication or power networks. We consider the so-called $K$-terminal reliability measure $R_K$, defined as the probability that a subset $K$ of nodes can communicate with each other, taking into account the possible failures of the network components (nodes and links). We present an exact algorithm for computing the $K$-terminal reliability of a graph $G=(V,E)$ in $O(|E| \cdot F_{max} \cdot 2^{F_{max}} \cdot B_{F_{max}})$, where $B_{F_{max}}$ is the Bell number of the maximum boundary set $F_{max}$. Other reliability measures are also discussed. Several examples and experiments show the effectiveness of this approach. (This research was supported by the Conseil Regional de Picardie.)
ano.nymous@ccsd.cnrs.fr.invalid (Gary Hardy), Gary Hardy
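For comparison with the BDD-based decomposition, $R_K$ can be written as a sum over all edge-failure states; below is a brute-force Python sketch of that definition (exponential in $|E|$, usable only on tiny graphs, and intended as a reference baseline rather than the algorithm of the paper).

from itertools import product

def k_terminal_reliability(nodes, edges, p, K):
    """R_K = sum over edge states of P(state) * [terminals K connected].
    `edges` is a list of (u, v) pairs, `p` maps each edge to its working
    probability; nodes are assumed perfectly reliable in this sketch."""
    def terminals_connected(up_edges):
        parent = {n: n for n in nodes}        # union-find over survivors
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in up_edges:
            parent[find(u)] = find(v)
        return len({find(t) for t in K}) == 1
    R = 0.0
    for state in product([0, 1], repeat=len(edges)):
        prob = 1.0
        up = []
        for e, s in zip(edges, state):
            prob *= p[e] if s else 1.0 - p[e]
            if s:
                up.append(e)
        if terminals_connected(up):
            R += prob
    return R

# Triangle with terminals {0, 2}; every link works with probability 0.9.
nodes, edges = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
print(k_terminal_reliability(nodes, edges, {e: 0.9 for e in edges}, {0, 2}))
# 0.981 = 1 - (1 - 0.9) * (1 - 0.9 * 0.9)

The point of the BDD decomposition is precisely to avoid this $2^{|E|}$ enumeration, by grouping partial states according to the partition they induce on a boundary set $F$, whence the Bell number in the complexity bound above.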