Research

Publications on HAL

[hal-04921627] Convergence of a semi-explicit scheme for a one dimensional periodic nonlocal eikonal equation modeling dislocation dynamics

In this paper, we derive a periodic model from a one-dimensional nonlocal eikonal equation, set on the full space, that models dislocation dynamics. Thanks to a gradient entropy estimate, we show that this periodic model converges toward the original one as the period goes to infinity. Moreover, we design a semi-explicit numerical scheme for the periodic model, show its well-posedness and a discrete gradient entropy inequality, and prove its convergence. We conclude with some numerical experiments.

Diana Al Zareef

[hal-04152611] On the variable bandwidth kernel estimation of conditional U-statistics at optimal rates in sup-norm

[...]

Salim Bouzebda

[hal-05149788] Steady Conduction Problems with Non-Linear Kapitza Contact Resistance: Existence and Bifurcation

We focus on the mathematical analysis of the steady-state heat conduction problem in a two-layered domain, with a non-linear Kapitza contact resistance at the interface, which slows down heat transfer. A maximum principle is first established, followed by a proof of existence by applying the Schauder fixed point theorem to the variational formulation. Non-uniqueness is then illustrated through a simple one-dimensional example. Solutions are computed as fixed points of certain algebraic mappings, using Picard's iterative method. A co-dimension-one bifurcation analysis of these maps is presented, with the ratio of the conductivities of the two layers used as the control parameter. Two classes of (power-law) Kapitza conductance give rise to different types of bifurcations, including transcritical, supercritical, flip, and saddle-node bifurcations.

E Bejaoui

[hal-05136705] Singularities and Regular Correction for Elliptic Problems with Non-Constant Coefficients and Dirac Sources on the Boundary

We focus on the singularity of the potentials generated by Dirac sources located on the boundary. The diffusivity parameters of the medium are non-constant. We present and prove a singular/regular expansion of these potentials, following a prediction-correction approach. The singularity is made explicit using the fundamental Green's kernel of the Laplace operator. The regular correction problem can be efficiently solved using classical finite element methods. A numerical discussion highlights the relevance of this approach in achieving significant accuracy.

Ameni Béjaoui

[hal-05128905] A Rocq Formalization of Simplicial Lagrange Finite Elements

The finite element method is a popular method to numerically solve partial differential equations. In the long-term goal of proving its correctness, we focus here on the formal definition of what a finite element is: a record in the Rocq proof assistant with both values and proofs of validity, including the main one, called unisolvence. We then instantiate this record with the most popular and useful instances, the simplicial Lagrange finite elements for evenly distributed nodes, in any dimension and for any polynomial degree. These proofs require many results (definitions, lemmas, canonical structures) about finite families, affine spaces, and multidimensional polynomials, in the context of finite- or infinite-dimensional spaces.

Sylvie Boldo

[hal-04702353] Full Whittle inference for weak FARIMA models

This paper investigates statistical inference for weak FARIMA models in the frequency domain. We estimate the asymptotic covariance matrix of the classical Whittle estimator to achieve full inference, thereby addressing an open question posed by Shao, X. (2010). However, computing this matrix numerically is costly. To mitigate this issue, we propose an alternative approach that circumvents trispectrum estimation at the cost of a slower convergence rate. Additionally, we introduce a fast alternative to the Whittle estimator based on a one-step procedure. This method refines an initial Whittle estimator computed on a subsample using a single Fisher scoring step. The resulting estimator retains the same asymptotic properties as the Whittle estimator computed on the full sample while significantly reducing computational time.

Samir Ben-Hariz

[hal-03560951] Binacox: automatic cut-point detection in high-dimensional Cox model with applications in genetics

We introduce binacox, a prognostic method to deal with the problem of detecting multiple cut-points per feature in a multivariate setting where a large number of continuous features are available. The method is based on the Cox model and combines one-hot encoding with the binarsity penalty, which uses total-variation regularization together with an extra linear constraint, and enables feature selection. Original nonasymptotic oracle inequalities for prediction (in terms of Kullback-Leibler divergence) and estimation with a fast rate of convergence are established. The statistical performance of the method is examined in an extensive Monte Carlo simulation study, and then illustrated on three publicly available genetic cancer data sets. On these high-dimensional data sets, our proposed method outperforms state-of-the-art survival models regarding risk prediction in terms of the C-index, with a computing time orders of magnitude faster. In addition, it provides powerful interpretability from a clinical perspective by automatically pinpointing significant cut-points in relevant variables.
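A minimal sketch of the binarization step: each continuous feature is one-hot encoded over quantile intervals, whose interior boundaries play the role of candidate cut-points. The helper name and the bin choice are ours for illustration, not the authors' binacox implementation.

```python
import numpy as np

def one_hot_binarize(x, n_bins=5):
    """One-hot encode a continuous feature over quantile intervals.

    Hypothetical helper illustrating the binarization behind binacox;
    the interior quantiles act as candidate cut-points.
    """
    # Interior quantile boundaries (n_bins - 1 candidate cut-points)
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    # Interval index of each sample, then one-hot encode it
    idx = np.searchsorted(edges, x, side="right")
    out = np.zeros((x.size, n_bins))
    out[np.arange(x.size), idx] = 1.0
    return out

x = np.random.default_rng(0).normal(size=100)
B = one_hot_binarize(x)   # each row has exactly one active interval
```

The binarsity penalty then applies total-variation regularization to the weights of consecutive intervals, so detected cut-points correspond to jumps in the fitted weights.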

Simon Bussy

[hal-04844328] A finite volume scheme for the local sensing chemotaxis model

In this paper we design, analyze and simulate a finite volume scheme for a cross-diffusion system which models chemotaxis with local sensing. This system has the same gradient flow structure as the celebrated minimal Keller-Segel system, but unlike the latter, its solutions are known to exist globally in 2D. The long-time behavior of solutions is only partially understood which motivates numerical exploration with a reliable numerical method. We propose a linearly implicit, two-point flux finite volume approximation of the system. We show that the scheme preserves, at the discrete level, the main features of the continuous system, namely mass, non-negativity of solution, entropy, and duality estimates. These properties allow us to prove the well-posedness, unconditional stability and convergence of the scheme. We also show rigorously that the scheme possesses an asymptotic preserving (AP) property in the quasi-stationary limit. We complement our analysis with thorough numerical experiments investigating convergence and AP properties of the scheme as well as its reliability with respect to stability properties of steady solutions.

Maxime Herda

[hal-04713897] Finite element method. Detailed proofs to be formalized in Coq

To obtain the highest confidence in the correctness of numerical simulation programs for the resolution of Partial Differential Equations (PDEs), one has to formalize the mathematical notions and results that establish the soundness of the approach. The finite element method is one of the most popular tools for the numerical resolution of a wide range of PDEs. The purpose of this document is to provide the formal proof community with very detailed pen-and-paper proofs for the construction of the Lagrange finite elements of any degree on simplices in positive dimension.

François Clément

[hal-04630443] A nonlocal regularization of a generalized Busenberg-Travis cross-diffusion system

A cross-diffusion system with Lotka--Volterra reaction terms in a bounded domain with no-flux boundary conditions is analyzed. The system is a nonlocal regularization of a generalized Busenberg--Travis model, which describes segregating population species with local averaging. The partial velocities are the solutions of an elliptic regularization of Darcy's law, which can be interpreted as a Brinkman's law. The following results are proved: the existence of global weak solutions; localization limit; boundedness and uniqueness of weak solutions (in one space dimension); exponential decay of the solutions. Moreover, the weak--strong uniqueness property for the limiting system is shown.

Ansgar Jüngel

[hal-04033438] Robust augmented energy a posteriori estimates for Lipschitz and strongly monotone elliptic problems

In this paper, we design a posteriori estimates for finite element approximations of nonlinear elliptic problems satisfying strong-monotonicity and Lipschitz-continuity properties. These estimates include, and build on, any iterative linearization method that satisfies a few clearly identified assumptions; this encompasses the Picard, Newton, and Zarantonello linearizations. The estimates give a guaranteed upper bound on an augmented energy difference (reliability with constant one), as well as a lower bound (efficiency up to a generic constant). We prove that for the Zarantonello linearization, this generic constant only depends on the space dimension, the mesh shape regularity, and possibly the approximation polynomial degree in four or more space dimensions, making the estimates robust with respect to the strength of the nonlinearity. For the other linearizations, there is only a computable dependence on the local variation of the linearization operators. We also derive similar estimates for the usual energy difference that depend locally on the nonlinearity and improve the established bound. Numerical experiments illustrate and validate the theoretical results, for both smooth and singular solutions.

André Harnist

[hal-04543367] A Semi-Markov Model with Geometric Renewal Processes

We consider a repairable system modeled by a semi-Markov process (SMP), in which we include a geometric renewal process for system degradation upon repair, together with replacement strategies for non-repairable failures or after N repairs. This system was first studied by Pérez-Ocón and Torres-Castro (Appl Stoch Model Bus Ind 18(2):157–170, 2002), who computed its availability using the Laplace transform. In our work, we consider an extended state space that tracks up and down times separately. This allows us to leverage the standard theory for SMPs to obtain all reliability-related measures, such as reliability, availability (point and steady-state), mean times, and the rate of occurrence of failures of the system, with a general initial law. We proceed with a convolution algebra, which yields final closed-form formulas for the above measures. Finally, numerical examples are given to illustrate the methodology.

Jingqi Zhang

[hal-04458367] Recursive POD expansion for reaction-diffusion equation

This paper focuses on the low-dimensional representation of multivariate functions. We study a recursive POD representation, based upon the use of the power iteration algorithm to recursively expand the modes retained in the previous step. We obtain general error estimates for the truncated expansion, and prove that the recursive POD representation provides a quasi-optimal approximation in the $L^2$ norm. We also prove an exponential rate of convergence when applied to the solution of the reaction-diffusion partial differential equation. Some relevant numerical experiments show that the recursive POD is computationally more accurate than the Proper Generalized Decomposition for multivariate functions. We also recover the theoretical exponential convergence rate for the solution of the reaction-diffusion equation.
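The core building block, extracting one mode at a time by power iteration and deflating before recursing on the remainder, can be sketched generically with numpy. This is our illustration on a snapshot matrix with known singular values, not the paper's algorithm.

```python
import numpy as np

def pod_power_modes(A, n_modes=2, n_iter=200):
    """Leading POD modes of a snapshot matrix by power iteration with
    deflation: a minimal sketch of the recursive idea."""
    A = A.astype(float).copy()
    rng = np.random.default_rng(1)
    modes = []
    for _ in range(n_modes):
        v = rng.normal(size=A.shape[1])
        for _ in range(n_iter):          # power iteration on A^T A
            v = A.T @ (A @ v)
            v /= np.linalg.norm(v)
        u = A @ v
        s = np.linalg.norm(u)            # singular value of the mode
        modes.append((s, u / s, v))
        A -= s * np.outer(u / s, v)      # deflate, then recurse on the rest
    return modes

# Toy matrix with known singular values 5, 3, 2, 1, 0.5
A = np.diag([5.0, 3.0, 2.0, 1.0, 0.5])
modes = pod_power_modes(A)
# modes[0][0] ≈ 5.0 and modes[1][0] ≈ 3.0, the two leading singular values
```

Truncation error estimates for the recursive expansion are built on the decay of these successive values.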

M. Azaïez

[hal-04180133] Non parametric observation driven HMM

[...]

Hanna Bacave

[hal-04171324] Extensions of the empirical interpolation method to vector-valued functions

In industrial Computer-Assisted Engineering, it is common to deal with vector fields or multiple field variables. In this paper, different vector-valued extensions of the Empirical Interpolation Method (EIM) are considered. EIM has been shown to be a valuable tool for dimensionality reduction, reduced-order modeling for nonlinear problems, and/or synthesis of families of solutions for parametric problems. Besides already existing vector-valued extensions, a new vector-valued EIM, the so-called VEIM approach, allowing interpolation on all the vector components, is proposed and analyzed in this paper. This involves vector-valued basis functions, the same magic points shared by all the components, and linear combination matrices rather than scalar coefficients. Coefficient matrices are determined under the constraints of point-wise interpolation for all the components and exact reconstruction of the snapshots selected during the greedy iterative process. For numerical experiments, various vector-valued approaches including VEIM are tested and compared on various one-, two- and three-dimensional problems. All methods exhibit robustness, stability and rather good convergence properties as soon as the Kolmogorov width of the dataset is not too large. Depending on the use case, a suitable and convenient method can be chosen among the different vector-valued EIM candidates.
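For readers unfamiliar with the scalar EIM that the vector-valued variants extend, the greedy selection of basis functions and magic points can be sketched as follows. Names, the test family and the tolerance are ours, not the paper's.

```python
import numpy as np

def eim(snapshots, n_terms, tol=1e-10):
    """Greedy scalar EIM on a snapshot matrix (one snapshot per column).

    Illustrative sketch only; the paper's vector-valued variants extend
    this scalar construction.
    """
    U = snapshots
    j = int(np.argmax(np.max(np.abs(U), axis=0)))   # worst snapshot
    p = int(np.argmax(np.abs(U[:, j])))             # first magic point
    basis, points = [U[:, j] / U[p, j]], [p]
    for _ in range(1, n_terms):
        Q, P = np.column_stack(basis), np.array(points)
        # Interpolate every snapshot at the magic points, take residuals
        coeffs = np.linalg.solve(Q[P, :], U[P, :])
        R = U - Q @ coeffs
        j = int(np.argmax(np.max(np.abs(R), axis=0)))
        p = int(np.argmax(np.abs(R[:, j])))
        if abs(R[p, j]) < tol:                      # dataset exhausted
            break
        basis.append(R[:, j] / R[p, j])             # normalized residual
        points.append(p)
    return np.column_stack(basis), points

# Snapshots of a one-parameter family on a 1D grid
x = np.linspace(0.0, 1.0, 101)
U = np.column_stack([1.0 / (1.0 + m * x**2) for m in (1, 5, 25, 125)])
Q, pts = eim(U, 4)
```

By construction, the interpolation matrix `Q[pts, :]` is lower triangular with unit diagonal (up to round-off), and snapshots selected during the greedy process are reconstructed exactly, which is the property the VEIM coefficient matrices preserve componentwise.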

Florian de Vuyst

[hal-04129681] Accounting for inspection errors and change in maintenance behaviour

We propose a way to account for inspection errors in a particular framework. We consider a situation where the lifetime of a system depends essentially on a particular part. A deterioration of this part is regarded as an unacceptable state for the safety of the system, and a major renewal is deemed necessary. Thus the statistical analysis of the deterioration time distribution of this part is of primary interest for the preventive maintenance of the system. In this context, we faced the following problem. In the early life of the system, unwarranted renewals of the part are decided upon, caused by overly cautious behaviour. Such unnecessary renewals make the statistical analysis of deterioration time data difficult and can induce an underestimation of the mean life of the part. To overcome this difficulty, we propose to regard the problem as an incomplete data model. We present its estimation under the maximum likelihood methodology. Numerical experiments show that this approach eliminates the pessimistic bias in the estimation of the mean life of the part. We also present a Bayesian analysis of the problem, which can be useful in a small sample setting.

Gilles Celeux

[hal-03564379] Lebesgue Induction and Tonelli’s Theorem in Coq

Lebesgue integration is a well-known mathematical tool, used for instance in probability theory, real analysis, and numerical mathematics. Thus, its formalization in a proof assistant is to be designed to fit different goals and projects. Once the Lebesgue integral is formally defined and the first lemmas are proved, the question of the convenience of the formalization naturally arises. To check it, a useful extension is Tonelli's theorem, stating that the (double) integral of a nonnegative measurable function of two variables can be computed by iterated integrals, allowing one to switch the order of integration. This article describes the formal definition and proof in Coq of product sigma-algebras, product measures and their uniqueness, and the construction of iterated integrals, up to Tonelli's theorem. We also advertise the Lebesgue induction principle provided by an inductive type for nonnegative measurable functions.

Sylvie Boldo

[hal-03105815] Lebesgue integration. Detailed proofs to be formalized in Coq

To obtain the highest confidence in the correctness of numerical simulation programs implementing the finite element method, one has to formalize the mathematical notions and results that establish the soundness of the method. Sobolev spaces are the mathematical framework in which most weak formulations of partial differential equations are stated, and where solutions are sought. These functional spaces are built on integration and measure theory. Hence, this chapter in functional analysis is a mandatory theoretical cornerstone for the definition of the finite element method. The purpose of this document is to provide the formal proof community with very detailed pen-and-paper proofs of the main results from integration and measure theory.

François Clément

[hal-03889276] A Coq Formalization of Lebesgue Induction Principle and Tonelli’s Theorem

Lebesgue integration is a well-known mathematical tool, used for instance in probability theory, real analysis, and numerical mathematics. Thus, its formalization in a proof assistant is to be designed to fit different goals and projects. Once the Lebesgue integral is formally defined and the first lemmas are proved, the question of the convenience of the formalization naturally arises. To check it, a useful extension is Tonelli's theorem, stating that the (double) integral of a nonnegative measurable function of two variables can be computed by iterated integrals, allowing one to switch the order of integration. This article describes the formal definition and proof in Coq of product sigma-algebras, product measures and their uniqueness, and the construction of iterated integrals, up to Tonelli's theorem. We also advertise the Lebesgue induction principle provided by an inductive type for nonnegative measurable functions.

Sylvie Boldo

[hal-03879762] AptaMat: a matrix-based algorithm to compare single-stranded oligonucleotides secondary structures

Motivation: Comparing the secondary structures of single-stranded nucleic acids (ssNAs) is fundamental when investigating their function and evolution and when predicting the effect of mutations on their structures. Many comparison metrics exist, although they are either too elaborate or not sensitive enough to distinguish close ssNA structures. Results: In this context, we developed AptaMat, a simple and sensitive algorithm for ssNA secondary structure comparison, based on matrices representing the ssNA secondary structures and a metric built upon the Manhattan distance in the plane. We applied AptaMat to several examples and compared the results to those obtained by the most frequently used metrics, namely the Hamming distance and RNAdistance, and by a recently developed image-based approach. We showed that AptaMat is able to discriminate between similar sequences, outperforming all the other metrics considered here. In addition, we showed that AptaMat was able to correctly classify 14 RFAM families within a clustering procedure.
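As a toy illustration of the underlying idea, comparing structures through the positions of their base pairs, viewed as points (i, j) in the plane, with a Manhattan distance, one can write the following. This is our simplified sketch, not the published AptaMat algorithm.

```python
def base_pairs(db):
    """Base pairs (i, j) of a well-formed dot-bracket string."""
    stack, out = [], set()
    for i, c in enumerate(db):
        if c == '(':
            stack.append(i)
        elif c == ')':
            out.add((stack.pop(), i))
    return out

def toy_structure_distance(db1, db2):
    """Toy dissimilarity: symmetrized average Manhattan distance from each
    base pair to the nearest base pair of the other structure. A sketch of
    the matrix/Manhattan idea, not the published AptaMat metric."""
    p1, p2 = base_pairs(db1), base_pairs(db2)
    if not p1 or not p2:
        return float(len(p1 ^ p2))
    def one_way(a, b):
        return sum(min(abs(i - k) + abs(j - l) for (k, l) in b)
                   for (i, j) in a)
    return (one_way(p1, p2) + one_way(p2, p1)) / (len(p1) + len(p2))

d_same = toy_structure_distance("((..))", "((..))")   # identical: 0.0
d_diff = toy_structure_distance("(....)", "((..))")   # small but nonzero
```

Unlike the Hamming distance on dot-bracket strings, a geometric distance of this kind grows gradually as a base pair is shifted, which is what makes close structures distinguishable.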

Thomas Binet

[hal-03877389] Space-time-parameter PCA for data-driven modeling with application to Bioengineering

Principal component analysis is a recognized powerful and practical method in statistics and data science. It can also be used in modeling as a dimensionality reduction tool to achieve low-order models of complex multiphysics or engineering systems. Model-order reduction (MOR) methodologies today are an important topic for engineering design and analysis. Design space exploration or accelerated numerical optimization for example are made easier by the use of reduced-order models. In this chapter, we will talk about the use of higher-order singular value decompositions (HOSVD) applied to spatiotemporal problems that are parameterized by a set of design variables or physical parameters. Here we consider a data-driven reduced order modeling based on a design of computer experiment: from high-dimensional computational results returned by high-fidelity solvers (e.g. finite element ones), the HOSVD allows us to determine spatial, time and parameters principal components. The dynamics of the system can then be retrieved by identifying the low-order discrete dynamical system. As application, we will consider the dynamics of deformable capsules flowing into microchannels. The study of such fluid-structure interaction problems is motivated by the use of microcapsules as innovative drug delivery carriers through blood vessels.
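The HOSVD step described above can be sketched generically with numpy: one factor matrix per mode (space, time, parameter) obtained from the SVD of the corresponding unfolding, plus a core tensor. The shapes and ranks below are illustrative; this is not the chapter's implementation.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD: one factor matrix per mode plus a core
    tensor. A generic numpy sketch."""
    factors = []
    for k, r in enumerate(ranks):
        # Unfold T along mode k and keep the r leading left singular vectors
        M = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for k, U in enumerate(factors):     # project every mode onto its basis
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, k)), 0, k)
    return core, factors

# Toy space x time x parameter tensor of multilinear rank (2, 2, 2)
rng = np.random.default_rng(0)
A, B, C = (rng.normal(size=(n, 2)) for n in (6, 5, 4))
T = np.einsum('ia,jb,kc,abc->ijk', A, B, C, rng.normal(size=(2, 2, 2)))
core, factors = hosvd(T, (2, 2, 2))
That = core
for k, U in enumerate(factors):         # reconstruct from the reduced model
    That = np.moveaxis(np.tensordot(U, That, axes=(1, k)), 0, k)
```

Because this toy tensor has exact multilinear rank (2, 2, 2), the reconstruction matches `T` to machine precision; on real snapshot data the truncation ranks trade accuracy for model size.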

Florian de Vuyst

[hal-03858196] Uniqueness’ Failure for the Finite Element Cauchy-Poisson’s Problem

We focus on the ill-posed data completion problem and its finite element approximation, when recast via the variational duplication Kohn-Vogelius artifice and the condensation Steklov-Poincaré operators. We try to understand the useful hidden features of both the exact and discrete problems. When discretized with finite elements of degree one, the discrete and exact problems behave in diametrically opposite ways. Indeed, existence of the discrete solution is always guaranteed, while its uniqueness may be lost. In contrast, the solution of the exact problem may not exist, but it is unique. We show how the existence of so-called "weak spurious modes" of the exact variational formulation is the source of instability and the reason why existence may fail. For the discrete problem, we find that the cause of non-uniqueness is actually the occurrence of "spurious modes". We track their fading effect asymptotically as the mesh size tends to zero. In order to restore uniqueness, we recall the discrete version of the Holmgren principle, introduced in [Azaïez et al, IPSE, 18, 2011], and we discuss the effect of the finite element mesh on uniqueness, using some basic material from graph theory.

F Ben Belgacem

[hal-03471095] A Coq Formalization of Lebesgue Integration of Nonnegative Functions

Integration, just as much as differentiation, is a fundamental calculus tool that is widely used in many scientific domains. Formalizing the mathematical concept of integration and the associated results in a formal proof assistant helps in providing the highest confidence on the correctness of numerical programs involving the use of integration, directly or indirectly. By its capability to extend the (Riemann) integral to a wide class of irregular functions, and to functions defined on more general spaces than the real line, the Lebesgue integral is perfectly suited for use in mathematical fields such as probability theory, numerical mathematics, and real analysis. In this article, we present the Coq formalization of $\sigma$-algebras, measures, simple functions, and integration of nonnegative measurable functions, up to the full formal proofs of the Beppo Levi (monotone convergence) theorem and Fatou's lemma. More than a plain formalization of the known literature, we present several design choices made to balance the harmony between mathematical readability and usability of Coq theorems. These results are a first milestone toward the formalization of $L^p$~spaces such as Banach spaces.

Sylvie Boldo

[hal-02987394] Application of three approaches for quantitative AOP development to renal toxicity

[...]

Elias Zgheib

[hal-02512652] Analytical preconditioners for Neumann elastodynamic Boundary Element Methods

Recent works in the Boundary Element Method (BEM) community have been devoted to the derivation of fast techniques to perform the matrix-vector product needed in the iterative solver. Fast BEMs are now very mature. However, it has been shown that the number of iterations can significantly hinder the overall efficiency of fast BEMs. The derivation of robust preconditioners is now inevitable to increase the size of the problems that can be considered. Analytical preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of discretization. The main contribution of this paper is to propose new analytical preconditioners to treat Neumann exterior scattering problems in 2D and 3D elasticity. These preconditioners are local approximations of the adjoint Neumann-to-Dirichlet map. We propose three approximations with different orders. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). An analytical spectral study confirms the expected behavior of the preconditioners, i.e., a better eigenvalue clustering, especially in the elliptic part, contrary to the standard first-kind CFIE. We provide various 2D numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be independent of the density of discretization points per wavelength, which is not the case for the standard CFIE. In addition, it is less sensitive to the frequency.

Stéphanie Chaillat

[hal-03339115] A Data-Driven Space-Time-Parameter Reduced-Order Model with Manifold Learning for Coupled Problems: Application to Deformable Capsules Flowing in Microchannels

An innovative data-driven model-order reduction technique is proposed to model dilute micrometric or nanometric suspensions of microcapsules, i.e., microdrops protected in a thin hyperelastic membrane, which are used in Healthcare as innovative drug vehicles. We consider a microcapsule flowing in a similar-size microfluidic channel and vary systematically the governing parameter, namely the capillary number, ratio of the viscous to elastic forces, and the confinement ratio, ratio of the capsule to tube size. The resulting space-time-parameter problem is solved using two global POD reduced bases, determined in the offline stage for the space and parameter variables, respectively. A suitable low-order spatial reduced basis is then computed in the online stage for any new parameter instance. The time evolution of the capsule dynamics is achieved by identifying the nonlinear low-order manifold of the reduced variables; for that, a point cloud of reduced data is computed and a diffuse approximation method is used. Numerical comparisons between the full-order fluid-structure interaction model and the reduced-order one confirm both accuracy and stability of the reduction technique over the whole admissible parameter domain. We believe that such an approach can be applied to a broad range of coupled problems especially involving quasistatic models of structural mechanics.

Toufik Boubehziz

[hal-02921498] A Game Theoretic Approach for Privacy Preserving Model in IoT-Based Transportation

Internet of Things (IoT) applications using sensors and actuators raise new privacy-related threats, such as the tracking and profiling of drivers and vehicles. These threats can be addressed by developing adaptive and context-aware privacy protection solutions that face the environmental constraints (memory, energy, communication channel, etc.) which limit the applicability of cryptographic schemes. This paper proposes a privacy-preserving solution in the ITS context relying on a game-theoretic model between two actors (a data holder and a data requester), where the requester either offers an incentive in exchange for a privacy concession or leads an active attack. We describe the game elements (actors, roles, states, strategies, and transitions), and find an equilibrium point reaching a compromise between privacy concessions and incentive motivation. Finally, we present numerical results to analyze and evaluate the game-theory-based theoretical formulation.

Arbia Riahi Sfar

[hal-02635638] Characterization of the chemical composition of aerosols generated by laser cutting of corium simulants

In the context of the dismantling of the Fukushima Daiichi reactors, several projects have been funded by the Japanese government to prepare the corium retrieval operations. Within this framework, a joint study conducted by ONET Technologies and the laboratories of the CEA and IRSN demonstrated the feasibility of using the laser cutting technique and estimated the aerosol source term thus generated. Two corium simulants, synthesized and characterized by CEA-Cadarache, underwent laser cutting tests in air and under water in the DELIA facility at CEA Saclay, and the emitted aerosols were characterized by IRSN. The characterization of the emitted particles in terms of concentration and size distribution provided information to predict, in particular, particle transport and deposition, but knowledge of the chemical composition per size class is necessary for a better management of occupational and environmental risks. This article presents the results concerning the characterization of the chemical composition of the aerosol from a corium simulant, under laser cutting in air, and the associated size distribution.

Emmanuel Porcheron

[hal-02274493] A posteriori estimates distinguishing the error components and adaptive stopping criteria for numerical approximations of parabolic variational inequalities

We consider in this paper a model parabolic variational inequality. This problem is discretized with conforming Lagrange finite elements of order $p ≥ 1$ in space and with the backward Euler scheme in time. The nonlinearity coming from the complementarity constraints is treated with any semismooth Newton algorithm, and our analysis accounts for an arbitrary iterative algebraic solver. In the case $p = 1$, when the system of nonlinear algebraic equations is solved exactly, we derive an a posteriori error estimate on both the energy error norm and a norm approximating the time derivative error. When $p ≥ 1$, we provide a fully computable and guaranteed a posteriori estimate in the energy error norm which is valid at each step of the linearization and algebraic solvers. Our estimate, based on equilibrated flux reconstructions, also distinguishes the discretization, linearization, and algebraic error components. We build an adaptive inexact semismooth Newton algorithm based on stopping the iterations of both solvers when the estimators of the corresponding error components no longer affect the overall estimate significantly. Numerical experiments are performed with the semismooth Newton-min algorithm and the semismooth Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative algebraic solver to illustrate the strengths of our approach.
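As a minimal sketch of the Newton-min linearization mentioned above (not the paper's finite element discretization), consider the finite-dimensional complementarity system $0 \le x \perp Ax - b \ge 0$, rewritten componentwise as $\min(x, Ax - b) = 0$; the matrix and right-hand side below are a made-up toy problem:

```python
import numpy as np

def newton_min(A, b, x0, tol=1e-10, max_iter=50):
    """Semismooth Newton-min for the complementarity system
    min(x, A x - b) = 0 (componentwise): at each step the active set
    {i : x_i <= (A x - b)_i} is frozen and one linear system is solved."""
    x = x0.copy()
    n = len(b)
    for _ in range(max_iter):
        r = np.minimum(x, A @ x - b)
        if np.linalg.norm(r) < tol:
            break
        active = x <= A @ x - b              # rows where the min selects x_i
        J = np.where(active[:, None], np.eye(n), A)   # generalized Jacobian
        x = x - np.linalg.solve(J, r)
    return x

# Toy obstacle-like problem: tridiagonal discrete Laplacian A, source b.
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.array([-1.0, -1.0, 1.0, 1.0, 1.0])
x = newton_min(A, b, np.ones(n))
print(np.abs(np.minimum(x, A @ x - b)).max())  # complementarity residual ≈ 0
```

At each iteration the rows of the generalized Jacobian are either identity rows (active constraints) or rows of A; for an M-matrix such as this discrete Laplacian the iteration converges here in a few active-set updates.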

ano.nymous@ccsd.cnrs.fr.invalid (Jad Dabaghi), Jad Dabaghi

[hal-01666845] Adaptive inexact semismooth Newton methods for the contact problem between two membranes

We propose an adaptive inexact version of a class of semismooth Newton methods that is aware of the continuous (variational) level. As a model problem, we study the system of variational inequalities describing the contact between two membranes. This problem is discretized with conforming finite elements of order $p \geq 1$, yielding a nonlinear algebraic system of variational inequalities. We consider any iterative semismooth linearization algorithm, such as Newton-min or Newton–Fischer–Burmeister, which we complement by any iterative linear algebraic solver. We then derive an a posteriori estimate on the error between the exact solution at the continuous level and the approximate solution which is valid at any step of the linearization and algebraic resolutions. Our estimate is based on flux reconstructions in discrete subspaces of $\mathbf{H}(\mathrm{div}, \Omega)$ and on potential reconstructions in discrete subspaces of $H^1(\Omega)$ satisfying the constraints. It distinguishes the discretization, linearization, and algebraic components of the error. Consequently, we can formulate adaptive stopping criteria for both solvers, giving rise to an adaptive version of the considered inexact semismooth Newton algorithm. Under these criteria, the efficiency of the leading estimates is also established, meaning that we prove them equivalent to the error up to a generic constant. Numerical experiments for the Newton-min algorithm in combination with the GMRES algebraic solver confirm the efficiency of the developed adaptive method.

ano.nymous@ccsd.cnrs.fr.invalid (Jad Dabaghi), Jad Dabaghi

[hal-01349456] Approaching a mountain territory: human occupation and pedo-sedimentary context on the slopes of the Petit-Saint-Bernard pass, from Prehistory to Antiquity

As part of a multi-year program, test-pit campaigns were carried out on both slopes of the Petit-Saint-Bernard pass (2188 m, western Alps), between 750 and 3000 m of altitude. The working method sets aside surface surveys in favor of numerous hand-dug test pits, located in selected topographic contexts and carried down to the base of the Holocene fills. The results document, over the long term, the evolution of pedo-sedimentary dynamics and the human use of the different altitude belts. The significance of the collected archaeological data is discussed with respect to the state of knowledge in a comparison zone comprising the neighboring valleys of the western Alps, to existing settlement models, and to the taphonomic indications provided by the pedo-sedimentary study. A program of complementary analyses intended to refine the context, taphonomy, and functional status [...]

ano.nymous@ccsd.cnrs.fr.invalid (Pierre-Jérôme Rey), Pierre-Jérôme Rey

[hal-01919067] A posteriori error estimates for a compositional two-phase flow with nonlinear complementarity constraints

In this work, we develop an a-posteriori-steered algorithm for a compositional two-phase flow with exchange of components between the phases in porous media. As a model problem, we choose the two-phase liquid-gas flow with appearance and disappearance of the gas phase, formulated as a system of nonlinear evolutive partial differential equations with nonlinear complementarity constraints. The discretization of our model is based on the backward Euler scheme in time and the finite volume scheme in space. The resulting nonlinear system is solved via an inexact semismooth Newton method. The key ingredients for the a posteriori analysis are the discretization, linearization, and algebraic flux reconstructions, which allow us to devise estimators for each error component. These enable us to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver whenever the corresponding error components no longer affect the overall error significantly. Numerical experiments are performed using the Newton-min algorithm as well as the Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative linear solver to show the efficiency of the proposed adaptive method.

ano.nymous@ccsd.cnrs.fr.invalid (Ibtihel Ben Gharbia), Ibtihel Ben Gharbia

[tel-01084237] Contribution to the physical modeling of actinide quantification by electron probe microanalysis

Electron probe microanalysis (EPMA) makes it possible to quantify, with great accuracy, the elemental concentrations of samples of unknown composition. It can be used, for example, to quantify the actinides present in fresh or irradiated nuclear fuels, to support nuclear waste management, or to date certain rocks. Unfortunately, these quantitative analyses are not always feasible because reference standards are unavailable for some actinides. To overcome this difficulty, a so-called "standardless" analysis method can be employed, using virtual standards. The latter are obtained from empirical formulas or from calculations based on theoretical models. However, these calculations require physical parameters that are generally poorly known, as is the case for X-ray production cross sections. Accurate knowledge of these cross sections is required in many applications, such as particle transport codes and Monte Carlo simulations. These codes are widely used in medicine, particularly in medical imaging and in electron-beam treatments. In astronomy, these data are used in simulations to predict the composition of stars and galactic clouds, as well as the formation of planetary systems. In this work, the L- and M-line X-ray production cross sections of lead, thorium, and uranium were measured by electron impact on self-supporting thin targets with thicknesses ranging from 0.2 to 8 nm. The experimental results were compared with theoretical ionization cross sections computed with the distorted-wave Born approximation (DWBA) and with the predictions of analytical formulas used in practical applications.
The ionization cross sections were converted into X-ray production cross sections using atomic relaxation parameters taken from the literature. The theoretical DWBA results are in excellent agreement with the experimental data. This confirms the predictions of the model and validates its use for computing virtual standards. The model predictions were integrated into the Monte Carlo code PENELOPE in order to compute the X-ray intensity produced by pure actinide standards. The calculations were performed for elements with atomic numbers 89 ≤ Z ≤ 99 and for accelerating voltages ranging from the ionization threshold up to 40 kV, in steps of 0.5 kV. For practical use, the intensities computed for the most intense L and M lines were gathered into a database. The predictions of the resulting virtual standards were compared with measurements on samples of known composition (U, UO2, ThO2, ThF4, PuO2…) and with data acquired in previous measurement campaigns. Actinide quantification using these virtual standards showed good agreement with the expected results. This confirms the reliability of the virtual standards developed and demonstrates that actinide quantification by electron probe microanalysis can be performed without actinide standards and with a good level of confidence.

ano.nymous@ccsd.cnrs.fr.invalid (Aurélien Moy), Aurélien Moy

[hal-01700663] A Lagrange multiplier method for a discrete fracture model for flow in porous media

In this work we present a novel discrete fracture model for single-phase Darcy flow in porous media with fractures of co-dimension one, which introduces an additional unknown at the fracture interface. Inspired by the fictitious domain method, this Lagrange multiplier couples the fracture and matrix domains and represents a local exchange of fluid. The multipliers naturally impose the equality of the pressures at the fracture interface. The model is thus appropriate for domains with fractures of permeability higher than that of the surrounding bulk domain. In particular, the novel approach allows for independent, regular meshing of the fracture and matrix domains and therefore avoids the generation of small elements. We show existence and uniqueness of the weak solution of the continuous primal formulation. Moreover, we discuss the discrete inf-sup condition of two different finite element formulations. Several numerical examples verify the accuracy and convergence of the proposed method.

ano.nymous@ccsd.cnrs.fr.invalid (Markus Köppel), Markus Köppel

[hal-01761591] A stabilized Lagrange multiplier finite-element method for flow in porous media with fractures

In this work we introduce a stabilized numerical method for a multi-dimensional discrete fracture model (DFM) for single-phase Darcy flow in fractured porous media. In the model, introduced in an earlier work, flow in the (n − 1)-dimensional fracture domain is coupled with that in the n-dimensional bulk or matrix domain by the use of Lagrange multipliers. The model thus permits a finite element discretization in which the meshes in the fracture and matrix domains are independent, so that irregular meshing and in particular the generation of small elements can be avoided. The numerical formulation is a saddle-point problem based on a primal variational formulation for flow in the matrix domain and in the fracture system; in this paper we introduce into it a consistent stabilizing term that penalizes discontinuities in the Lagrange multipliers. For this penalized scheme we show stability and prove convergence. With numerical experiments we analyze the performance of the method for various choices of the penalization parameter and compare with other numerical DFMs.

ano.nymous@ccsd.cnrs.fr.invalid (Markus Köppel), Markus Köppel

[hal-01581807] Formal proof of the Lax–Milgram theorem

Summary of the paper "A Coq formal proof of the Lax-Milgram Theorem", CPP 2017.

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

[hal-01525249] Shape sensitivity analysis for elastic structures with generalized impedance boundary conditions of the Wentzell type -Application to compliance minimization

This paper focuses on Generalized Impedance Boundary Conditions (GIBC) with second order derivatives in the context of linear elasticity and general curved interfaces. A condition of the Wentzell type modeling thin layer coatings on some elastic structure is obtained through an asymptotic analysis of order one of the transmission problem at the thin layer interfaces with respect to the thickness parameter. We prove the well-posedness of the approximate problem and the theoretical quadratic accuracy of the boundary conditions. Then we perform a shape sensitivity analysis of the GIBC model in order to study a shape optimization/optimal design problem. We prove the existence of and characterize the first shape derivative of this model. A comparison with the asymptotic expansion of the first shape derivative associated with the original thin layer transmission problem shows that we can interchange the asymptotic and shape derivative analyses. Finally we apply these results to the compliance minimization problem. We compute the shape derivative of the compliance in this context and present some numerical simulations.

ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet

[hal-01523020] Fast iterative boundary element methods for high-frequency scattering problems in 3D elastodynamics

The fast multipole method is an efficient technique to accelerate the solution of large-scale 3D scattering problems with boundary integral equations. However, the fast multipole accelerated boundary element method (FM-BEM) is intrinsically based on an iterative solver, and it has been shown that the number of iterations can significantly hinder the overall efficiency of the FM-BEM. Robust preconditioners for the FM-BEM are therefore needed to increase the size of the problems that can be considered. The main constraint in the context of the FM-BEM is that the complete system is not assembled, in order to reduce computational times and memory requirements. Analytic preconditioners offer a very interesting strategy, improving the spectral properties of the boundary integral equations ahead of the discretization. The main contribution of this paper is to combine an approximate adjoint Dirichlet-to-Neumann (DtN) map as an analytic preconditioner with an FM-BEM solver to treat Dirichlet exterior scattering problems in 3D elasticity. The approximations of the adjoint DtN map are derived using tools proposed in [40]. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). We provide various numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be completely independent of the number of degrees of freedom and of the frequency for convex obstacles.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

[hal-01492141] A two-scale approach for accounting for surface defects in the failure analysis of structures

The aim of this work is to account for the influence of surface defects on the behavior of structures up to failure, without a fine description of the geometry of the perturbations. The proposed approach relies mainly on two tools: a careful asymptotic analysis of the Navier equations and the use of strong-discontinuity models. A strategy for coupling the two approaches, allowing the behavior of the structure to be analyzed up to failure, is also presented.

ano.nymous@ccsd.cnrs.fr.invalid (Delphine Brancherie), Delphine Brancherie

[hal-01391578] A Coq formal proof of the Lax–Milgram theorem

The Finite Element Method is a widely used method to solve numerical problems coming, for instance, from physics or biology. To obtain the highest confidence in the correctness of numerical simulation programs implementing the Finite Element Method, one has to formalize the mathematical notions and results needed to establish the soundness of the method. The Lax–Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states the existence and uniqueness of the solution to the weak formulation of some boundary value problems. This article presents the full formal proof of the Lax–Milgram theorem in Coq. It requires many results from linear algebra, geometry, functional analysis, and Hilbert spaces.
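For reference, the theorem being formalized has the following standard statement (paraphrased in usual mathematical notation, not quoted from the Coq development):

```latex
\begin{theorem}[Lax--Milgram]
Let $H$ be a real Hilbert space and let $a : H \times H \to \mathbb{R}$ be a
bilinear form that is bounded, $|a(u,v)| \le M \,\|u\|\,\|v\|$, and coercive,
$a(u,u) \ge \alpha \|u\|^2$ with $\alpha > 0$. Then for every continuous linear
functional $f \in H'$ there exists a unique $u \in H$ such that
\[
  a(u,v) = f(v) \qquad \text{for all } v \in H,
\]
and this solution satisfies the a priori bound
$\|u\| \le \|f\|_{H'} / \alpha$.
\end{theorem}
```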

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

[hal-01279503] First-order indicators for the estimation of discrete fractures in porous media

Faults and geological barriers can drastically affect the flow patterns in porous media. Such fractures can be modeled as interfaces that interact with the surrounding matrix. We propose a new technique for estimating the location and hydrogeological properties of a small number of large fractures in a porous medium from given distributed pressure or flow data. At each iteration, the algorithm builds a short list of candidates by comparing fracture indicators, which quantify at first order the decrease of a data misfit function and are cheap to compute. The best candidate is then selected by minimizing the objective function for each candidate in the list. Optimally driven by the fit to the data, the approach has the great advantage of requiring neither remeshing nor shape derivation. The stability of the algorithm is shown on a series of numerical examples representative of typical situations.
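A toy sketch of this greedy indicator loop, with orthonormal synthetic "signatures" standing in for the Darcy-flow responses of candidate fractures (everything below is invented for illustration, not the paper's model):

```python
import numpy as np

# Synthetic linear stand-in for the forward model: each candidate fracture c
# perturbs the data by amplitude * g[c]. Orthonormal signatures keep the
# example well separated; this replaces the actual Darcy-flow simulation.
rng = np.random.default_rng(0)
n_data, n_cand = 40, 10
Q, _ = np.linalg.qr(rng.normal(size=(n_data, n_cand)))
g = Q.T                                  # rows: orthonormal candidate signatures
data = 3.0 * g[4] + 1.5 * g[7]           # data generated by fractures 4 and 7

def misfit(model):
    return 0.5 * np.sum((data - model) ** 2)

model = np.zeros(n_data)
chosen = []
for _ in range(2):                       # greedily add two fractures
    r = data - model
    ind = np.abs(g @ r)                  # cheap first-order indicators
    ind[chosen] = -np.inf                # never re-select a fracture
    shortlist = np.argsort(ind)[-3:]     # short list of best candidates
    # Pick the shortlisted candidate whose least-squares amplitude fit
    # decreases the misfit the most.
    def fitted(c):
        amp = (g[c] @ r) / (g[c] @ g[c])
        return model + amp * g[c]
    best = min(shortlist, key=lambda c: misfit(fitted(c)))
    model = fitted(best)
    chosen.append(int(best))

print(sorted(chosen))  # → [4, 7]
```

The indicator step costs one dot product per candidate, while the full misfit is only evaluated for the short list, mirroring the cheap-screening-then-minimization structure described in the abstract.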

ano.nymous@ccsd.cnrs.fr.invalid (Hend Ben Ameur), Hend Ben Ameur

[hal-01344090] The Lax–Milgram theorem. A detailed proof to be formalized in Coq

To obtain the highest confidence in the correctness of numerical simulation programs implementing the finite element method, one has to formalize the mathematical notions and results needed to establish the soundness of the method. The Lax-Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states the existence and uniqueness of the solution to the weak formulation of some boundary value problems. The purpose of this document is to provide the formal proof community with a very detailed pen-and-paper proof of the Lax-Milgram theorem.

ano.nymous@ccsd.cnrs.fr.invalid (François Clément), François Clément

[hal-01280269] Stationary Flow of Blood in a Rigid Vessel in the Presence of an External Magnetic Field : Considerations about the Forces and Wall Shear Stresses

The magnetohydrodynamics laws govern the motion of a conducting fluid, such as blood, in an externally applied static magnetic field $B_0$. When an artery is exposed to a magnetic field, the charged particles of the blood are deviated by the Lorentz force, thus inducing electrical currents and voltages along the vessel walls and in the neighboring tissues. Such a situation may occur in several biomedical applications: magnetic resonance imaging (MRI), magnetic drug transport and targeting, tissue engineering… In this paper, we consider the steady unidirectional blood flow in a straight circular rigid vessel with non-conducting walls, in the presence of an exterior static magnetic field. The exact solution of Gold (1962) (with the induced fields not neglected) is revisited. It is shown that the integral over a cross-section of the vessel of the longitudinal projection of the Lorentz force is zero, and that this result is related to the existence of current return paths, whose contributions compensate each other over the section. It is also demonstrated that the classical definition of the shear stresses cannot apply in this situation of magnetohydrodynamic flow because, due to the existence of the Lorentz force, the axisymmetry is broken.

ano.nymous@ccsd.cnrs.fr.invalid (Agnès Drochon), Agnès Drochon

[hal-01187242] Approximate local Dirichlet-to-Neumann map for three-dimensional time-harmonic elastic waves

It has been proven that the knowledge of an accurate approximation of the Dirichlet-to-Neumann (DtN) map is useful for a large range of applications in wave scattering problems. We are concerned in this paper with the construction of an approximate local DtN operator for time-harmonic elastic waves. The main contributions are the following. First, we derive exact operators using Fourier analysis in the case of an elastic half-space. These results are then extended to a general three-dimensional smooth closed surface by using a local tangent plane approximation. Next, a regularization step improves the accuracy of the approximate DtN operators and a localization process is proposed. Finally, a first application is presented in the context of the On-Surface Radiation Conditions method. The efficiency of the approach is investigated for various obstacle geometries at high frequencies.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

[hal-01084363] REAL-TIME WAVELET-BASED ALGORITHM FOR CARDIAC AND RESPIRATORY MRI GATING

A real-time algorithm for cardiac and respiratory gating, which only requires an ECG sensor, is proposed here. Three ECG electrodes are placed in such a manner that the modulation of the recorded ECG by the respiratory signal is maximal; hence, from a single signal we can achieve both cardiac and respiratory MRI gating. First, an off-line learning phase based on wavelet decomposition is run to compute an optimal QRS filter. Afterwards, the QRS filter is used to accomplish R-peak detection on the one hand, while on the other a low-pass filtering process retrieves the respiration cycle, so that the image acquisition sequences are triggered by the R peaks only during the expiration phase.
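A minimal stand-in for this pipeline, with a matched-filter template in place of the paper's wavelet-optimized QRS filter and a synthetic ECG signal, might look as follows (all signals and parameters below are invented for illustration):

```python
import numpy as np

fs = 500                                    # sampling rate (Hz), synthetic
t = np.arange(0, 10, 1 / fs)

def synthetic_ecg(t, rr=0.8, f_resp=0.25):
    """Spiky toy ECG: a narrow Gaussian 'QRS' every rr seconds, with a
    respiratory amplitude modulation and baseline wander at f_resp Hz."""
    phase = (t % rr) - rr / 2
    qrs = np.exp(-(phase / 0.01) ** 2)
    resp = 0.1 * np.sin(2 * np.pi * f_resp * t)
    return (1 + resp) * qrs + resp

x = synthetic_ecg(t)

# "Off-line learning": use one known beat as a matched-filter template
# (the paper instead optimizes this QRS filter via wavelet decomposition).
beat = x[:int(0.8 * fs)]
template = beat - beat.mean()

# R-peak detection: correlate, then keep local maxima above a threshold.
score = np.correlate(x - x.mean(), template, mode="same")
is_peak = ((score > 0.5 * score.max())
           & (score >= np.roll(score, 1))
           & (score >= np.roll(score, -1)))
peaks = np.flatnonzero(is_peak)

# Respiration retrieval: a 1 s moving average suppresses the QRS spikes.
resp_est = np.convolve(x, np.ones(fs) / fs, mode="same")

print(len(peaks), np.median(np.diff(peaks)) / fs)  # beat count, median RR (s)
```

A gating trigger would then fire on each detected R peak whenever the low-pass respiratory estimate indicates the expiration phase.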

ano.nymous@ccsd.cnrs.fr.invalid (D Abi-Abdallah), D Abi-Abdallah

[hal-01084362] REMOVING THE MHD ARTIFACTS FROM THE ECG SIGNAL FOR CARDIAC MRI SYNCHRONIZATION

Blood flow in high static magnetic fields induces elevated voltages that disrupt the ECG signal recorded simultaneously during MRI scans for synchronization purposes. This is known as the magnetohydrodynamic (MHD) effect; it increases the amplitude of the T wave, thus hindering correct R-peak detection. In this paper, we present an algorithm for extracting an efficient reference signal from an ECG contaminated by the Nuclear Magnetic Resonance (NMR) environment, which performs a good separation of the R wave and the MHD artifacts. The proposed signal processing method is based on sub-band decomposition using the wavelet transform, and has been tested on human and small-rodent ECG signals acquired during MRI scans at various magnetic field intensities. The results showed an almost flawless trigger generation in fields up to 4.7 Tesla during the three tested imaging sequences (GE, FSE and IRSE).

ano.nymous@ccsd.cnrs.fr.invalid (D Abi-Abdallah), D Abi-Abdallah

[hal-01084357] Alterations in human ECG due to the MagnetoHydroDynamic effect : A method for accurate R peak detection in the presence of high MHD artifacts

Blood flow in high static magnetic fields induces elevated voltages that contaminate the ECG signal, which is recorded simultaneously during MRI scans for synchronization purposes. This is known as the magnetohydrodynamic (MHD) effect; it increases the amplitude of the T wave, thus hindering correct R-peak detection. In this paper, we inspect the MHD-induced alterations of human ECG signals recorded in a 1.5 Tesla steady magnetic field and establish a primary characterization of the induced changes using time and frequency domain analysis. We also reexamine our previously developed real-time algorithm for MRI cardiac gating and determine that, with a minor modification, this algorithm is capable of achieving perfect detection even in the presence of strong MHD artifacts.

ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi Abdallah), Dima Abi Abdallah

[hal-01083996] Cardiac and respiratory MRI gating using combined wavelet sub-band decomposition and adaptive filtering

Cardiac Magnetic Resonance Imaging (MRI) requires synchronization to overcome motion-related artifacts caused by the heart's contractions and the chest wall movements during respiration. Achieving good image quality necessitates combining cardiac and respiratory gating to produce, in real time, a trigger signal that sets off the consecutive image acquisitions. This guarantees that the data collection always starts at the same point of the cardiac cycle during the exhalation phase. In this paper, we present a real-time algorithm for extracting a cardiac-respiratory trigger signal using only one, adequately placed, ECG sensor. First, an off-line calculation phase, based on wavelet decomposition, is run to compute an optimal QRS filter. This filter is used afterwards to accomplish R-peak detection, while a low-pass filtering process allows the retrieval of the respiration cycle. The algorithm's synchronization capabilities were assessed during mice cardiac MRI sessions employing three different imaging sequences and three specific wavelet functions. The prominent image enhancement provided good evidence of correct triggering. QRS detection was almost flawless for all signals. As for the respiration cycle retrieval, it was evaluated on contaminated simulated signals, which were artificially modulated to imitate respiration. The results were quite satisfactory.

ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi-Abdallah), Dima Abi-Abdallah

[hal-01083975] Pulsed magnetohydrodynamic blood flow in a rigid vessel under physiological pressure gradient

Blood flow in a steady magnetic field has been of great interest over the past years. Many researchers have examined the effects of magnetic fields on velocity profiles and arterial pressure, and major studies focused on steady or sinusoidal flows. In this paper we present a solution for pulsed magnetohydrodynamic blood flow with a somewhat realistic physiological pressure wave obtained using a windkessel lumped model. A pressure gradient is derived along a rigid vessel placed at the output of a compliant module which receives the ventricle outflow. Then, velocity profile and flow rate expressions are derived in the rigid vessel in the presence of a steady transverse magnetic field. As expected, the results showed flow retardation and flattening. The adaptability of our solution approach allowed a comparison with previously addressed flow cases, and the calculations showed good agreement with those well-established solutions.

ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi Abdallah), Dima Abi Abdallah

[hal-00937113] An extremal eigenvalue problem for the Wentzell-Laplace operator

We consider the question of giving an upper bound for the first nontrivial eigenvalue of the Wentzell-Laplace operator of a domain $\Omega$, involving only geometric information. We provide such an upper bound by generalizing Brock's inequality concerning Steklov eigenvalues, and we conjecture that balls maximize the Wentzell eigenvalue in a suitable class of domains, which would improve our bound. To support this conjecture, we prove that balls are critical domains for the Wentzell eigenvalue in any dimension, and that they are local maximizers in dimensions 2 and 3, using a second-order sensitivity analysis. We also provide some numerical evidence.

ano.nymous@ccsd.cnrs.fr.invalid (Marc Dambrine), Marc Dambrine

[hal-00780735] Shape optimization methods for the Inverse Obstacle Problem with generalized impedance boundary conditions

We aim to reconstruct an inclusion ω immersed in a perfect fluid flowing in a larger bounded domain Ω via boundary measurements on ∂Ω. The obstacle ω is assumed to have a thin layer and is then modeled using generalized boundary conditions (precisely, Ventcel boundary conditions). We first obtain an identifiability result (i.e. the uniqueness of the solution of the inverse problem) for annular configurations through explicit computations. Then, this inverse problem of reconstructing ω is studied thanks to the tools of shape optimization by minimizing a least-squares type cost functional. We prove the existence of the shape derivatives with respect to the domain ω and characterize the gradient of this cost functional in order to make a numerical resolution. We also characterize the shape Hessian and prove that this inverse obstacle problem is unstable in the following sense: the functional is degenerate for highly oscillating perturbations. Finally, we present some numerical simulations in order to confirm and extend our theoretical results.

ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet

[hal-00780730] Stability of critical shapes for the drag minimization problem in Stokes flow

We study the stability of some critical (or equilibrium) shapes in the minimization problem of the energy dissipated by a fluid (i.e. the drag minimization problem) governed by the Stokes equations. We first compute the shape derivative up to the second order, then provide a sufficient condition for the shape Hessian of the energy functional to be coercive at a critical shape. Under this condition, the existence of such a local strict minimum is then proved using a precise upper bound for the variations of the second order shape derivative of the functional with respect to the coercivity and differentiability norms. Finally, for smooth domains, a lower bound of the variations of the drag is obtained in terms of the measure of the symmetric difference of domains.

ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet

[hal-00731856] On the necessity of Nitsche term

The aim of this article is to explore the possibility of using a family of fixed finite element shape functions to solve a Dirichlet boundary value problem with an alternative variational formulation. The domain is embedded in a bounding box and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or moving interfaces. This is not a new idea, but we analyze here a special approach. The main difficulty of the approach is that the associated quadratic form is not coercive and an inf-sup condition has to be checked. In dimension one, we prove that this formulation is well posed and we provide error estimates. Nevertheless, our proof, relying on explicit computations, is limited to that case, and we give numerical evidence in dimension two that the formulation does not provide a reliable method. We first add a regularization through a Nitsche term and observe that some instabilities still remain. We then introduce and justify a geometrical regularization. A reliable method is obtained using both regularizations.

ano.nymous@ccsd.cnrs.fr.invalid (Gaël Dupire), Gaël Dupire

[hal-00731528] On the necessity of Nitsche term. Part II : An alternative approach

The aim of this article is to explore the possibility of using a family of fixed finite element shape functions that does not match the domain to solve a boundary value problem with a Dirichlet boundary condition. The domain is embedded in a bounding box and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or moving interfaces. The Nitsche method has been intensively applied in this setting. However, the Nitsche term is weighted by the mesh size h and therefore reflects a purely discrete point of view, with no interpretation in terms of a continuous variational approach associated with a boundary value problem. In this paper, we introduce an alternative to the Nitsche method which is associated with a continuous bilinear form. This extension has a strong restriction: it needs more regularity on the data than the usual method. We prove the well-posedness of our formulation and error estimates. We provide numerical comparisons with the Nitsche method.

ano.nymous@ccsd.cnrs.fr.invalid (Jean-Paul Boufflet), Jean-Paul Boufflet

[hal-00684625] Persistency of wellposedness of Ventcel’s boundary value problem under shape deformations

Ventcel boundary conditions are second-order differential conditions that appear in asymptotic models. Like Robin boundary conditions, they lead to well-posed variational problems under a sign condition on the coefficient, which holds in the physical situations usually considered. Nevertheless, situations where this condition is violated have appeared in several recent works where absorbing boundary conditions, or boundary conditions equivalent to a rough surface, are sought for numerical purposes. The well-posedness of such problems was recently investigated: up to a countable set of parameter values, existence and uniqueness of the solution of the Ventcel boundary value problem hold without the sign condition. However, the values to be avoided depend on the domain on which the boundary value problem is set. In this work, we address the question of the persistence of solvability of the boundary value problem under domain deformation.

ano.nymous@ccsd.cnrs.fr.invalid (Marc Dambrine), Marc Dambrine

[hal-00678036] A Kohn-Vogelius formulation to detect an obstacle immersed in a fluid

The aim of our work is to reconstruct an inclusion immersed in a fluid flowing in a larger bounded domain from a boundary measurement. Here the fluid motion is assumed to be governed by the Stokes equations. We study the inverse problem with tools from shape optimization, by minimizing a Kohn-Vogelius type cost functional. We first characterize the gradient of this cost functional with a view to its numerical resolution. Then, in order to study the stability of the problem, we derive the expression of the shape Hessian. We show the compactness of the Riesz operator corresponding to this shape Hessian at a critical point, which explains why the inverse problem is ill-posed and why regularization methods are needed to solve it numerically. We illustrate these general results with explicit computations of the shape Hessian in particular geometries; in particular, we solve the Stokes equations explicitly in a concentric annulus. Finally, we present some numerical simulations using a parametric method.
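The Kohn-Vogelius idea of this abstract, comparing a "Dirichlet" and a "Neumann" auxiliary solution through an energy misfit, can be sketched on a deliberately reduced toy problem: recovering a constant conductivity in 1D rather than an obstacle in a Stokes flow. Everything below (the function name, the value a_true = 2.0) is a hypothetical illustration and not the paper's setting.

```python
import numpy as np

def kohn_vogelius_misfit(a, g):
    """Kohn-Vogelius energy misfit for -(a u')' = 0 on (0,1), u(0) = 0.
    u_D(x) = x solves the problem with the prescribed Dirichlet data
    u(1) = 1; u_N(x) = (g/a) x solves it with the measured flux g at
    x = 1. The misfit J(a) = a * (u_D' - u_N')^2 vanishes only when
    both solutions coincide, i.e. at the true conductivity."""
    uD_prime = 1.0        # gradient of the Dirichlet solution
    uN_prime = g / a      # gradient of the Neumann solution
    return a * (uD_prime - uN_prime) ** 2   # = (a - g)^2 / a

a_true = 2.0
g = a_true * 1.0          # synthetic flux measurement a * u'(1)
grid = np.linspace(0.5, 4.0, 351)
a_hat = grid[np.argmin([kohn_vogelius_misfit(a, g) for a in grid])]
```

In the paper's setting the unknown is a shape and the misfit is minimized via its shape gradient; the toy grid search above only conveys why the functional is zero exactly when the two complementary solutions agree.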

ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet

[hal-00222765] Inégalités de Calderon-Zygmund, Potentiels et Transformées de Riesz dans des Espaces avec Poids

[...]

ano.nymous@ccsd.cnrs.fr.invalid (Chérif Amrouche), Chérif Amrouche