
To obtain the highest confidence in the correctness of numerical simulation programs implementing the finite element method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. The Lax-Milgram theorem may be seen as one of these theoretical cornerstones: under some completeness and coercivity assumptions, it states the existence and uniqueness of the solution to the weak formulation of some boundary value problems. The purpose of this document is to provide the formal proof community with a very detailed pen-and-paper proof of the Lax-Milgram theorem.
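For reference, the classical statement being formalized can be written as follows (standard textbook form, not the document's formal-proof rendering):

```latex
\begin{theorem}[Lax--Milgram]
Let $H$ be a real Hilbert space, and let $a : H \times H \to \mathbb{R}$ be a
bilinear form that is continuous, $|a(u,v)| \le C\,\|u\|\,\|v\|$, and coercive,
$a(v,v) \ge \alpha\,\|v\|^2$ with $\alpha > 0$. Then, for every continuous
linear form $f \in H'$, there exists a unique $u \in H$ such that
\[
  a(u,v) = f(v) \quad \text{for all } v \in H,
\]
and this solution satisfies the a priori bound
$\|u\|_H \le \alpha^{-1}\,\|f\|_{H'}$.
\end{theorem}
```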

François Clément

Observability Gramians of diffusion equations have been recently connected to infinite Pick and Cauchy matrices. In fact, inverse or observability inequalities can be obtained after estimating the extreme eigenvalues of these structured matrices, with respect to the diffusion semi-group matrix. The purpose is hence to conduct a spectral study of a subclass of symmetric Cauchy matrices and present an algebraic way to show the desired observability results. We revisit observability inequalities for three different observation problems of the diffusion equation and show how they can be (re)stated through simple proofs.

Faker Ben Belgacem

Ill-posedness and/or ill-conditioning are features users have to deal with appropriately in the controllability of diffusion problems to obtain secure and reliable outputs. We investigate those issues in the case of a boundary Dirichlet control, in an attempt to pinpoint the origin of the troubles arising in the numerical computations and to shed some light on the difficulty of obtaining good-quality simulations. The exact controllability is severely ill-posed while, in spite of its well-posedness, the null-controllability turns out to be very badly ill-conditioned. Theoretical and numerical results are stated for the heat equation in one dimension to illustrate the specific instabilities of each problem. The main tools used here are, first, a characterization of the subspace where the HUM control lies and, second, the study of the spectrum of some structured matrices, of Pick and Löwner type, obtained from Fourier calculations on the state and adjoint equations.

Faker Ben Belgacem

We consider a mixed reaction-diffusion system describing the organic pollution in stream waters. It may be viewed as the static version of the Streeter-Phelps equations relating the biochemical oxygen demand and the dissolved oxygen, to which dispersion terms are added. In this work, we propose a mixed variational formulation and prove its well-posedness. Next, we develop two finite element discretizations of this problem and establish optimal a priori error estimates for the second discrete problem.

Faker Ben Belgacem

We complete here the isomorphism results for the Laplace operator in weighted Sobolev spaces and give some applications. Among these, we obtain inequalities similar to the Calderón-Zygmund inequality and, in particular, continuity properties of the Riesz transforms in weighted spaces. We also give properties of the Newtonian potentials of certain distributions.

Chérif Amrouche

We introduce a new algorithm of proper generalized decomposition (PGD) for parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation---in the mean parametric norm associated to the elliptic operator---of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the proper orthogonal decomposition (POD) subspaces, except that in our case the norm is parameter-dependent. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step, and we prove that the partial sums converge to the continuous solution in the mean parametric elliptic norm. We show that the standard PGD for the considered parametric problem is strongly related to the deflation algorithm introduced in this paper. This opens the possibility of computing the PGD expansion by directly solving the optimization problems that yield the optimal subspaces.
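The best-approximation property mentioned above is easiest to see in the discrete, parameter-independent setting: for snapshots collected in a matrix, the POD subspace is given by the singular value decomposition, and the Eckart-Young theorem quantifies the optimal error. Below is a minimal numpy sketch of that property in the Frobenius norm; it is a generic illustration, not the paper's parameter-dependent elliptic-norm construction, and the snapshot data are synthetic.

```python
import numpy as np

# Snapshot matrix: each column is one (hypothetical) solution sample.
rng = np.random.default_rng(0)
S = rng.standard_normal((50, 20)) @ rng.standard_normal((20, 200))

# POD modes are the left singular vectors; truncating the SVD at rank r
# gives the best rank-r approximation in the Frobenius (mean) norm.
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
r = 5
S_r = U[:, :r] @ np.diag(sv[:r]) @ Vt[:r, :]

# Eckart-Young: the optimal error equals the energy of the discarded modes.
err = np.linalg.norm(S - S_r, "fro")
optimal = np.sqrt(np.sum(sv[r:] ** 2))
print(err, optimal)  # the two values coincide
```

The deflation idea in the abstract amounts to repeating such an optimal-subspace extraction on the residual, so the partial sums converge to the full solution.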

M. Azaïez

This article concerns maximum-likelihood estimation for discrete time homogeneous nonparametric semi-Markov models with finite state space. In particular, we present the exact maximum-likelihood estimator of the semi-Markov kernel which governs the evolution of the semi-Markov chain (SMC). We study its asymptotic properties in the following cases: (i) for one observed trajectory, when the length of the observation tends to infinity, and (ii) for parallel observations of independent copies of an SMC censored at a fixed time, when the number of copies tends to infinity. In both cases, we obtain strong consistency, asymptotic normality, and asymptotic efficiency for every finite dimensional vector of this estimator. Finally, we obtain explicit forms for the covariance matrices of the asymptotic distributions.
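To fix ideas, the exact maximum-likelihood estimator of a semi-Markov kernel has the familiar empirical form: counts of jumps i → j after a sojourn of length k, normalized by the number of observed exits from state i. The sketch below is a hedged illustration; the toy trajectory, the state labels and the treatment of the censored last sojourn are assumptions for the example, not data from the article.

```python
import numpy as np

def semi_markov_kernel_mle(states, sojourns, n_states, k_max):
    """Empirical MLE of the semi-Markov kernel:
    q[i, j, k] = N_ij(k) / N_i, where N_ij(k) counts observed jumps
    i -> j after a sojourn of length k, and N_i counts exits from i."""
    N = np.zeros((n_states, n_states, k_max + 1))
    # The last sojourn is censored (no observed jump), hence sojourns[:-1].
    for (i, j), k in zip(zip(states[:-1], states[1:]), sojourns[:-1]):
        N[i, j, k] += 1
    exits = N.sum(axis=(1, 2))  # N_i
    q = np.divide(N, exits[:, None, None],
                  out=np.zeros_like(N), where=exits[:, None, None] > 0)
    return q

# Toy embedded chain: visited states and the sojourn time spent in each.
states = [0, 1, 0, 2, 0, 1]
sojourns = [2, 1, 3, 1, 2, 4]
q = semi_markov_kernel_mle(states, sojourns, n_states=3, k_max=5)
print(q[0, 1, 2])  # 2 of the 3 exits from state 0 were "-> 1 after 2 steps"
```

For each visited state, the estimated kernel sums to one over destinations and sojourn lengths, as a transition kernel must.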

Samis Trevezas

Blood flow in high static magnetic fields induces elevated voltages that disrupt the ECG signal recorded simultaneously during MRI scans for synchronization purposes. This is known as the magnetohydrodynamic (MHD) effect; it increases the amplitude of the T wave, thus hindering correct R peak detection. In this paper, we present an algorithm for extracting an efficient reference signal from an ECG contaminated by the Nuclear Magnetic Resonance (NMR) environment, one that achieves a good separation of the R wave from the MHD artifacts. The proposed signal processing method is based on sub-band decomposition using the wavelet transform, and has been tested on human and small-rodent ECG signals acquired during MRI scans at various magnetic field intensities. The results showed an almost flawless trigger generation in fields up to 4.7 Tesla during the three tested imaging sequences (GE, FSE and IRSE).

D Abi-Abdallah

A real-time algorithm for cardiac and respiratory gating, which only requires an ECG sensor, is proposed here. Three ECG electrodes are placed in such a manner that the modulation of the recorded ECG by the respiratory signal is maximal; hence, from a single signal we can achieve both cardiac and respiratory MRI gating. First, an off-line learning phase based on wavelet decomposition is run to compute an optimal QRS filter. Afterwards, on the one hand, the QRS filter is used to accomplish R peak detection and, on the other hand, a low-pass filtering process allows the retrieval of the respiration cycle, so that the image acquisition sequences are triggered by the R peaks only during the expiration phase.
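The authors' optimal QRS filter is learned off-line; as a hedged, numpy-only illustration of the underlying idea, here is a single level of Haar wavelet decomposition applied to a synthetic signal, in which sharp R-peak-like spikes concentrate in the detail sub-band. The signal shape, spike positions and wavelet choice are all assumptions for the example, not the paper's learned filter.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) sub-bands."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                          # pad to even length
        x = np.append(x, x[-1])
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    return a, d

# Synthetic "ECG": slow baseline plus two sharp spikes standing in for R peaks.
t = np.arange(256)
ecg = 0.3 * np.sin(2 * np.pi * t / 128.0)
ecg[64] += 2.0
ecg[192] += 2.0

a, d = haar_dwt(ecg)
# Sharp transients concentrate in the detail sub-band, so the largest
# detail coefficients line up with the spikes (indices halved by decimation).
peaks = np.argsort(np.abs(d))[-2:]
print(sorted((2 * peaks).tolist()))  # recovers the spike locations 64 and 192
```

A respiration-like baseline would instead survive in the approximation sub-band `a`, which is the separation the gating scheme exploits.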

D Abi-Abdallah

The magnetohydrodynamics laws govern the motion of a conducting fluid, such as blood, in an externally applied static magnetic field B0. When an artery is exposed to a magnetic field, the charged particles of the blood are deviated by the Lorentz force, thus inducing electrical currents and voltages along the vessel walls and in the neighboring tissues. Such a situation may occur in several biomedical applications: magnetic resonance imaging (MRI), magnetic drug transport and targeting, tissue engineering, etc. In this paper, we consider the steady unidirectional blood flow in a straight circular rigid vessel with non-conducting walls, in the presence of an exterior static magnetic field. The exact solution of Gold (1962) (with the induced fields not neglected) is revisited. It is shown that the integral over a cross section of the vessel of the longitudinal projection of the Lorentz force is zero, and that this result is related to the existence of current return paths, whose contributions compensate each other over the section. It is also demonstrated that the classical definition of the shear stresses cannot apply in this situation of magnetohydrodynamic flow because, due to the existence of the Lorentz force, the axisymmetry is broken.
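For orientation, the standard starting point for such flows couples the momentum balance with Ohm's law and charge conservation. The form below is the inductionless (low magnetic Reynolds number) approximation, written here as a generic reference and not quoted from the paper; Gold's 1962 solution additionally retains the induced magnetic field:

```latex
% Steady unidirectional flow u = u(y,z)\,e_x in an applied field B_0\,e_y:
\mu\,\Delta u \;-\; \frac{\partial p}{\partial x}
  \;+\; \left(\mathbf{j} \times \mathbf{B}_0\right)_x = 0,
\qquad
\mathbf{j} = \sigma \left( -\nabla \varphi + \mathbf{u} \times \mathbf{B}_0 \right),
\qquad
\nabla \cdot \mathbf{j} = 0,
```

where $\varphi$ is the electric potential and $\sigma$ the fluid conductivity; the last equation gives $\Delta\varphi = \nabla\cdot(\mathbf{u}\times\mathbf{B}_0)$, and the current return paths discussed in the abstract are the closed loops of $\mathbf{j}$.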

Agnès Drochon

Cardiac Magnetic Resonance Imaging (MRI) requires synchronization to overcome motion-related artifacts caused by the heart's contractions and the chest wall movements during respiration. Achieving good image quality necessitates combining cardiac and respiratory gating to produce, in real time, a trigger signal that sets off the consecutive image acquisitions. This guarantees that the data collection always starts at the same point of the cardiac cycle during the exhalation phase. In this paper, we present a real-time algorithm for extracting a cardiac-respiratory trigger signal using only one, adequately placed, ECG sensor. First, an off-line calculation phase, based on wavelet decomposition, is run to compute an optimal QRS filter. This filter is used, afterwards, to accomplish R peak detection, while a low-pass filtering process allows the retrieval of the respiration cycle. The algorithm's synchronization capabilities were assessed during mice cardiac MRI sessions employing three different imaging sequences and three specific wavelet functions. The prominent image enhancement provided good evidence of correct triggering. QRS detection was almost flawless for all signals. As for the respiration cycle retrieval, it was evaluated on contaminated simulated signals, which were artificially modulated to imitate respiration. The results were quite satisfactory.

Dima Abi-Abdallah

Blood flow in high static magnetic fields induces elevated voltages that contaminate the ECG signal, which is recorded simultaneously during MRI scans for synchronization purposes. This is known as the magnetohydrodynamic (MHD) effect; it increases the amplitude of the T wave, thus hindering correct R peak detection. In this paper, we inspect the MHD-induced alterations of human ECG signals recorded in a 1.5 Tesla steady magnetic field and establish a primary characterization of the induced changes using time and frequency domain analysis. We also reexamine our previously developed real-time algorithm for MRI cardiac gating and determine that, with a minor modification, this algorithm is capable of achieving perfect detection even in the presence of strong MHD artifacts.

Dima Abi Abdallah

Blood flow in a steady magnetic field has been of great interest over the past years. Many researchers have examined the effects of magnetic fields on velocity profiles and arterial pressure, and major studies have focused on steady or sinusoidal flows. In this paper we present a solution for pulsed magnetohydrodynamic blood flow with a somewhat realistic physiological pressure wave obtained using a windkessel lumped model. A pressure gradient is derived along a rigid vessel placed at the output of a compliant module which receives the ventricular outflow. Then, velocity profile and flow rate expressions are derived in the rigid vessel in the presence of a steady transverse magnetic field. As expected, the results showed flow retardation and flattening. The adaptability of our solution approach allowed a comparison with previously addressed flow cases, and the calculations showed good agreement with those well-established solutions.
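The windkessel stage feeding the rigid vessel can be sketched independently of the MHD part. Below is a generic two-element windkessel (compliance C, peripheral resistance R) driven by a half-sine ejection wave and integrated with forward Euler; all parameter values are purely illustrative and not taken from the paper.

```python
import numpy as np

# Two-element windkessel: C dP/dt = Q_in(t) - P / R
# (compliant chamber receiving the ventricular outflow, resistive outlet).
R, C = 1.0, 1.5            # peripheral resistance, compliance (arbitrary units)
T, dt = 0.8, 1e-4          # cardiac period [s], time step [s]
t = np.arange(0.0, 10 * T, dt)

# Half-sine ejection during systole (first 0.3 s of each beat), zero in diastole.
phase = t % T
Q_in = np.where(phase < 0.3, np.sin(np.pi * phase / 0.3), 0.0)

P = np.empty_like(t)
P[0] = 0.0
for n in range(len(t) - 1):            # forward Euler integration
    P[n + 1] = P[n] + dt * (Q_in[n] - P[n] / R) / C

# After the initial transient, the pressure rises during ejection and decays
# exponentially in diastole (time constant R*C), staying strictly positive.
last_beat = P[-int(round(T / dt)):]
print(last_beat.min() > 0.0, last_beat.max() < R * Q_in.max())
```

The resulting periodic pressure wave (and hence the pressure gradient it imposes on the downstream vessel) is what makes the flow "pulsed" rather than sinusoidal.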

Dima Abi Abdallah

This paper addresses a complex multi-physical phenomenon involving cardiac electrophysiology and hemodynamics. The purpose is to model and simulate a phenomenon that has been observed in MRI machines: in the presence of a strong magnetic field, the T-wave of the electrocardiogram (ECG) gets bigger, which may perturb ECG-gated imaging. This is due to a magnetohydrodynamic (MHD) effect occurring in the aorta. We reproduce this experimental observation through computer simulations on a realistic anatomy, and with a three-compartment model: inductionless magnetohydrodynamic equations in the aorta, bidomain equations in the heart and electrical diffusion in the rest of the body. These compartments are strongly coupled and solved using finite elements. Several benchmark tests are proposed to assess the numerical solutions and the validity of some modeling assumptions. Then, ECGs are simulated for a wide range of magnetic field intensities (from 0 to 20 Tesla).

Vincent Martin

This paper investigates the influence of static magnetic field exposure on blood flow. We mainly focus on steady flows in a rigid vessel and review the existing theoretical solutions, each based on some simplifying hypothesis. The results are developed, examined and compared, showing how the magnetohydrodynamic interactions reduce the flow rate and generate electric voltages across the vessel walls. These effects are found to be moderate for magnetic fields such as those used in magnetic resonance imaging. In this case, a very simplified solution, formulated by neglecting the wall conductivity as well as the induced magnetic fields, is proven suitable.

Dima Abi Abdallah

Electron back-scattering diffraction (EBSD) can be successfully performed on SiC coatings for HTR fuel particles. EBSD grain maps obtained from thick and thin unirradiated samples are presented, along with pole figures showing textures and a chart showing the distribution of grain aspect ratios. This information is of great interest, as it contributes to improving the process parameters and ensuring the reproducibility of the coatings.

D. Helary

Electron probe microanalysis (EPMA) makes it possible to quantify, with great accuracy, the elemental concentrations of samples of unknown composition. It allows, for example, quantifying the actinides present in fresh or irradiated nuclear fuels, supporting the management of nuclear waste, or dating certain rocks. Unfortunately, these quantitative analyses are not always feasible because reference standards are unavailable for some actinides. To overcome this difficulty, a so-called "standardless" analysis method can be employed by means of virtual standards. The latter are obtained from empirical formulas or from calculations based on theoretical models. However, these calculations require the knowledge of physical parameters that are generally poorly known, as is the case for X-ray production cross sections. Accurate knowledge of these cross sections is required in many applications, such as particle transport codes and Monte Carlo simulations. These computational codes are widely used in medicine, particularly in medical imaging and in electron-beam treatments. In astronomy, these data are used in simulations to predict the compositions of stars and galactic clouds as well as the formation of planetary systems. In this work, the production cross sections of the L and M lines of lead, thorium and uranium were measured by electron impact on self-supporting thin targets with thicknesses ranging from 0.2 to 8 nm. The experimental results were compared with the theoretical predictions of ionization cross sections calculated with the distorted-wave Born approximation (DWBA) and with the predictions of analytical formulas used in practical applications.
The ionization cross sections were converted into X-ray production cross sections using atomic relaxation parameters taken from the literature. The theoretical results of the DWBA model are in excellent agreement with the experimental results. This confirms the predictions of this model and validates its use for the computation of virtual standards. The predictions of this model were integrated into the Monte Carlo code PENELOPE in order to compute the X-ray intensity produced by pure actinide standards. The calculations were performed for the elements with atomic numbers 89 ≤ Z ≤ 99 and for accelerating voltages ranging from the ionization threshold up to 40 kV, in steps of 0.5 kV. For practical use, the intensities computed for the most intense L and M lines were gathered in a database. The predictions of the virtual standards thus obtained were compared with measurements performed on samples of known composition (U, UO2, ThO2, ThF4, PuO2…) and with data acquired during previous measurement campaigns. The quantification of actinides using these virtual standards showed good agreement with the expected results. This confirms the reliability of the virtual standards developed here and demonstrates that the quantification of actinides by electron probe microanalysis is feasible without actinide standards and with a good level of confidence.

Aurélien Moy

In this work, we consider singular perturbations of the boundary of a smooth domain. We describe the asymptotic behavior of the solution uε of a second order elliptic equation posed in the perturbed domain with respect to the size parameter ε of the deformation. We are also interested in the variations of the energy functional. We propose a numerical method for the approximation of uε based on a multiscale superposition of the unperturbed solution u0 and a profile defined in a model domain. We conclude with numerical results.

Marc Dambrine

Ventcel boundary conditions are second-order differential conditions that appear in asymptotic models. Like Robin boundary conditions, they lead to well-posed variational problems under a sign condition on the coefficient. This condition is satisfied in the physical situations usually considered. Nevertheless, situations where it is violated have appeared in several recent works where absorbing boundary conditions or equivalent boundary conditions on rough surfaces are sought for numerical purposes. The well-posedness of such problems was recently investigated: up to a countable set of parameters, existence and uniqueness of the solution to the Ventcel boundary value problem hold without the sign condition. However, the values to be avoided depend on the domain where the boundary value problem is set. In this work, we address the question of the persistence of the solvability of the boundary value problem under domain deformation.

Marc Dambrine

We consider the inverse conductivity problem with one measurement for the equation $\mathrm{div}((\sigma_1+(\sigma_2-\sigma_1)\chi_D)\nabla u)=0$, determining the unknown inclusion $D$ included in $\Omega$. We suppose that $\Omega$ is the unit disk of $\mathbb{R}^2$. Using tools from conformal mapping, elementary Fourier analysis and the action of certain quasi-conformal mappings on the Sobolev space $H^{1/2}(S^1)$, we show how to approximate the Dirichlet-to-Neumann map when the original inclusion $D$ is an $\varepsilon$-approximation of a disk. This enables us to give some uniqueness and stability results.
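In the unperturbed, concentric reference case (an inclusion that is a disk of radius $\rho$ centered at the origin, with conductivity $\sigma_2$ inside and $\sigma_1$ outside), the Dirichlet-to-Neumann map diagonalizes in the Fourier basis of $S^1$. This classical computation (standard material, not quoted from the paper) is the natural baseline for such an $\varepsilon$-perturbation argument:

```latex
\Lambda\!\left(e^{ik\theta}\right)
  = \sigma_1\,|k|\,
    \frac{1 - \mu\,\rho^{2|k|}}{1 + \mu\,\rho^{2|k|}}\;e^{ik\theta},
\qquad
\mu = \frac{\sigma_1 - \sigma_2}{\sigma_1 + \sigma_2},
```

so that as $\rho \to 0$ the eigenvalues reduce to $\sigma_1 |k|$, the Dirichlet-to-Neumann eigenvalues of the homogeneous disk.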

Marc Dambrine

In the present work, we consider the inverse conductivity problem of recovering an inclusion with one measurement. First, we use conformal mapping techniques to determine the location of the anomaly and estimate its size. We thus get a good initial guess for a quasi-Newton-type method. The inverse problem is treated from the shape optimization point of view. We give a rigorous proof of the existence of the shape derivative of the state function and of the shape functionals. We consider both least-squares fitting and the Kohn-Vogelius functional. For the numerical implementation, we use a parametrization of shapes coupled with a boundary element method. Several numerical examples indicate the superiority of the Kohn-Vogelius functional over least-squares fitting.

Lekbir Afraites

The level set method has become widely used in shape optimization, where it allows a popular implementation of the steepest descent method. Once coupled with a weak-material approximation, it requires only a single mesh, leading to very efficient and cheap numerical schemes in the optimization of structures. However, it has some limitations and cannot be applied in every situation. This work aims at exploring one such limitation. We estimate the systematic error committed by using the weak-material approximation and, on a model case, show through a second-order analysis of the objective function that it amplifies instabilities.

Marc Dambrine

This paper is devoted to the analysis of a second-order method for recovering the \emph{a priori} unknown shape of an inclusion $\omega$ inside a body $\Omega$ from boundary measurements. This inverse problem - known as electrical impedance tomography - has many important practical applications and has therefore attracted much attention in recent years. However, to the best of our knowledge, no work has yet considered a second-order approach for this problem. This paper aims to fill that void: we investigate the existence of the second-order derivative of the state $u$ with respect to perturbations of the shape of the interface $\partial\omega$, then we choose a cost function in order to recover the geometry of $\partial \omega$ and derive the expression of the derivatives needed to implement the corresponding Newton method. We then investigate the stability of the process and explain why this inverse problem is severely ill-posed by proving the compactness of the Hessian at the global minimizer.

Lekbir Afraites

We consider the question of giving an upper bound for the first nontrivial eigenvalue of the Wentzell-Laplace operator of a domain $\Omega$, involving only geometric information. We provide such an upper bound by generalizing Brock's inequality concerning Steklov eigenvalues, and we conjecture that balls maximize the Wentzell eigenvalue in a suitable class of domains, which would improve our bound. To support this conjecture, we prove that balls are critical domains for the Wentzell eigenvalue in any dimension, and that they are local maximizers in dimensions 2 and 3, using an order-two sensitivity analysis. We also provide some numerical evidence.

Marc Dambrine

This paper deals with optimal input design for parameter estimation in a bounded-error context. Uncertain controlled nonlinear dynamical models, when the input can be parametrized by a finite number of parameters, are considered. The main contribution of this paper concerns criteria for obtaining optimal inputs in this context. Two input design criteria are proposed and analysed. They involve sensitivity functions. The first criterion requires the inversion of the Gram matrix of sensitivity functions. The second one does not require this inversion and is then applied for parameter estimation of a model taken from the aeronautical domain. The estimation results obtained using an optimal input are compared with those obtained with an input optimized in a more classical context (Gaussian measurement noise and parameters a priori known to belong to some boxes). These results highlight the potential of optimal input design in a bounded-error context.
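The two flavors of criterion can be illustrated on hypothetical sensitivity functions: one criterion needs the inverse of the sensitivity Gram matrix (an A-optimality-style quantity), while an inversion-free alternative works with the spectrum of the Gram matrix directly (an E-optimality-style quantity). The model, time grid and criteria below are generic stand-ins, not the paper's bounded-error criteria.

```python
import numpy as np

# Hypothetical sensitivity functions s_i(t) = d y(t; theta) / d theta_i,
# sampled on a time grid for a 3-parameter model (illustrative only).
t = np.linspace(0.0, 5.0, 200)
S = np.stack([np.exp(-t), t * np.exp(-t), np.sin(t)], axis=1)  # shape (200, 3)

G = S.T @ S * (t[1] - t[0])      # Gram matrix of the sensitivity functions

# Criterion 1 (needs the inverse): A-optimality, trace of G^{-1};
# an optimal input would minimize this over the input parameters.
crit_inv = np.trace(np.linalg.inv(G))
# Criterion 2 (inversion-free): E-optimality, smallest eigenvalue of G;
# an optimal input would maximize it instead, avoiding the inversion.
crit_eig = np.linalg.eigvalsh(G).min()

print(crit_inv > 0.0, crit_eig > 0.0)  # both well defined: G is SPD here
```

In an input-design loop, the input parametrization would enter through the sensitivities, and the chosen criterion would be optimized over those input parameters.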

Carine Jauberthie

The main purpose of this paper is to investigate the strong approximation of the integrated empirical process. More precisely, we obtain the exact rate of the approximations by a sequence of weighted Brownian bridges and a weighted Kiefer process. Our arguments are based in part on the results of Komlós et al. (1975). Applications include two-sample testing procedures together with change-point problems. We also consider the strong approximation of the integrated empirical process when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial sum process representation of the integrated empirical process. Reference: Komlós, J., Major, P. and Tusnády, G. (1975). An approximation of partial sums of independent RV's and the sample DF. I. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 32, 111-131.
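The object being approximated is built from the uniform empirical process α_n(t) = √n (F_n(t) − t). A quick simulation check (illustrative only, not from the paper) confirms that its pointwise variance matches that of the limiting Brownian bridge, t(1 − t):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, t0 = 2000, 4000, 0.3

# alpha_n(t0) = sqrt(n) * (F_n(t0) - t0) for n i.i.d. uniforms, replicated.
u = rng.random((reps, n))
alpha = np.sqrt(n) * ((u <= t0).mean(axis=1) - t0)

# Strong approximation by a Brownian bridge B(t): Var B(t0) = t0 * (1 - t0).
print(alpha.var())  # close to 0.3 * 0.7 = 0.21
```

The strong approximation results quantify much more than this single marginal, namely the almost-sure rate at which the whole path can be coupled with a Brownian bridge or a Kiefer process.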

Sergio Alvarez-Andrade

Recently, several authors have considered finite mixture models with semi-/non-parametric component distributions. Identifiability of such model parameters is generally not obvious, and when it holds, inference methods are rather specific to the mixture model under consideration. In this paper we propose a generalization of the EM algorithm to semiparametric mixture models. Our approach is methodological and can be applied to a wide class of semiparametric mixture models. The behavior of the EM-type estimators we propose is studied numerically through several Monte Carlo experiments and by comparison with alternative methods from the literature. In addition to these numerical experiments, we provide applications to real data showing that our estimation method behaves well, is fast, and is easy to implement.
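For contrast with the semiparametric extension proposed here, the fully parametric EM that it generalizes fits a finite mixture by alternating responsibilities (E step) and weighted parameter updates (M step). Below is a minimal two-component Gaussian version with known unit variances; all numbers are illustrative and the semiparametric variant would replace the parametric component densities with nonparametric estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic sample: 30% from N(-2, 1), 70% from N(3, 1).
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

pi_, mu = 0.5, np.array([-1.0, 1.0])     # initial guesses
for _ in range(200):
    # E step: posterior probability that each point belongs to component 0.
    d0 = np.exp(-0.5 * (x - mu[0]) ** 2)
    d1 = np.exp(-0.5 * (x - mu[1]) ** 2)
    r0 = pi_ * d0 / (pi_ * d0 + (1 - pi_) * d1)
    # M step: update the mixing weight and the means from responsibilities.
    pi_ = r0.mean()
    mu = np.array([(r0 * x).sum() / r0.sum(),
                   ((1 - r0) * x).sum() / (1 - r0).sum()])

print(pi_, mu)  # close to 0.3 and (-2, 3)
```

The EM-type algorithms of the paper keep this alternating structure while estimating a component distribution nonparametrically inside the M step.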

Laurent Bordes

This paper considers two different methods for the analysis of the identifiability of nonlinear controlled dynamical systems. The corresponding identifiability definitions are not equivalent: one is based on the construction of an input-output ideal, the other on the similarity transformation theorem. Our aim is to develop algorithms which give identifiability results from both approaches. Differential algebra theory allows the realization of such a project. In order to state these algorithms, new results of differential algebra must be proved. The algorithms are then implemented in a symbolic computation language.

Lilianne Denis-Vidal

We present a first version of a software package dedicated to an application of a classical nonlinear control theory problem to the study of compartmental models in biology. The software is being developed on top of a new free computer algebra library dedicated to differential and algebraic elimination.

François Boulier

One of the important challenges for the decommissioning of the damaged reactors of the Fukushima Daiichi Nuclear Power Plant is the safe retrieval of the fuel debris, or corium. It is especially important to investigate the cutting conditions both in air and underwater at different water levels. Concerning the cutting techniques, the laser technique is well suited to cutting a material such as corium, which has an irregular shape and a heterogeneous composition. A French consortium (ONET Technologies, CEA and IRSN) is being subsidized by the Japanese government to carry out R&D related to the laser cutting of Fukushima Daiichi fuel debris and to dust collection technology. Debris simulants have been manufactured on the PLINIUS platform to represent Molten Core Concrete Interaction as estimated from Fukushima Daiichi calculations. In this simulant, uranium is replaced by hafnium and the major fission products are replaced by their natural isotopes. During laser cutting experiments in the DELIA facility, aerosols were collected using filters and impactors, and the collected aerosols have been analyzed. Both chemical analyses (dissolution + ICP-MS and ICP-AES) and microscopic analyses (SEM-EDS) will be presented and discussed. These data provide insights into the expected dust releases during cutting and can be converted to provide radioactivity estimates. They have also been successfully compared to thermodynamic calculations with the NUCLEA database.

Christophe Journeau

We present an environment for the automatic generation of simulations entirely based on XML technologies. The proposed description language makes it possible to describe mathematical objects such as systems of differential equations, systems of nonlinear equations, partial differential equations in dimension 2, or simply curves and surfaces. It also allows describing the parameters on which these objects depend. This language is software-independent and therefore guarantees the durability of the authors' work as well as its sharing and reuse. We also describe the architecture of a "compilation chain" that transforms these XML files into scripts and runs them in the Scilab software.

Stéphane Mottelet

This paper deals with optimal input design for parameter estimation in a bounded-error context. Uncertain controlled nonlinear dynamical models, when the input can be parametrized by a finite number of parameters, are considered. The main contribution of this paper concerns criteria for obtaining optimal inputs in this context. Two input design criteria are proposed and analysed. They involve sensitivity functions. The first criterion requires the inversion of the Gram matrix of sensitivity functions. The second one does not require this inversion and is then applied for parameter estimation of a model taken from the aeronautical domain. The estimation results obtained using an optimal input are compared with those obtained with an input optimized in a more classical context (Gaussian measurement noise and parameters a priori known to belong to some boxes). These results highlight the potential of optimal input design in a bounded-error context.

Qiaochu Li

It has been proven that the knowledge of an accurate approximation of the Dirichlet-to-Neumann (DtN) map is useful for a large range of applications in wave scattering problems. We are concerned in this paper with the construction of an approximate local DtN operator for time-harmonic elastic waves. The main contributions are the following. First, we derive exact operators using Fourier analysis in the case of an elastic half-space. These results are then extended to a general three-dimensional smooth closed surface by using a local tangent plane approximation. Next, a regularization step improves the accuracy of the approximate DtN operators and a localization process is proposed. Finally, a first application is presented in the context of the On-Surface Radiation Conditions method. The efficiency of the approach is investigated for various obstacle geometries at high frequencies.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

The fast multipole method is an efficient technique to accelerate the solution of large scale 3D scattering problems with boundary integral equations. However, the fast multipole accelerated boundary element method (FM-BEM) is intrinsically based on an iterative solver. It has been shown that the number of iterations can significantly hinder the overall efficiency of the FM-BEM. The derivation of robust preconditioners for the FM-BEM is now essential to increase the size of the problems that can be considered. The main constraint in the context of the FM-BEM is that the complete system is not assembled, in order to reduce computational times and memory requirements. Analytic preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of discretization. The main contribution of this paper is to combine an approximate adjoint Dirichlet-to-Neumann (DtN) map as an analytic preconditioner with an FM-BEM solver to treat Dirichlet exterior scattering problems in 3D elasticity. The approximations of the adjoint DtN map are derived using tools proposed in [40]. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). We provide various numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be completely independent of the number of degrees of freedom and of the frequency for convex obstacles.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

Faults and geological barriers can drastically affect the flow patterns in porous media. Such fractures can be modeled as interfaces that interact with the surrounding matrix. We propose a new technique for estimating the location and hydrogeological properties of a small number of large fractures in a porous medium from given distributed pressure or flow data. At each iteration, the algorithm builds a short list of candidates by comparing fracture indicators. These indicators quantify, at first order, the decrease of a data misfit function; they are cheap to compute. Then, the best candidate is selected by minimizing the objective function for each candidate in the list. Optimally driven by the fit to the data, the approach has the great advantage of requiring neither remeshing nor shape derivation. The stability of the algorithm is demonstrated on a series of numerical examples representative of typical situations.

ano.nymous@ccsd.cnrs.fr.invalid (Hend Ben Ameur), Hend Ben Ameur

We are interested in an inverse problem of recovering the position of a pollutant or contaminant source in stream water. Advection, dispersive transport and reaction of the solute are commonly modeled by a linear or non-linear parabolic equation. In former works, it was established that a point-wise source is fully identifiable from measurements recorded by a couple of sensors placed one upstream and the other downstream of the pollution source. The observability question we try to solve here is related to the redundancy of sensors when additional information is available on the point-wise source. It may occur, in hydrological engineering, that the intensity of the pollutant is known in advance. In this case, we pursue an identifiability result for a moving source location using a single observation. The chief mathematical tools to prove identifiability are the unique continuation theorem together with an appropriate maximum principle for the parabolic equation under investigation.

ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem

The aim of this contribution is to use a preconditioned Richardson iterative method as a regularization of the data completion problem. The problem is known to be exponentially ill-posed, which makes its numerical treatment a hard task. The approach we present relies on the Steklov-Poincaré variational framework introduced in [Inverse Problems, vol. 21, 2005]. The resulting algorithm turns out to be equivalent to the Kozlov-Maz'ya-Fomin method in [Comp. Math. Phys., vol. 31, 1991]. We conduct a comprehensive analysis of suitable stopping rules that provides optimal estimates under the General Source Condition on the exact solution. Some numerical examples are finally discussed to highlight the performance of the method.

ano.nymous@ccsd.cnrs.fr.invalid (Duc Thang Du), Duc Thang Du

The inverse problem under investigation consists of the boundary data completion in a deoxygenation-reaeration model in stream waters. The one-dimensional transport model we deal with is based on the one introduced by Streeter and Phelps, augmented by Taylor dispersion terms. The missing boundary condition is the load and/or the flux of the biochemical oxygen demand indicator at the outfall point. The counterpart is the availability of two boundary conditions on the dissolved oxygen tracer at the same point. The major consequence of these non-standard boundary conditions is that the dispersive transport equations on both oxygen tracers are strongly coupled and the resulting system becomes ill-posed. The main purpose is a finite element space-discretization of the variational problem written in a non-symmetric mixed form. Combining analytical calculations, numerical computations and theoretical justifications, we try to elucidate the characteristics related to the ill-posedness of this data completion dynamical problem and to understand its mathematical structure.

ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem

Summary of the paper "A Coq formal proof of the Lax-Milgram Theorem", CPP 2017.

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

The Finite Element Method is a widely-used method to solve numerical problems coming for instance from physics or biology. To obtain the highest confidence in the correctness of numerical simulation programs implementing the Finite Element Method, one has to formalize the mathematical notions and results that make it possible to establish the soundness of the method. The Lax–Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states existence and uniqueness of the solution to the weak formulation of some boundary value problems. This article presents the full formal proof of the Lax–Milgram theorem in Coq. It requires many results from linear algebra, geometry, functional analysis, and Hilbert spaces.
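For reference, the statement being formalized can be written in its standard textbook form as follows (this is a classical rendering, not the Coq one):

```latex
% Lax--Milgram theorem (standard statement).
\textbf{Theorem.} Let $H$ be a real Hilbert space, $a : H \times H \to \mathbb{R}$
a bilinear form, and $f \in H'$ a continuous linear form. Assume that $a$ is
bounded: there exists $C > 0$ such that $|a(u,v)| \le C\,\|u\|\,\|v\|$ for all
$u, v \in H$; and coercive: there exists $\alpha > 0$ such that
$a(u,u) \ge \alpha\,\|u\|^2$ for all $u \in H$. Then there exists a unique
$u \in H$ such that
\[
  a(u,v) = f(v) \quad \text{for all } v \in H,
  \qquad\text{and moreover}\qquad
  \|u\| \le \tfrac{1}{\alpha}\,\|f\|_{H'}.
\]
```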

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

In various fields of research, modeling has become a major tool for studying and predicting the possible evolution of a system, in particular in epidemiology. Indeed, owing to the globalization of our societies and the genetic mutation of certain diseases or transmission vectors, several epidemics have appeared in recent years in regions not previously affected by such catastrophes. One can name, for example, the chikungunya epidemic on Réunion Island in 2005-2006. In this paper, a model describing the propagation of chikungunya to the human population is taken from \cite{Moulay2010}. In such models, some parameters are not directly accessible from experiments and have to be estimated numerically by an iterative algorithm. However, before searching for their values, it is essential to verify the identifiability of the model parameters, to assess whether the set of unknown parameters can be uniquely determined from the data. Indeed, this study ensures that numerical procedures can be successful; if identifiability is not ensured, some supplementary data have to be added or the set of admissible data has to be reduced. A first identifiability study was done in \cite{Moulay2012}, considering that the number of eggs can be easily counted. However, after discussions with epidemiologists, it appears that it is the number of larvae that can be estimated week by week. Thus, this paper proposes an identifiability study under this assumption; thanks to the integration of one of the model equations, simpler equations linking the inputs, outputs and parameters are obtained, permitting a simpler identifiability study.

ano.nymous@ccsd.cnrs.fr.invalid (Zhu Shousheng), Zhu Shousheng

In this paper, we study a Chikungunya epidemic transmission model which describes an epidemic disease transmitted by Aedes mosquitoes. This model includes the spatial mobility of humans, which is probably a factor that has influenced the re-emergence of several diseases. Assuming that the spatial mobility of humans is random and can be described by Brownian motion, an original model including a reaction-diffusion system is proposed. Since the displacement of mosquitoes is limited to a few meters, their mobility can be neglected compared with that of humans. Therefore, the complete model is composed of a reaction-diffusion system coupled with ordinary differential equations (ODEs). In this paper, we prove the existence, uniqueness, positivity and boundedness of the global solution of the model and give some numerical simulations.

ano.nymous@ccsd.cnrs.fr.invalid (Shousheng Zhu), Shousheng Zhu

In recent years, several epidemics have been reported, in particular the chikungunya epidemic on Réunion Island. For predicting its possible evolution, new models describing the transmission of chikungunya to the human population have been proposed and studied in the literature. In such models, some parameters are not directly accessible from experiments, and iterative algorithms can be used to estimate them. However, before searching for their values, it is essential to verify the identifiability of the model parameters, to assess whether the set of unknown parameters can be uniquely determined from the data. Identifiability is thus particularly important in modeling: if the model is not identifiable, numerical procedures can fail, and in that case some supplementary data have to be added or the set of admissible data has to be reduced. This paper therefore studies the identifiability of the models proposed by (Moulay, Aziz-Alaoui & Cadivel 2011).

ano.nymous@ccsd.cnrs.fr.invalid (Djamila Moulay), Djamila Moulay

[...]

ano.nymous@ccsd.cnrs.fr.invalid (Nathalie Verdière), Nathalie Verdière

The aim of this paper is to identify the location and the flow rate of a pollution source in a river by measuring the concentration of a substrate that provides significant information. This concentration is assumed to be measured at two points of the river. The simplest model of such a problem consists of a parabolic partial differential equation. We propose to discretize this PDE in space, which leads to a system of differential equations in time. Then, the analysis of identifiability is carried out using an approach based on differential algebra. A numerical parameter estimation is inferred from this procedure, which gives a first parameter estimate without a priori knowledge of the unknown parameters.
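As a hedged sketch (not the authors' code), the space discretization described above can be illustrated on a 1D advection-dispersion equation with a point source; all coefficients, grid sizes and the time integrator below are illustrative assumptions:

```python
import numpy as np

# Method of lines: semi-discretizing the parabolic PDE in space turns it
# into a system of ODEs in time. Values below are purely illustrative.
nx, L = 100, 10.0
dx = L / (nx - 1)
D, v = 0.1, 0.5            # dispersion and advection coefficients (assumed)
src = np.zeros(nx)
src[nx // 3] = 1.0 / dx    # point source modelled on one grid cell

def rhs(c):
    # Centered diffusion, upwind advection (v > 0), homogeneous Dirichlet ends.
    dcdt = np.zeros_like(c)
    dcdt[1:-1] = (D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
                  - v * (c[1:-1] - c[:-2]) / dx
                  + src[1:-1])
    return dcdt

# Explicit Euler in time, with a step respecting dt < dx^2 / (2 D).
c = np.zeros(nx)
dt = 0.4 * dx**2 / (2 * D)
for _ in range(2000):
    c = c + dt * rhs(c)
# c now holds the concentration profile along the river.
```

In the identifiability analysis, the right-hand side of this ODE system is what the differential-algebra machinery manipulates symbolically.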

ano.nymous@ccsd.cnrs.fr.invalid (Nathalie Verdiere), Nathalie Verdiere

The aim of our work is to reconstruct an inclusion immersed in a fluid flowing in a larger bounded domain via a boundary measurement. Here the fluid motion is assumed to be governed by the Stokes equations. We study the inverse problem with the tools of shape optimization, by minimizing a Kohn-Vogelius type cost functional. We first characterize the gradient of this cost functional in order to carry out a numerical resolution. Then, in order to study the stability of this problem, we give the expression of the shape Hessian. We show the compactness of the Riesz operator corresponding to this shape Hessian at a critical point, which explains why the inverse problem is ill-posed. Regularization methods are therefore needed to solve this problem numerically. We illustrate these general results by explicit computations of the shape Hessian in some particular geometries. In particular, we solve the Stokes equations explicitly in a concentric annulus. Finally, we present some numerical simulations using a parametric method.

ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet

We study the stability of some critical (or equilibrium) shapes in the minimization problem of the energy dissipated by a fluid (i.e. the drag minimization problem) governed by the Stokes equations. We first compute the shape derivative up to the second order, then provide a sufficient condition for the shape Hessian of the energy functional to be coercive at a critical shape. Under this condition, the existence of such a local strict minimum is then proved using a precise upper bound for the variations of the second order shape derivative of the functional with respect to the coercivity and differentiability norms. Finally, for smooth domains, a lower bound of the variations of the drag is obtained in terms of the measure of the symmetric difference of domains.

ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet

We aim to reconstruct an inclusion ω immersed in a perfect fluid flowing in a larger bounded domain Ω via boundary measurements on ∂Ω. The obstacle ω is assumed to have a thin layer and is therefore modeled using generalized boundary conditions (precisely, Ventcel boundary conditions). We first obtain an identifiability result (i.e. the uniqueness of the solution of the inverse problem) for annular configurations through explicit computations. Then, this inverse problem of reconstructing ω is studied with the tools of shape optimization, by minimizing a least squares type cost functional. We prove the existence of the shape derivatives with respect to the domain ω and characterize the gradient of this cost functional in order to carry out a numerical resolution. We also characterize the shape Hessian and prove that this inverse obstacle problem is unstable in the following sense: the functional is degenerate for highly oscillating perturbations. Finally, we present some numerical simulations in order to confirm and extend our theoretical results.

ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet

Karhunen-Loève decompositions (KLD), or proper orthogonal decompositions (POD), of bivariate functions are revisited in this work. We first investigate the truncation error for regular functions and try to improve and sharpen bounds found in the literature. It turns out, however, that (KL)-series expansions are in fact more sensitive to the ability of the fields to be approximated well by a small sum of products of functions of separated variables. We consider this very issue for some interesting fields that are solutions of partial differential equations, such as the transient heat problem and Poisson's equation. The main tool for stating approximation bounds is linear algebra. We show how the singular value decomposition underlying the (KL)-expansion is connected to the spectrum of some Gram matrices. Deriving estimates on the truncation error is thus strongly tied to the spectral properties of these Gram matrices, which are structured matrices with low displacement ranks.
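The connection between the (KL)-truncation error and singular values can be illustrated with a small numerical sketch (the field and grid sizes below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative bivariate field sampled on a grid: F(t, x) = exp(-t * x) is
# smooth and non-separable, so its singular values decay fast and a short
# (KL)-type sum of separated products captures it well.
t = np.linspace(0.0, 1.0, 50)
x = np.linspace(0.0, 1.0, 60)
F = np.exp(-np.outer(t, x))

# The discrete analogue of the KL/POD expansion is the SVD.
U, s, Vt = np.linalg.svd(F, full_matrices=False)

def truncate(rank):
    """Best rank-`rank` approximation in the Frobenius norm (Eckart-Young)."""
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# The truncation error equals the tail of the singular values.
rank = 3
err = np.linalg.norm(F - truncate(rank))
tail = np.sqrt(np.sum(s[rank:] ** 2))
```

The quality of a short truncation is thus governed entirely by how fast the singular values decay, which is the spectral question the paper addresses through Gram matrices.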

ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez

The invariance principle for M/M/1 and M/M/∞ queues states that, when properly renormalized (i.e. rescaled and centered), the Markov processes which describe these systems both converge to a diffusive limit when the driving parameters go to infinity: a killed Brownian motion in the former case and an Ornstein-Uhlenbeck process in the latter. The purpose of this paper is to assess the rate of convergence in these diffusion approximations. To this end, we extend to these contexts the functional Stein method introduced for the Brownian approximation of Poisson processes.

ano.nymous@ccsd.cnrs.fr.invalid (Eustache Besançon), Eustache Besançon

We propose a model for a medical device, called a stent, designed for the treatment of cerebral aneurysms. The stent consists of a grid, immersed in the blood flow and located at the inlet of the aneurysm. It aims at promoting a clot within the aneurysm. The blood flow is modelled by the incompressible Navier-Stokes equations and the stent by a dissipative surface term. We propose a stabilized finite element method for this model and analyse its convergence in the case of the Stokes equations. We present numerical results for academic test cases, and on a realistic aneurysm obtained from medical imaging.

ano.nymous@ccsd.cnrs.fr.invalid (Miguel Angel Fernández), Miguel Angel Fernández

The purpose is a finite element approximation of the heat diffusion problem in composite media, with non-linear contact resistance at the interfaces. As already explained in [Journal of Scientific Computing, vol. 63, 478-501 (2015)], hybrid dual formulations are well fitted to complicated composite geometries and provide tractable approaches to variationally express the jumps of the temperature. The finite element spaces are standard. Interface contributions are added to the variational problem to account for the contact resistance. This is an important advantage for developers of computational codes. We undertake the analysis of the non-linear heat problem for a large range of contact resistances and investigate its discretization by hybrid dual finite element methods. Numerical experiments are presented at the end to support the theoretical results.

ano.nymous@ccsd.cnrs.fr.invalid (F Ben Belgacem), F Ben Belgacem

We consider a model for fluid flow in a porous medium with a fracture. In this model, the fracture is represented as an interface between subdomains, where specific equations have to be solved. In this article we analyse the discrete problem, assuming that the fracture mesh and the subdomain meshes are completely independent, but that the geometry of the fracture is respected. We show that despite this non-conformity, first order convergence is preserved with the lowest order Raviart-Thomas(-Nedelec) mixed finite elements. Numerical simulations confirm this result.

ano.nymous@ccsd.cnrs.fr.invalid (Najla Frih), Najla Frih

We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most physical models used by engineers derive from various additions and corrections that enhance the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor's dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. The particularity is that both Neumann and Dirichlet boundary conditions are available on the DO tracer while the BOD density is free of any condition. In fact, for real-life concerns, measurements on the dissolved oxygen are easy to obtain and to save. On the contrary, collecting data on the biochemical oxygen demand is a sensitive task and turns out to be a long-time process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem worth studying in its own right, but it can also be a mandatory step in other applications, such as identifying the location of pollution sources. The non-standard boundary conditions generate two difficulties on mathematical and computational grounds: they set up a severe coupling between both equations, and they cause ill-posedness of the data reconstruction problem. Existence and stability fail. Identifiability is therefore the only positive result one can seek; it is the central purpose of the paper. We end with some computational experiments to assess the capability of the mixed finite element method in recovering the missing data (on the biochemical oxygen demand).

ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez

We consider an equation that models the diffusion of temperature in a graphite foam containing salt capsules. The transition conditions on the temperature between the graphite and the salt must be handled correctly. We carry out the analysis of this model and prove that it is well-posed. We then propose a finite element discretization and perform the a priori analysis of the discrete problem. Some numerical experiments confirm the interest of this approach.

ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem

In many applications, derivatives have to be estimated from a noisy measured signal. In this paper, an original method based on a distributional approach is presented. Its interest is to transfer the derivatives onto infinitely differentiable test functions, so that the estimation of the derivatives is done only from the signal itself. Besides, this method gives explicit formulae leading to fast computation. For all these reasons, it is an efficient method in the case of noisy signals, as will be confirmed in several examples.
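A minimal sketch of the underlying idea, assuming a bump test function and a synthetic signal (not the authors' explicit formulae): integration by parts transfers the derivative onto the smooth test function, ⟨f′, φ⟩ = −⟨f, φ′⟩, so no difference of noisy samples is ever taken.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 2001)
dt = t[1] - t[0]
# Noisy signal whose true derivative at 0 is 2*pi (illustrative choice).
f = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

# Smooth, compactly supported test function: a bump of half-width h.
h = 0.1
mask = np.abs(t) < h
phi = np.zeros_like(t)
phi[mask] = np.exp(-1.0 / (1.0 - (t[mask] / h) ** 2))
phi /= np.sum(phi) * dt  # normalize to unit mass

# <f', phi> = -<f, phi'>: the derivative acts on phi, not on the noise.
phi_prime = np.gradient(phi, dt)
est = -np.sum(f * phi_prime) * dt  # smoothed estimate of f'(0)

# By contrast, a direct finite difference amplifies the noise heavily.
naive = np.gradient(f, dt)
```

The estimate is slightly smoothed (the bump averages the derivative over its support), but it is stable, whereas the pointwise finite difference is dominated by noise.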

ano.nymous@ccsd.cnrs.fr.invalid (Nathalie Verdière), Nathalie Verdière

The aim of this article is to explore the possibility of using a family of fixed finite element shape functions to solve a Dirichlet boundary value problem with an alternative variational formulation. The domain is embedded in a bounding box and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or for moving interfaces. This is not a new idea, but we analyze here a special approach. The main difficulty of the approach is that the associated quadratic form is not coercive, so an inf-sup condition has to be checked. In dimension one, we prove that this formulation is well posed and we provide error estimates. Nevertheless, our proof relies on explicit computations and is limited to that case, and we give numerical evidence in dimension two that the formulation does not provide a reliable method. We first add a regularization through a Nitsche term and observe that some instabilities still remain. We then introduce and justify a geometrical regularization. A reliable method is obtained using both regularizations.

ano.nymous@ccsd.cnrs.fr.invalid (Gaël Dupire), Gaël Dupire

The aim of this article is to explore the possibility of using a family of fixed finite element shape functions that does not match the domain to solve a boundary value problem with a Dirichlet boundary condition. The domain is embedded in a bounding box and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or for moving interfaces. The Nitsche method has been intensively applied in this setting. However, it is weighted with the mesh size h and therefore corresponds to a purely discrete point of view, with no interpretation in terms of a continuous variational approach associated with a boundary value problem. In this paper, we introduce an alternative to the Nitsche method that is associated with a continuous bilinear form. This extension has strong restrictions: it requires more regularity on the data than the usual method. We prove the well-posedness of our formulation and derive error estimates. We provide numerical comparisons with the Nitsche method.

ano.nymous@ccsd.cnrs.fr.invalid (Jean-Paul Boufflet), Jean-Paul Boufflet

The objective of this work is to take into account the influence of surface defects on the behaviour of structures up to rupture, without a fine description of the geometry of the perturbations. The proposed approach relies mainly on two tools: a detailed asymptotic analysis of the Navier equations and the use of strong-discontinuity models. A strategy coupling the two approaches, allowing the behaviour of the structure to be analysed up to rupture, is also presented.

ano.nymous@ccsd.cnrs.fr.invalid (Delphine Brancherie), Delphine Brancherie

The nutrient-poor grasslands of Western Europe are of major conservation concern because land use changes threaten their high biodiversity. Studies assessing their characteristics show that their past and ongoing dynamics are strongly related to human activities. Yet, the initial development patterns of this specific ecosystem remain unclear. Here, we examine findings from previous paleoecological investigations performed at local level on European grassland areas ranging from several hundred square meters to several square kilometers. Comparing data from these locally relevant studies at a regional scale, we investigate these grasslands' spatiotemporal patterns of origin and long-term dynamics. The study is based on taxonomic identification and radiocarbon AMS dating of charcoal pieces from soil/soil sediment archives of nutrient-poor grasslands in Mediterranean and temperate Western Europe (La Crau plain, Mont Lozère, Grands Causses, Vosges Mountains, Franconian Alb, and Upper-Normandy region). We address the following questions: (1) What are the key determinants of the establishment of these nutrient-poor grasslands? (2) What temporal synchronicities might there be? and (3) What is the spatial scale of these grasslands' past dynamics? The nutrient-poor grasslands in temperate Western Europe are found to result from the first anthropogenic woodland clearings during the late Neolithic, revealed by fire events in mesophilious mature forests. In contrast, the sites with Mediterranean affinities appear to have developed at earlier plant successional stages (pine forest, matorral), established before the first human impacts in the same period. However, no general pattern of establishment and dynamics of the nutrient-poor grasslands could be identified. Local mechanisms appear to be the key determinants of the dynamics of these ecosystems. Nevertheless, this paleoecological synthesis provides insights into past climate or human impacts on present-day vegetation.

ano.nymous@ccsd.cnrs.fr.invalid (Vincent Robin), Vincent Robin

This paper deals with parameter and state estimation in a bounded-error context for uncertain dynamical aerospace models, when the input is optimized or not. In a bounded-error context, perturbations are assumed bounded but otherwise unknown. The parameters to be estimated are also considered bounded. The tools of the presented work are based on a guaranteed numerical set integration solver for ordinary differential equations combined with adapted set inversion computation. The main contribution of this work consists in developing procedures for parameter estimation whose performance is closely related to the input of the system. In this paper, a comparison with a classical non-optimized input is proposed.

ano.nymous@ccsd.cnrs.fr.invalid (Qiaochu Li), Qiaochu Li