Hidden Markov models (HMMs) are used in many different fields to study the dynamics of a process that cannot be observed directly. However, in some cases, the dependency structure of an HMM is too simple to describe the dynamics of the hidden process. In particular, in some applications in finance or in ecology, the transition probabilities of the hidden Markov chain may also depend on the current observation. In this work we are interested in extending the classical HMM to this situation. We define a new model, referred to as the Observation-Driven Hidden Markov Model (OD-HMM). We present a complete study of the general non-parametric OD-HMM with discrete and finite state spaces (hidden and observed variables). We study its identifiability. Then we study the consistency of the maximum likelihood estimators. We derive the associated forward-backward equations for the E-step of the EM algorithm. The quality of the procedure is tested on simulated data sets. Finally, we illustrate the use of the model in an application to the dynamics of annual plants. This work sets theoretical and practical foundations for a new framework that could be further extended, on the one hand to the parametric context to simplify estimation, and on the other hand to hidden semi-Markov models for more realism.
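A minimal simulation sketch of the driving mechanism (the state spaces, the matrices A and B, and the function name are illustrative toys, not the paper's model): the hidden chain's transition kernel depends on both the current hidden state and the current observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy OD-HMM with 2 hidden states and 2 observation symbols.
# A[x, y] is the row P(X_{t+1} = . | X_t = x, Y_t = y): the hidden
# transition is driven by the current observation, unlike a classical HMM.
A = np.array([[[0.9, 0.1], [0.3, 0.7]],
              [[0.2, 0.8], [0.6, 0.4]]])   # shape (n_hidden, n_obs, n_hidden)
B = np.array([[0.8, 0.2],                  # B[x, y] = P(Y_t = y | X_t = x)
              [0.1, 0.9]])

def simulate_od_hmm(T, x0=0):
    """Simulate T steps of the observation-driven hidden Markov chain."""
    xs, ys = [x0], []
    for _ in range(T):
        x = xs[-1]
        y = rng.choice(2, p=B[x])            # emit observation given hidden state
        ys.append(y)
        xs.append(rng.choice(2, p=A[x, y]))  # transition driven by (x, y)
    return np.array(xs[:-1]), np.array(ys)

xs, ys = simulate_od_hmm(500)
```

Setting all slices A[x, 0] equal to A[x, 1] recovers a classical HMM, which makes the extension explicit.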

ano.nymous@ccsd.cnrs.fr.invalid (Hanna Bacave), Hanna Bacave

We deploy artificial neural networks to unfold neutron spectra from measured energy-integrated quantities. These neutron spectra are an important input for computing the absorbed dose and the kerma, serving radiation protection as well as nuclear safety. The architectures are inspired by convolutional neural networks. The first is built from residual transposed-convolution blocks, while the second is a modified version of the U-net architecture. A large and balanced dataset is simulated under "realistic" physical constraints to train the architectures efficiently. Results show highly accurate predictions of neutron spectra ranging from thermal up to fast spectra. The dataset processing, the attention paid to performance metrics, and the hyperparameter optimization underlie the architectures' robustness.

ano.nymous@ccsd.cnrs.fr.invalid (Maha Bouhadida), Maha Bouhadida

We study the time evolution of an increasing stochastic process governed by a first-order stochastic differential system. This defines a particular piecewise deterministic Markov process (PDMP). We consider a Markov renewal process (MRP) associated with the PDMP and its Markov renewal equation (MRE), which is solved in order to obtain a closed-form expression of the transition function of the PDMP. This is then applied in the framework of survival analysis to evaluate the reliability function of a given system. We give a numerical illustration and compare the analytical solution with the Monte Carlo estimator.
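The Monte Carlo comparison mentioned above can be sketched as follows, assuming only the definition R(t) = P(T > t) of the reliability function; the exponential lifetime model and the function names are illustrative, chosen because the estimate can be checked against a closed form.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_reliability(sample_lifetimes, t_grid, n=10_000):
    """Monte Carlo estimate of the reliability function R(t) = P(T > t):
    the fraction of n simulated failure times exceeding each t."""
    T = sample_lifetimes(n)
    return np.array([(T > t).mean() for t in t_grid])

# Exponential lifetimes, for which the closed form is R(t) = exp(-t).
t_grid = np.linspace(0.0, 3.0, 7)
R_hat = mc_reliability(lambda n: rng.exponential(1.0, n), t_grid, n=200_000)
R_true = np.exp(-t_grid)
```

With 200,000 draws the pointwise Monte Carlo error is of order 10^-3, which is the kind of gap the paper's analytical solution removes entirely.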

ano.nymous@ccsd.cnrs.fr.invalid (Julien Chiquet), Julien Chiquet

During a severe accident in a nuclear reactor, extreme temperatures may be reached (T > 2500 K). In these conditions, the nuclear fuel may react with the Zircaloy cladding and then with the steel vessel, forming a mixture of solid and liquid phases called in-vessel corium. In the worst scenario, this mixture may penetrate the vessel and reach the concrete underneath the reactor. In order to develop the TAF-ID thermodynamic database (www.oecd-nea.org/science/taf-id) on nuclear fuels and to predict the high-temperature behaviour of the corium + concrete system, new high-temperature thermodynamic data are needed. The LM2T at the CEA Saclay centre started an experimental campaign of phase-equilibria measurements at high temperature (up to 2600 K) on corium sub-systems of interest. In particular, a heat treatment at 2500 K has been performed on two prototypic ex-vessel corium samples (within the U-Zr-Al-Ca-Si-O system) with different amounts of CaO and SiO$_2$. The results show that, depending on the SiO$_2$ content, the final configuration of the samples can be significantly different. The sample with the higher CaO content showed a dendritic structure representative of a single quenched liquid phase, whilst the sample richer in SiO$_2$ exhibited a microstructure which suggests the presence of a liquid miscibility gap. Furthermore, a new laser heating setup has been conceived. This technique allows measurements at very high temperature (T > 3000 K) while limiting the interactions between the sample and the surroundings.

ano.nymous@ccsd.cnrs.fr.invalid (Andrea Quaini), Andrea Quaini

Lebesgue integration is a well-known mathematical tool, used for instance in probability theory, real analysis, and numerical mathematics. Thus, its formalization in a proof assistant is to be designed to fit different goals and projects. Once the Lebesgue integral is formally defined and the first lemmas are proved, the question of the convenience of the formalization naturally arises. To check it, a useful extension is Tonelli's theorem, stating that the (double) integral of a nonnegative measurable function of two variables can be computed by iterated integrals, and allowing the order of integration to be switched. This article describes the formal definition and proof in Coq of product sigma-algebras, product measures and their uniqueness, and the construction of iterated integrals, up to Tonelli's theorem. We also advertise the Lebesgue induction principle provided by an inductive type for nonnegative measurable functions.

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

This work is part of a general study on the long-term safety of the geological repository of nuclear wastes. A diffusion equation with a moving boundary in one dimension is introduced and studied. The model describes some mechanisms involved in corrosion processes at the surface of carbon steel canisters in contact with a claystone formation. The main objective of the paper is to prove the existence of global weak solutions to the problem. For this, a semi-discrete in time minimizing movements scheme à la De Giorgi is introduced. First, the existence of solutions to the scheme is established and then, using a priori estimates, it is proved that as the time step goes to zero these solutions converge up to extraction towards a weak solution to the free boundary model.

ano.nymous@ccsd.cnrs.fr.invalid (Benoît Merlet), Benoît Merlet

The variational finite element solution of Cauchy's problem, expressed in the Steklov-Poincaré framework and regularized by the Lavrentiev method, was introduced and computationally assessed in [Inverse Problems in Science and Engineering, 18, 1063-1086 (2011)]. The present work concentrates on the numerical analysis of the semi-discrete problem. We carry out the mathematical study of the error and rigorously establish the convergence of the global bias-variance error.

ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem

Motivation: Comparing the secondary structures of single-stranded nucleic acids (ssNAs) is fundamental when investigating their function and evolution and when predicting the effect of mutations on their structures. Many comparison metrics exist, but they are either too elaborate or not sensitive enough to distinguish close ssNA structures. Results: In this context, we developed AptaMat, a simple and sensitive algorithm for comparing ssNA secondary structures, based on matrices representing the structures and a metric built upon the Manhattan distance in the plane. We applied AptaMat to several examples and compared the results to those obtained by the most frequently used metrics, namely the Hamming distance and RNAdistance, and by a recently developed image-based approach. We showed that AptaMat is able to discriminate between similar sequences, outperforming all the other metrics considered here. In addition, we showed that AptaMat was able to correctly classify 14 RFAM families within a clustering procedure.
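For reference, the Hamming baseline cited above simply counts position-wise differences between two dot-bracket strings of equal length; a minimal sketch (the function name and example structures are ours, not from the paper):

```python
def hamming_distance(s1: str, s2: str) -> int:
    """Position-wise Hamming distance between two dot-bracket
    secondary-structure strings of equal length."""
    if len(s1) != len(s2):
        raise ValueError("structures must have the same length")
    return sum(a != b for a, b in zip(s1, s2))

# A hairpin and a slightly shorter stem: only two positions differ.
d = hamming_distance("(((...)))", "((.....))")
```

Because it ignores which positions are paired with which, this metric can rate structurally distant foldings as close, which is the insensitivity the matrix-based Manhattan approach is designed to overcome.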

ano.nymous@ccsd.cnrs.fr.invalid (Thomas Binet), Thomas Binet

We focus on the ill-posed data completion problem and its finite element approximation, when recast via the Kohn-Vogelius variational duplication artifice and the Steklov-Poincaré condensation operators. We try to understand the useful hidden features of both the exact and the discrete problems. When discretized with finite elements of degree one, the discrete and exact problems behave in diametrically opposite ways. Indeed, existence of the discrete solution is always guaranteed, while its uniqueness may be lost. In contrast, the solution of the exact problem may not exist, but it is unique. We show how the existence of so-called "weak spurious modes" of the exact variational formulation is the source of instability and the reason why existence may fail. For the discrete problem, we find that the cause of non-uniqueness is actually the occurrence of "spurious modes". We track their fading effect asymptotically as the mesh size tends to zero. In order to restore uniqueness, we recall the discrete version of the Holmgren principle, introduced in [Azaïez et al., IPSE, 18, 2011], and we discuss the effect of the finite element mesh on uniqueness, using some basic material from graph theory.

ano.nymous@ccsd.cnrs.fr.invalid (F Ben Belgacem), F Ben Belgacem

In this dissertation we are concerned with semiparametric models. These models are successful and influential in mathematical statistics due to their scientific utility and intriguing theoretical complexity. In the first part of the thesis, we consider the problem of estimating a parameter θ, in Banach spaces, maximizing some criterion function which depends on an unknown nuisance parameter h, possibly infinite-dimensional. We show that the m out of n bootstrap, in a general setting, is weakly consistent under conditions similar to those required for weak convergence of non-smooth M-estimators. In this framework, delicate mathematical derivations are required to cope with estimators of the nuisance parameters inside non-smooth criterion functions. We then investigate an exchangeably weighted bootstrap for function-valued estimators defined as a zero point of a function-valued random criterion function. The main ingredient is the use of a differential identity that applies when the random criterion function is linear in terms of the empirical measure. A large number of bootstrap resampling schemes emerge as special cases of our setting. Examples of applications from the literature are given to illustrate the generality and usefulness of our results. The second part of the thesis is devoted to statistical models with multiple change-points. The main purpose of this part is to investigate the asymptotic properties of semiparametric M-estimators with non-smooth criterion functions of the parameters of multiple change-points models, for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters common to all segments. Consistency of the semiparametric M-estimators of the change-points is established and the rate of convergence is determined. The asymptotic normality of the semiparametric M-estimators of the parameters of the within-segment distributions is established under quite general conditions. We finally extend our study to the censored data framework. We investigate the performance of our methodologies for small samples through simulation studies.

ano.nymous@ccsd.cnrs.fr.invalid (Anouar Abdeldjaoued Ferfache), Anouar Abdeldjaoued Ferfache

[...]

ano.nymous@ccsd.cnrs.fr.invalid (Florian de Vuyst), Florian de Vuyst

In this paper we analyse a finite volume scheme for a nonlocal version of the Shigesada-Kawazaki-Teramoto (SKT) cross-diffusion system. We prove the existence of solutions to the scheme, derive qualitative properties of the solutions, and prove the convergence of the scheme. The proofs rely on a discrete entropy-dissipation inequality, discrete compactness arguments, and a novel adaptation of the so-called duality method at the discrete level. Finally, through numerical experiments, we investigate the influence of the nonlocality in the system on the convergence properties of the scheme, on the scheme as an approximation of the local system, and on the development of diffusive instabilities.

ano.nymous@ccsd.cnrs.fr.invalid (Maxime Herda), Maxime Herda

In this paper, we consider the problem of identifying a single moving point source for a three-dimensional wave equation from boundary measurements. Precisely, we show that the knowledge of the field generated by the source at six different points of the boundary over a finite time interval is sufficient to determine uniquely its trajectory. We also derive a Lipschitz stability estimate for the inversion.

ano.nymous@ccsd.cnrs.fr.invalid (Hanin Al Jebawy), Hanin Al Jebawy

Integration, just as much as differentiation, is a fundamental calculus tool that is widely used in many scientific domains. Formalizing the mathematical concept of integration and the associated results in a formal proof assistant helps provide the highest confidence in the correctness of numerical programs involving the use of integration, directly or indirectly. By its capability to extend the (Riemann) integral to a wide class of irregular functions, and to functions defined on more general spaces than the real line, the Lebesgue integral is perfectly suited for use in mathematical fields such as probability theory, numerical mathematics, and real analysis. In this article, we present the Coq formalization of $\sigma$-algebras, measures, simple functions, and integration of nonnegative measurable functions, up to the full formal proofs of the Beppo Levi (monotone convergence) theorem and Fatou's lemma. More than a plain formalization of the known literature, we present several design choices made to balance the harmony between mathematical readability and usability of Coq theorems. These results are a first milestone toward the formalization of $L^p$~spaces such as Banach spaces.

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

[...]

ano.nymous@ccsd.cnrs.fr.invalid (Elias Zgheib), Elias Zgheib

Recent works in the Boundary Element Method (BEM) community have been devoted to the derivation of fast techniques to perform the matrix-vector products needed by iterative solvers. Fast BEMs are now very mature. However, it has been shown that the number of iterations can significantly hinder the overall efficiency of fast BEMs. The derivation of robust preconditioners is thus necessary to increase the size of the problems that can be considered. Analytical preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of discretization. The main contribution of this paper is to propose new analytical preconditioners to treat Neumann exterior scattering problems in 2D and 3D elasticity. These preconditioners are local approximations of the adjoint Neumann-to-Dirichlet map. We propose three approximations of different orders. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). An analytical spectral study confirms the expected behavior of the preconditioners, i.e., a better eigenvalue clustering, especially in the elliptic part, contrary to the standard first-kind CFIE. We provide various 2D numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be independent of the density of discretization points per wavelength, which is not the case for the standard CFIE. In addition, it is less sensitive to the frequency.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

An innovative data-driven model-order reduction technique is proposed to model dilute micrometric or nanometric suspensions of microcapsules, i.e., microdrops protected by a thin hyperelastic membrane, which are used in healthcare as innovative drug vehicles. We consider a microcapsule flowing in a similar-size microfluidic channel and systematically vary the governing parameters, namely the capillary number, the ratio of viscous to elastic forces, and the confinement ratio, the ratio of capsule to tube size. The resulting space-time-parameter problem is solved using two global POD reduced bases, determined in the offline stage for the space and parameter variables, respectively. A suitable low-order spatial reduced basis is then computed in the online stage for any new parameter instance. The time evolution of the capsule dynamics is obtained by identifying the nonlinear low-order manifold of the reduced variables; for that, a point cloud of reduced data is computed and a diffuse approximation method is used. Numerical comparisons between the full-order fluid-structure interaction model and the reduced-order one confirm both the accuracy and the stability of the reduction technique over the whole admissible parameter domain. We believe that such an approach can be applied to a broad range of coupled problems, especially those involving quasistatic models of structural mechanics.

ano.nymous@ccsd.cnrs.fr.invalid (Toufik Boubehziz), Toufik Boubehziz

A priori identifiability is a theoretical property that depends only on the model and guarantees that its parameters can be uniquely determined from observations. This paper provides a survey of the various and numerous definitions of a priori identifiability given in the literature, for both deterministic continuous- and discrete-time models. A classification is proposed, distinguishing analytical from algebraic definitions as well as local from global ones. Moreover, the paper provides an overview of the distinct methods to test parameter identifiability. They are classified into the so-called output equality approaches, local state isomorphism approaches, and differential algebra approaches. A few examples are detailed to illustrate the methods and complete this survey.

ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett-Collin), Floriane Anstett-Collin

CRF19 is a recombinant form of HIV-1 subtypes D, A1 and G, which was first sampled in Cuba in 1999 but was already present there in the 1980s. CRF19 has been reported almost uniquely in Cuba, where it accounts for ∼25% of new HIV-positive patients and causes rapid progression to AIDS (∼3 years). We analyzed a large data set comprising ∼350 pol and env sequences sampled in Cuba over the last 15 years and ∼350 from the Los Alamos database. This data set contained both CRF19 sequences (∼315) and A1, D and G sequences. We performed and combined analyses for the three A1, G and D regions, using fast maximum likelihood approaches, including: (1) phylogeny reconstruction, (2) spatio-temporal analysis of the virus spread, and ancestral character reconstruction for (3) transmission mode and (4) drug resistance mutations (DRMs). We verified these results with a Bayesian approach. This allowed us to acquire new insights into the origin and transmission patterns of CRF19. We showed that CRF19 recombined between 1966 and 1977, most likely in the Cuban community stationed in the Congo region. We further investigated the spread of CRF19 at the level of Cuban provinces, and discovered that the epidemic started in the 1970s, most probably in Villa Clara, that it was at first driven by heterosexual transmissions, and that it then quickly spread in the 1980s within the "men having sex with men" (MSM) community, with multiple transmissions back to heterosexuals. The analysis of the transmission patterns of common DRMs found very few resistance transmission clusters. Our results show a very early introduction of CRF19 in Cuba, which could explain its local epidemiological success. Ignited by a major founder event, the epidemic then followed a pattern similar to that of other subtypes and CRFs in Cuba. The reason for the short time to AIDS remains to be understood and requires specific surveillance, in Cuba and elsewhere.

ano.nymous@ccsd.cnrs.fr.invalid (Anna Zhukova), Anna Zhukova

We extend the general stochastic matching model on graphs introduced in [13], to matching models on multigraphs, that is, graphs with self-loops. The evolution of the model can be described by a discrete time Markov chain whose positive recurrence is investigated. Necessary and sufficient stability conditions are provided, together with the explicit form of the stationary probability in the case where the matching policy is 'First Come, First Matched'.
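A toy simulation of the 'First Come, First Matched' policy on a multigraph with a self-loop (the class set, edge set, and arrival weights below are illustrative, not taken from the paper; the weights are chosen so that the chain is stable):

```python
import random

random.seed(1)

# Toy compatibility multigraph on classes {0, 1, 2}, with a self-loop
# on class 1: two waiting class-1 items may be matched together.
edges = {(0, 1), (1, 2), (1, 1)}

def compatible(a, b):
    return (a, b) in edges or (b, a) in edges

def fcfm_step(queue, item):
    """'First Come, First Matched': the arriving item is matched with
    the oldest compatible waiting item, or joins the end of the queue."""
    for i, waiting in enumerate(queue):
        if compatible(waiting, item):
            del queue[i]          # match found: both items leave the system
            return queue
    queue.append(item)            # no match: the item waits
    return queue

queue = []
for _ in range(10_000):
    # class-1 items arrive more often, which keeps the chain stable
    queue = fcfm_step(queue, random.choices([0, 1, 2], weights=[1, 3, 1])[0])
```

By construction the queue never holds two mutually compatible items; making class 1 rare instead violates the stability condition and the queue of classes 0 and 2 grows without bound.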

ano.nymous@ccsd.cnrs.fr.invalid (Jocelyn Begeot), Jocelyn Begeot

We address the problem of unsupervised domain adaptation under the setting of generalized target shift (where both class-conditional and label shifts occur). We show that in this setting, for good generalization, it is necessary to learn with similar source and target label distributions and to match the class-conditional probabilities. For this purpose, we propose an estimation of the target label proportions by blending mixture estimation and optimal transport. This estimation comes with theoretical guarantees of correctness. Based on this estimation, we learn a model by minimizing an importance-weighted loss and a Wasserstein distance between weighted marginals. We prove that this minimization allows matching of the class-conditionals under mild assumptions on their geometry. Our experimental results show that our method performs better on average than competitors across a range of domain adaptation problems, including digits, VisDA and Office. Code for this paper is available at \url{https://github.com/arakotom/mars_domain_adaptation}.

ano.nymous@ccsd.cnrs.fr.invalid (Alain Rakotomamonjy), Alain Rakotomamonjy

Internet of Things (IoT) applications using sensors and actuators raise new privacy-related threats, such as the tracking and profiling of drivers and vehicles. These threats can be addressed by developing adaptive and context-aware privacy protection solutions that cope with the environmental constraints (memory, energy, communication channel, etc.) which limit the applicability of cryptographic schemes. This paper proposes a privacy-preserving solution in the ITS context relying on a game-theoretic model between two actors (data holder and data requester), in which the requester either offers an incentive in exchange for a privacy concession or leads an active attack. We describe the game elements (actors, roles, states, strategies, and transitions) and find an equilibrium point reaching a compromise between privacy concessions and incentive motivation. Finally, we present numerical results to analyze and evaluate the game-theoretic formulation.

ano.nymous@ccsd.cnrs.fr.invalid (Arbia Riahi Sfar), Arbia Riahi Sfar

We present here the results of the characterization of the chemical composition and size distribution of aerosols released during laser cutting of two types of fuel debris simulants (Ex-Vessel and In-Vessel scenarios), in air and underwater conditions, in the context of the dismantling of Fukushima Daiichi. The aerosols systematically have an aerodynamic mass median diameter below 1 μm, with particle sizes generally between 60 nm and 160 nm for cutting in air, and larger diameters (300-400 nm) for underwater experiments. Regarding the chemical composition, iron, chromium and nickel together account for more than 50% of the samples, whereas the radioactive surrogate of uranium (hafnium) is undetectable. When compositions are transposed to radioactivity, taking into account radioisotope inventories 10 years after the accident, it is well evidenced that the radioactivity is carried by smaller particles in the air cutting tests (median size around 100 nm) than underwater (median size around 400 nm): 50% of the radioactivity is borne by particles below 90 nm, and 99% below 950 nm. Caesium carries the largest part of the radioactivity at all sizes below 1 μm in the case of the Ex-Vessel fuel debris simulant. For the In-Vessel simulant, the aerosol median size for the radioactivity is around 100 nm, with 59% of the radioactivity carried by strontium, 17% by barium, 16% by minor actinides (modelled by cerium) and 7% by caesium. For sizes above 1.6 μm, cerium, representing alpha emitters (surrogate of plutonium), is almost the only radioactivity-bearing element (96-97% of the radioactivity). The data produced here could already be used for modelling, or for designing strategies to implement laser cutting in situ for fuel debris retrieval, together with the associated safety strategies.

ano.nymous@ccsd.cnrs.fr.invalid (Claire Dazon), Claire Dazon

In the context of the dismantling of the Fukushima Daiichi reactors, several projects have been funded by the Japanese government to prepare the corium retrieval operations. Within this framework, a joint study by ONET Technologies and the laboratories of CEA and IRSN demonstrated the feasibility of using the laser cutting technique and estimated the aerosol source term thus generated. Two corium simulants, synthesized and characterized by CEA-Cadarache, were subjected to laser cutting tests in air and underwater in the DELIA facility of CEA Saclay, and the emitted aerosols were characterized by IRSN. The characterization of the emitted particles in terms of concentration and size distribution provides information to predict, in particular, particle transport and deposition, but knowledge of the chemical composition by size class is necessary for better management of occupational and environmental risks. This article presents the results concerning the characterization of the chemical composition of the aerosol of a corium simulant, under laser cutting in air, and the associated size distribution.

ano.nymous@ccsd.cnrs.fr.invalid (Emmanuel Porcheron), Emmanuel Porcheron

Let $S_D^{\Omega}$ be the Stokes operator defined in a bounded domain $\Omega$ of $\mathbb{R}^3$ with Dirichlet boundary conditions. We prove that, generically with respect to the domain $\Omega$ with $C^5$ boundary, the spectrum of $S_D^{\Omega}$ satisfies a non-resonant property introduced by C. Foias and J.C. Saut in [17] to linearize the Navier-Stokes system in a bounded domain $\Omega$ of $\mathbb{R}^3$ with Dirichlet boundary conditions. For that purpose, we first prove that, generically with respect to the domain $\Omega$ with $C^5$ boundary, all the eigenvalues of $S_D^{\Omega}$ are simple. That answers positively a question raised by J.H. Ortega and E. Zuazua in [27, Section 6]. The proofs of these results follow a standard strategy based on a contradiction argument requiring shape differentiation. One needs to shape differentiate the initial problem at least twice in the direction of carefully chosen domain variations. The main step of the contradiction argument amounts to studying the evaluation of Dirichlet-to-Neumann operators associated with these domain variations.

ano.nymous@ccsd.cnrs.fr.invalid (Yacine Chitour), Yacine Chitour

As part of a multi-year programme, test-pit campaigns were carried out on both slopes of the Petit-Saint-Bernard pass (2188 m, western Alps), between 750 and 3000 m of altitude. The working method sets aside surface surveys in favour of numerous hand-dug test pits, located in selected topographic contexts and carried down to the base of the Holocene fills. The results document, over the long term, the evolution of the pedo-sedimentary dynamics and the human use of the various altitude belts. The significance of the archaeological data collected is discussed with respect to the state of knowledge in a comparison zone comprising the neighbouring valleys of the western Alps, to existing settlement models, and to the taphonomic indications provided by the pedo-sedimentary study. A programme of complementary analyses intended to clarify the context, the taphonomy and the functional status [...]

ano.nymous@ccsd.cnrs.fr.invalid (Pierre-Jérôme Rey), Pierre-Jérôme Rey

The purpose is the finite element approximation of the heat diffusion problem in composite media, with non-linear contact resistance at the interfaces. As already explained in [Journal of Scientific Computing, {\bf 63}, 478-501 (2015)], hybrid dual formulations are well fitted to complicated composite geometries and provide tractable approaches to express variationally the jumps of the temperature. The finite element spaces are standard. Interface contributions are added to the variational problem to account for the contact resistance. This is an important advantage for developers of computing codes. We undertake the analysis of the non-linear heat problem for a wide range of contact resistances and we investigate its discretization by hybrid dual finite element methods. Numerical experiments are presented at the end to support the theoretical results.

ano.nymous@ccsd.cnrs.fr.invalid (F Ben Belgacem), F Ben Belgacem

This paper introduces a new approach for forecasting solar radiation series at a given station at very short time scales. We build a multivariate model using a few stations (3 stations) separated by irregular distances ranging from 26 km to 56 km. The proposed model is a spatio-temporal vector autoregressive (VAR) model specifically designed for the analysis of spatially sparse spatio-temporal data. This model differs from classic linear models in using spatial and temporal parameters, where the available predictors are the lagged values at each station. A spatial structure of stations is defined by the sequential introduction of predictors in the model. Moreover, an iterative strategy selects the necessary stations, removing uninformative predictors and selecting the optimal order p. We study the performance of this model. The error metric, the relative root mean squared error (rRMSE), is reported at different short time scales. Moreover, we compare the results of our model to the simple and well-known persistence model and to those found in the literature.
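A least-squares sketch of a lag-p VAR in the spirit described above (the synthetic three-station data, the matrix A, and the function names are ours; the paper's model additionally introduces predictors sequentially and selects stations and the order p iteratively):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_var(Y, p):
    """Least-squares fit of a VAR(p): each station is regressed on the
    p lagged values of every station, plus an intercept."""
    T, k = Y.shape
    X = np.hstack([Y[p - l:T - l] for l in range(1, p + 1)])  # lags 1..p
    X = np.hstack([np.ones((T - p, 1)), X])
    coef, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)
    return coef                                # shape (1 + k * p, k)

def forecast_one_step(coef, history, p):
    """One-step-ahead forecast from the last p rows of `history`."""
    x = np.concatenate([[1.0]] + [history[-l] for l in range(1, p + 1)])
    return x @ coef

# Synthetic 3-station VAR(1) data, to check that the fit recovers A.
A = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.2],
              [0.0, 0.2, 0.5]])
Y = np.zeros((2000, 3))
for t in range(1, 2000):
    Y[t] = Y[t - 1] @ A.T + 0.1 * rng.standard_normal(3)
coef = fit_var(Y, p=1)
y_next = forecast_one_step(coef, Y, p=1)
```

On this synthetic series the estimated coefficient block approximates the true dynamics A, which is the sanity check one would run before moving to real irradiance data.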

ano.nymous@ccsd.cnrs.fr.invalid (Maïna André), Maïna André

We introduce the binacox, a prognostic method to deal with the problem of detecting multiple cut-points per feature in a multivariate setting where a large number of continuous features are available. The method is based on the Cox model and combines one-hot encoding with the binarsity penalty, which uses total-variation regularization together with an extra linear constraint, and enables feature selection. Original nonasymptotic oracle inequalities for prediction (in terms of Kullback-Leibler divergence) and estimation with a fast rate of convergence are established. The statistical performance of the method is examined in an extensive Monte Carlo simulation study, and then illustrated on three publicly available genetic cancer datasets. On these high-dimensional datasets, our proposed method significantly outperforms state-of-the-art survival models regarding risk prediction in terms of the C-index, with a computing time orders of magnitude faster. In addition, it provides powerful interpretability from a clinical perspective by automatically pinpointing significant cut-points in relevant variables.
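A minimal sketch of the one-hot encoding step at the heart of such a method: a continuous feature is expanded into interval indicators at candidate cut-points. The cut-point values here are arbitrary, and the total-variation-penalized Cox fit that follows in the actual method is not shown.

```python
import numpy as np

def binarize(x, cuts):
    """One-hot encode a continuous feature into the intervals defined
    by candidate cut-points -- the encoding on which a binacox-style
    penalized Cox fit would then operate."""
    cuts = np.asarray(cuts)
    # Interval index for each sample: len(cuts) + 1 bins in total.
    idx = np.searchsorted(cuts, x, side="right")
    onehot = np.zeros((len(x), len(cuts) + 1))
    onehot[np.arange(len(x)), idx] = 1.0
    return onehot

x = np.array([0.1, 0.4, 0.7, 0.9])
B = binarize(x, cuts=[0.3, 0.8])
print(B)   # each row has exactly one 1, marking its interval
```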

ano.nymous@ccsd.cnrs.fr.invalid (Simon Bussy), Simon Bussy

We introduce a new algorithm of proper generalized decomposition (PGD) for parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation---in the mean parametric norm associated to the elliptic operator---of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the proper orthogonal decomposition (POD) subspaces, except that in our case the norm is parameter-dependent. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step, and we prove that the partial sums converge to the continuous solution in the mean parametric elliptic norm. We show that the standard PGD for the considered parametric problem is strongly related to the deflation algorithm introduced in this paper. This opens the possibility of computing the PGD expansion by directly solving the optimization problems that yield the optimal subspaces.
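The deflation idea can be sketched on a plain snapshot matrix: repeatedly solve an (approximate) best rank-1 problem, subtract the term, and accumulate the partial sum. This is only a finite-dimensional analogue on invented data, not the paper's parametric PDE setting.

```python
import numpy as np

def rank_one(A, iters=300):
    """Approximate best rank-1 term of A (Frobenius norm) by power
    iteration -- a stand-in for solving the per-term optimization
    problem that yields each optimal subspace in the deflation."""
    v = np.ones(A.shape[1])
    for _ in range(iters):
        u = A @ v
        u /= np.linalg.norm(u)
        v = A.T @ u               # carries the singular value
    return np.outer(u, v)

rng = np.random.default_rng(1)
# Synthetic snapshot matrix standing in for the parametric solution:
# rows = parameter samples, columns = degrees of freedom (rank 5).
A = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))

# Deflation: subtract each rank-1 term and accumulate the partial sum.
R, S = A.copy(), np.zeros_like(A)
errors = []
for _ in range(8):
    T1 = rank_one(R)
    S, R = S + T1, R - T1
    errors.append(np.linalg.norm(R))

print(errors)   # the partial sums converge: the residual shrinks
```

Each subtraction is an orthogonal projection of the residual, so the error sequence is non-increasing by construction.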

ano.nymous@ccsd.cnrs.fr.invalid (M. Azaïez), M. Azaïez

The γ-irradiation of a biphasic system composed of tri-n-butylphosphate in hydrogenated tetrapropylene (TPH) in contact with palladium(II) nitrate in nitric acid aqueous solution led to the formation of two precipitates. A thorough characterization of these solids was performed by means of various analytical techniques including X-ray Diffraction (XRD), Thermal Gravimetric Analysis coupled with Differential Scanning Calorimetry (TGA-DSC), X-ray Photoelectron Spectroscopy (XPS), Infrared (IR), Raman and Nuclear Magnetic Resonance (NMR) spectroscopy, and ElectroSpray Ionization Mass Spectrometry (ESI-MS). The investigations showed that the two precipitates exhibit quite similar structures. They are composed of at least two compounds: palladium cyanide and palladium species containing ammonium, phosphorus or carbonyl groups. Several mechanisms are proposed to explain the formation of Pd(CN)2.

ano.nymous@ccsd.cnrs.fr.invalid (Bénédicte Simon), Bénédicte Simon

For each pair ε = (ε_1, ε_2) of positive parameters, we define a perforated domain Ω_ε by making a small hole of size ε_1ε_2 in an open regular subset Ω of R^n (n ≥ 3). The hole is situated at distance ε_1 from the outer boundary ∂Ω of the domain. Then, when ε → (0, 0), both the size of the hole and its distance from ∂Ω tend to zero, but the size shrinks faster than the distance. In such a perforated domain Ω_ε we consider a Dirichlet problem for the Laplace equation and we denote by u_ε its solution. Our aim is to represent the map that takes ε to u_ε in terms of real analytic functions of ε defined in a neighborhood of (0, 0). In contrast with previous results, valid only for restrictions of u_ε to suitable subsets of Ω_ε, we prove a global representation formula that holds on the whole of Ω_ε. Such a formula allows us to rigorously justify multi-scale expansions, which we subsequently construct.

ano.nymous@ccsd.cnrs.fr.invalid (Virginie Bonnaillie-Noël), Virginie Bonnaillie-Noël

This paper focuses on solving coupled problems of lumped parameter models. Such problems are of interest for the simulation of severe accidents in nuclear reactors: these coarse-grained models allow for fast calculations for statistical analysis used for risk assessment and solutions of large problems when considering the whole severe accident scenario. However, this modeling approach has several numerical flaws. Besides, in this industrial context, computational efficiency is of great importance, leading to various numerical constraints. The objective of this research is to analyze the applicability of explicit coupling strategies to solve such coupled problems and to design implicit coupling schemes allowing stable and accurate computations. The proposed schemes are theoretically analyzed and tested within CEA's PROCOR platform on a problem of heat conduction solved with coupled lumped parameter models and coupled 1D models. Numerical results are discussed and allow us to emphasize the benefits of using the designed coupling schemes instead of the usual explicit coupling schemes.

ano.nymous@ccsd.cnrs.fr.invalid (Louis Viot), Louis Viot

We consider an equation that models the diffusion of temperature in a graphite foam containing salt capsules. The temperature transition conditions between the graphite and the salt must be handled correctly. We carry out the analysis of this model and prove that it is well posed. We then propose a finite element discretization and perform the a priori analysis of the discrete problem. Some numerical experiments confirm the interest of this approach.

ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem

Electron probe microanalysis (EPMA) makes it possible to quantify, with great accuracy, the elemental concentrations of samples of unknown composition. It allows, for example, quantifying the actinides present in fresh or irradiated nuclear fuels, supporting the management of nuclear waste, or dating certain rocks. Unfortunately, these quantitative analyses are not always feasible because reference standards are unavailable for some actinides. To overcome this difficulty, a so-called "standardless" analysis method can be employed using virtual standards. The latter are obtained from empirical formulas or from calculations based on theoretical models. However, these calculations require the knowledge of physical parameters that are generally poorly known, as is the case for X-ray production cross sections. Accurate knowledge of these cross sections is required in many applications, such as particle transport codes and Monte Carlo simulations. These computational codes are widely used in medicine, particularly in medical imaging and in electron-beam treatments. In astronomy, these data are used in simulations to predict the composition of stars and galactic clouds as well as the formation of planetary systems. In this work, the L- and M-line X-ray production cross sections of lead, thorium and uranium were measured by electron impact on self-supporting thin targets with thicknesses ranging from 0.2 to 8 nm. The experimental results were compared with the theoretical predictions of ionization cross sections calculated with the distorted-wave Born approximation (DWBA) and with the predictions of analytical formulas used in practical applications.
The ionization cross sections were converted into X-ray production cross sections using atomic relaxation parameters taken from the literature. The theoretical results of the DWBA model are in excellent agreement with the experimental results. This confirms the predictions of this model and validates its use for the calculation of virtual standards. The predictions of this model were integrated into the Monte Carlo code PENELOPE in order to calculate the X-ray intensity produced by pure actinide standards. The calculations were performed for elements with atomic numbers 89 ≤ Z ≤ 99 and for accelerating voltages ranging from the ionization threshold up to 40 kV, in steps of 0.5 kV. For practical use, the intensities calculated for the most intense L and M lines were gathered in a database. The predictions of the virtual standards thus obtained were compared with measurements performed on samples of known composition (U, UO2, ThO2, ThF4, PuO2…) and with data acquired during previous measurement campaigns. The quantification of actinides using these virtual standards showed good agreement with the expected results. This confirms the reliability of the developed virtual standards and demonstrates that the quantification of actinides by electron probe microanalysis is feasible without actinide standards and with a good level of confidence.

ano.nymous@ccsd.cnrs.fr.invalid (Aurélien Moy), Aurélien Moy

This paper deals with optimal input design for parameter estimation in a bounded-error context. Uncertain controlled nonlinear dynamical models, when the input can be parametrized by a finite number of parameters, are considered. The main contribution of this paper concerns criteria for obtaining optimal inputs in this context. Two input design criteria are proposed and analysed. They involve sensitivity functions. The first criterion requires the inversion of the Gram matrix of sensitivity functions. The second one does not require this inversion and is then applied for parameter estimation of a model taken from the aeronautical domain. The estimation results obtained using an optimal input are compared with those obtained with an input optimized in a more classical context (Gaussian measurement noise and parameters a priori known to belong to some boxes). These results highlight the potential of optimal input design in a bounded-error context.

ano.nymous@ccsd.cnrs.fr.invalid (Carine Jauberthie), Carine Jauberthie

Background and Objective: This paper deals with improving parameter estimation in terms of precision and computational time for dynamical models in a bounded-error context. Methods: To improve parameter estimation, an optimal initial state design is proposed, combined with a contractor. This contractor is based on a volumetric criterion, and an original condition for initializing it is given. Based on a sensitivity analysis, our optimal initial state design methodology consists in searching for the minimum value of a proposed criterion for the parameters of interest. In our framework, the uncertainty (on measurement noise and parameters) is assumed unknown but bounded within known intervals. Guaranteed state and sensitivity estimation have therefore been considered. An elementary effect analysis on the number of sampling times is also implemented to achieve fast and guaranteed parameter estimation. Results: The whole procedure is applied to a pharmacokinetics model and simulation results are given. Conclusions: The improvement of parameter estimation in terms of computational time and precision for the case study highlights the potential of the proposed methodology.

ano.nymous@ccsd.cnrs.fr.invalid (Qiaochu Li), Qiaochu Li

We consider a degenerate parabolic system modelling the flow of fresh and saltwater in an anisotropic porous medium in the context of seawater intrusion. We propose and analyze a nonlinear Control Volume Finite Element scheme. This scheme ensures the nonnegativity of the discrete solution without any restriction on the mesh or on the anisotropy tensor. It also provides control of the entropy. Based on these nonlinear stability results, we show that the scheme converges towards a weak solution of the problem. Numerical results are provided to illustrate the behavior of the model and of the scheme.

ano.nymous@ccsd.cnrs.fr.invalid (Ahmed Ait Hammou Oulhaj), Ahmed Ait Hammou Oulhaj

Summary of the paper "A Coq formal proof of the Lax-Milgram Theorem", CPP 2017.

ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo

We introduce in this paper a technique for the reduced-order approximation of parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation, in the mean with respect to the parameter in the quadratic norm associated to the elliptic operator, of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the Proper Orthogonal Decomposition (POD) subspaces, except that in our case the norm is parameter-dependent, so that the optimal POD subspaces cannot be characterized by means of a spectral problem. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step. We prove that the partial sums converge to the continuous solution in the mean quadratic elliptic norm.

ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaiez), Mejdi Azaiez

This paper focuses on Generalized Impedance Boundary Conditions (GIBC) with second order derivatives in the context of linear elasticity and general curved interfaces. A condition of the Wentzell type modeling thin layer coatings on some elastic structure is obtained through an asymptotic analysis of order one of the transmission problem at the thin layer interfaces with respect to the thickness parameter. We prove the well-posedness of the approximate problem and the theoretical quadratic accuracy of the boundary conditions. Then we perform a shape sensitivity analysis of the GIBC model in order to study a shape optimization/optimal design problem. We prove the existence and characterize the first shape derivative of this model. A comparison with the asymptotic expansion of the first shape derivative associated to the original thin layer transmission problem shows that we can interchange the asymptotic and shape derivative analysis. Finally we apply these results to the compliance minimization problem. We compute the shape derivative of the compliance in this context and present some numerical simulations.

ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet

The fast multipole method is an efficient technique to accelerate the solution of large scale 3D scattering problems with boundary integral equations. However, the fast multipole accelerated boundary element method (FM-BEM) is intrinsically based on an iterative solver. It has been shown that the number of iterations can significantly hinder the overall efficiency of the FM-BEM. The derivation of robust preconditioners for FM-BEM is now inevitable to increase the size of the problems that can be considered. The main constraint in the context of the FM-BEM is that the complete system is not assembled, to reduce computational times and memory requirements. Analytic preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of the discretization. The main contribution of this paper is to combine an approximate adjoint Dirichlet to Neumann (DtN) map as an analytic preconditioner with a FM-BEM solver to treat Dirichlet exterior scattering problems in 3D elasticity. The approximations of the adjoint DtN map are derived using tools proposed in [40]. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). We provide various numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be completely independent of the number of degrees of freedom and of the frequency for convex obstacles.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

The aim of this work is to account for the influence of surface defects on the behavior of structures up to failure, without a fine description of the geometry of the perturbations. The proposed approach relies mainly on two tools: a refined asymptotic analysis of the Navier equations and the use of strong-discontinuity models. A strategy coupling the two approaches, allowing the analysis of the structural behavior up to failure, is also presented.

ano.nymous@ccsd.cnrs.fr.invalid (Delphine Brancherie), Delphine Brancherie

The main purpose of this paper is to investigate the strong approximation of the p-fold integrated empirical process, p being a fixed positive integer. More precisely, we obtain the exact rate of the approximations by a sequence of weighted Brownian bridges and a weighted Kiefer process. Our arguments are based in part on results of Komlós, Major and Tusnády (1975). Applications include the two-sample testing procedures together with the change-point problems. We also consider the strong approximation of integrated empirical processes when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial sum process representation of integrated empirical processes.

ano.nymous@ccsd.cnrs.fr.invalid (Sergio Alvarez-Andrade), Sergio Alvarez-Andrade

A sensitivity analysis of a suspension model has been performed in order to highlight the most influential parameters on the sprung mass displacement. To analyse this dynamical model, a new global and bounded dynamic method is investigated. This method, based on interval analysis, consists in determining lower and upper bounds enclosing the dynamic sensitivity indices. It requires only the knowledge of the parameter variation ranges, and not the joint probability density function of the parameters, which is hard to estimate. The advantage of the proposed approach is that it takes into account the recursive behavior of the system dynamics.
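The core mechanism, propagating parameter ranges through a system recursion with interval arithmetic to obtain guaranteed lower and upper bounds, can be sketched as follows. The recursion and the numerical ranges are hypothetical stand-ins for the suspension model.

```python
# Minimal interval arithmetic, sketching how parameter ranges alone
# (no probability density function) yield guaranteed bounds on the
# output of a recursive system.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        c = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(c), max(c))

# Hypothetical damped recursion x[k+1] = a*x[k] + b, with the
# parameters a and b only known to lie in bounded intervals.
a = Interval(0.4, 0.6)
b = Interval(0.9, 1.1)
x = Interval(0.0, 0.0)
for _ in range(50):
    x = a * x + b

print((x.lo, x.hi))   # guaranteed enclosure of the steady state
```

Every trajectory generated by parameter values inside the given intervals is guaranteed to stay within the printed bounds, which is the selling point of the interval approach over sampling.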

ano.nymous@ccsd.cnrs.fr.invalid (Sabra Hamza), Sabra Hamza

A mathematical model for the forward problem in electroencephalographic (EEG) source localization in neonates is proposed. The model is able to take into account the presence and ossification process of fontanels, which are characterized by a variable conductivity. A subtraction approach is used to deal with the singularity in the source term, and existence and uniqueness results are proved for the continuous problem. Discretization is performed with 3D finite elements of type P1 and error estimates are proved in the energy (H^1) norm. Numerical simulations for a three-layer spherical model as well as for a realistic neonatal head model have been obtained and corroborate the theoretical results. A mathematical tool related to the concept of Gâteaux derivatives is introduced which is able to measure the sensitivity of the electric potential with respect to small variations in the fontanel conductivity. Numerical simulations attest that the presence of fontanels in neonates does have an impact on EEG measurements. The present work is an essential preamble to the numerical analysis of the corresponding EEG source reconstruction.

ano.nymous@ccsd.cnrs.fr.invalid (M Darbas), M Darbas

The magnetohydrodynamics laws govern the motion of a conducting fluid, such as blood, in an externally applied static magnetic field B_0. When an artery is exposed to a magnetic field, the blood charged particles are deviated by the Lorentz force, thus inducing electrical currents and voltages along the vessel walls and in the neighboring tissues. Such a situation may occur in several bio-medical applications: magnetic resonance imaging (MRI), magnetic drug transport and targeting, tissue engineering… In this paper, we consider the steady unidirectional blood flow in a straight circular rigid vessel with non-conducting walls, in the presence of an exterior static magnetic field. The exact solution of Gold (1962) (with the induced fields not neglected) is revisited. It is shown that the integration over a cross section of the vessel of the longitudinal projection of the Lorentz force is zero, and that this result is related to the existence of current return paths, whose contributions compensate each other over the section. It is also demonstrated that the classical definition of the shear stresses cannot apply in this situation of magnetohydrodynamic flow, because, due to the existence of the Lorentz force, the axisymmetry is broken.

ano.nymous@ccsd.cnrs.fr.invalid (Agnès Drochon), Agnès Drochon

We derive rates of contraction of posterior distributions on non-parametric models resulting from sieve priors. The aim of the study was to provide general conditions to get posterior rates when the parameter space has a general structure, and rate adaptation when the parameter is, for example, a Sobolev class. The conditions employed, although standard in the literature, are combined in a different way. The results are applied to density, regression, nonlinear autoregression and Gaussian white noise models. In the latter we have also considered a loss function which is different from the usual l2 norm, namely the pointwise loss. In this case it is possible to prove that the adaptive Bayesian approach for the l2 loss is strongly suboptimal and we provide a lower bound on the rate.

ano.nymous@ccsd.cnrs.fr.invalid (Julyan Arbel), Julyan Arbel

It has been proven that the knowledge of an accurate approximation of the Dirichlet-to-Neumann (DtN) map is useful for a large range of applications in wave scattering problems. We are concerned in this paper with the construction of an approximate local DtN operator for time-harmonic elastic waves. The main contributions are the following. First, we derive exact operators using Fourier analysis in the case of an elastic half-space. These results are then extended to a general three-dimensional smooth closed surface by using a local tangent plane approximation. Next, a regularization step improves the accuracy of the approximate DtN operators and a localization process is proposed. Finally, a first application is presented in the context of the On-Surface Radiation Conditions method. The efficiency of the approach is investigated for various obstacle geometries at high frequencies.

ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat

The main purpose of this paper is to investigate the strong approximation of the integrated empirical process. More precisely, we obtain the exact rate of the approximations by a sequence of weighted Brownian bridges and a weighted Kiefer process. Our arguments are based in part on the results of Komlós et al. (1975). Applications include the two-sample testing procedures together with the change-point problems. We also consider the strong approximation of the integrated empirical process when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial sum process representation of the integrated empirical process. Reference: Komlós, J., Major, P. and Tusnády, G. (1975). An approximation of partial sums of independent RV's and the sample DF. I. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 32, 111-131.

ano.nymous@ccsd.cnrs.fr.invalid (Sergio Alvarez-Andrade), Sergio Alvarez-Andrade

This paper deals with parameter and state estimation in a bounded-error context for uncertain dynamical aerospace models, when the input is considered optimized or not. In a bounded-error context, perturbations are assumed bounded but otherwise unknown. The parameters to be estimated are also considered bounded. The tools of the presented work are based on a guaranteed numerical set integration solver of ordinary differential equations combined with adapted set inversion computation. The main contribution of this work consists in developing procedures for parameter estimation whose performance is strongly tied to the system input. In this paper, a comparison with a classical non-optimized input is proposed.
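A toy analogue of bounded-error set inversion (in the spirit of SIVIA-type algorithms, not the paper's guaranteed ODE integration): bisect a prior parameter box and keep only the sub-boxes whose image is consistent with the measurement bounds. The model, bounds and tolerance below are invented.

```python
# Scalar parameter p enters a static model y = p**2; the measurement
# is only known to lie in [y_lo, y_hi]; we enclose all consistent p.
def image(box):
    lo, hi = box
    # Range of p**2 over [lo, hi]; an interval containing 0 reaches 0.
    cands = (lo * lo, hi * hi)
    return (0.0 if lo <= 0.0 <= hi else min(cands), max(cands))

def sivia(box, y_lo, y_hi, eps=1e-3):
    lo, hi = box
    f_lo, f_hi = image(box)
    if f_hi < y_lo or f_lo > y_hi:
        return []                    # box inconsistent: discard
    if f_lo >= y_lo and f_hi <= y_hi:
        return [box]                 # box fully consistent: keep
    if hi - lo < eps:
        return [box]                 # undecided but small enough: keep
    mid = 0.5 * (lo + hi)
    return (sivia((lo, mid), y_lo, y_hi, eps)
            + sivia((mid, hi), y_lo, y_hi, eps))

boxes = sivia((-3.0, 3.0), 1.0, 4.0)   # true solution: [-2,-1] U [1,2]
lo = min(b[0] for b in boxes)
hi = max(b[1] for b in boxes)
print(lo, hi)   # the kept boxes enclose both solution branches
```

The same keep/discard/bisect logic carries over when the image of a box is computed by guaranteed set integration of an ODE instead of a closed-form expression.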

ano.nymous@ccsd.cnrs.fr.invalid (Qiaochu Li), Qiaochu Li

A real-time algorithm for cardiac and respiratory gating, which requires only an ECG sensor, is proposed here. Three ECG electrodes are placed in such a manner that the modulation of the recorded ECG by the respiratory signal is maximal; hence, from a single signal we can achieve both cardiac and respiratory MRI gating. First, an off-line learning phase based on wavelet decomposition is run to compute an optimal QRS filter. Afterwards, on one hand the QRS filter is used to accomplish R peak detection, and on the other, a low-pass filtering process allows the retrieval of the respiration cycle, so that the image acquisition sequences are triggered by the R peaks only during the expiration phase.
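The two-stage logic, a QRS-matched filter for R peak detection plus low-pass filtering for the respiration component, can be sketched on synthetic data. The Gaussian template below is only an illustrative substitute for the wavelet-derived optimal QRS filter of the paper, and all signal parameters are invented.

```python
import numpy as np

fs = 200                               # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)

# Synthetic ECG: narrow Gaussian "QRS" spikes once per second,
# amplitude-modulated by a slow 0.25 Hz "respiration" component.
resp = 0.2 * np.sin(2 * np.pi * 0.25 * t)
ecg = sum(np.exp(-((t - (r + 0.5)) ** 2) / (2 * 0.01 ** 2)) for r in range(10))
signal = ecg * (1 + resp) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

# Stand-in "QRS filter": correlate with a QRS-shaped template and keep
# local maxima above a threshold as R peak detections.
tmpl = np.exp(-np.linspace(-3, 3, 9) ** 2)
det = np.correlate(signal, tmpl, mode="same")
peaks = np.flatnonzero((det > 0.5 * det.max())
                       & (det >= np.roll(det, 1)) & (det >= np.roll(det, -1)))
print(len(peaks))                      # 10 -- one detection per beat

# Respiration retrieval stand-in: a long moving average acts as the
# low-pass filter that recovers the slow modulation envelope.
kernel = np.ones(fs) / fs
baseline = np.convolve(np.abs(signal), kernel, mode="same")
```

A gating trigger would then fire on each detected R peak only while `baseline` indicates the expiration phase.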

ano.nymous@ccsd.cnrs.fr.invalid (D Abi-Abdallah), D Abi-Abdallah

Blood flow in high static magnetic fields induces elevated voltages that disrupt the ECG signal recorded simultaneously during MRI scans for synchronization purposes. This is known as the magnetohydrodynamic (MHD) effect; it increases the amplitude of the T wave, thus hindering correct R peak detection. In this paper, we present an algorithm for extracting an efficient reference signal from an ECG contaminated by the Nuclear Magnetic Resonance (NMR) environment, which performs a good separation of the R wave and the MHD artifacts. The proposed signal processing method is based on sub-band decomposition using the wavelet transform, and has been tested on human and small rodent ECG signals acquired during MRI scans at various magnetic field intensities. The results showed an almost flawless trigger generation in fields up to 4.7 Tesla during the three tested imaging sequences (GE, FSE and IRSE).

ano.nymous@ccsd.cnrs.fr.invalid (D Abi-Abdallah), D Abi-Abdallah

Blood flow in high static magnetic fields induces elevated voltages that contaminate the ECG signal recorded simultaneously during MRI scans for synchronization purposes. This is known as the magnetohydrodynamic (MHD) effect; it increases the amplitude of the T wave, thus hindering correct R peak detection. In this paper, we inspect the MHD-induced alterations of human ECG signals recorded in a 1.5 Tesla steady magnetic field and establish a primary characterization of the induced changes using time and frequency domain analysis. We also reexamine our previously developed real-time algorithm for MRI cardiac gating and determine that, with a minor modification, this algorithm is capable of achieving perfect detection even in the presence of strong MHD artifacts.

ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi Abdallah), Dima Abi Abdallah

Cardiac Magnetic Resonance Imaging (MRI) requires synchronization to overcome motion related artifacts caused by the heart’s contractions and the chest wall movements during respiration. Achieving good image quality necessitates combining cardiac and respiratory gating to produce, in real time, a trigger signal that sets off the consecutive image acquisitions. This guarantees that the data collection always starts at the same point of the cardiac cycle during the exhalation phase. In this paper, we present a real time algorithm for extracting a cardiac-respiratory trigger signal using only one, adequately placed, ECG sensor. First, an off-line calculation phase, based on wavelet decomposition, is run to compute an optimal QRS filter. This filter is used, afterwards, to accomplish R peak detection, while a low pass filtering process allows the retrieval of the respiration cycle. The algorithm’s synchronization capabilities were assessed during mice cardiac MRI sessions employing three different imaging sequences, and three specific wavelet functions. The prominent image enhancement gave a good proof of correct triggering. QRS detection was almost flawless for all signals. As for the respiration cycle retrieval it was evaluated on contaminated simulated signals, which were artificially modulated to imitate respiration. The results were quite satisfactory.

ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi-Abdallah), Dima Abi-Abdallah

Blood flow in a steady magnetic field has been of great interest over the past years. Many researchers have examined the effects of magnetic fields on velocity profiles and arterial pressure, and major studies focused on steady or sinusoidal flows. In this paper we present a solution for pulsed magnetohydrodynamic blood flow with a somewhat realistic physiological pressure wave obtained using a Windkessel lumped model. A pressure gradient is derived along a rigid vessel placed at the output of a compliant module which receives the ventricle outflow. Then, velocity profile and flow rate expressions are derived in the rigid vessel in the presence of a steady transverse magnetic field. As expected, results showed flow retardation and flattening. The adaptability of our solution approach allowed a comparison with previously addressed flow cases, and the calculations showed good coherence with those well-established solutions.
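A minimal sketch of the lumped compliant module feeding the rigid vessel: a two-element Windkessel (compliance C, peripheral resistance R) driven by a half-sine ventricular outflow, integrated by explicit Euler. All parameter values are illustrative only, and the MHD part of the solution is not reproduced.

```python
import numpy as np

# Two-element Windkessel: the compliant module receives the ventricular
# outflow Q(t) while the peripheral resistance drains it,
#     C dP/dt = Q(t) - P / R.
R, C = 1.0, 1.5                        # resistance, compliance (invented)
T_cycle, T_sys = 0.8, 0.3              # cardiac period, systolic duration (s)
dt = 1e-4

def inflow(time):
    """Half-sine ejection during systole, zero during diastole."""
    tau = time % T_cycle
    return 5.0 * np.sin(np.pi * tau / T_sys) if tau < T_sys else 0.0

P = 1.0
trace = []
for k in range(round(10 * T_cycle / dt)):   # 10 cardiac cycles
    P += dt * (inflow(k * dt) - P / R) / C
    trace.append(P)

trace = np.asarray(trace)
# After the initial transient, the pressure wave is periodic and
# pulsatile: it rises during ejection, decays exponentially in diastole.
last = trace[-round(T_cycle / dt):]
print(last.min(), last.max())
```

The pressure gradient driving the rigid-vessel MHD flow in the paper would be derived from this kind of periodic pressure wave.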

ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi Abdallah), Dima Abi Abdallah

The Karhunen-Loève decomposition (KLD), or proper orthogonal decomposition (POD), of bivariate functions is revisited in this work. We first investigate the truncation error for regular functions and try to improve and sharpen the bounds found in the literature. It turns out, however, that KL series expansions are more sensitive to the ability of the fields under approximation to be well represented by a small sum of products of functions of separated variables. We consider this very issue for some interesting fields that are solutions of partial differential equations, such as the transient heat problem and Poisson's equation. The main tool for stating approximation bounds is linear algebra. We show how the singular value decomposition underlying the KL expansion is connected to the spectrum of certain Gram matrices. Deriving estimates on the truncation error is thus strongly tied to the spectral properties of these Gram matrices, which are structured matrices with low displacement rank.
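The stated link between the KL/POD expansion and Gram matrices can be checked numerically: the Gram matrix of the snapshots has the squared singular values as its spectrum, and the squared truncation error of the rank-r partial sum equals the tail sum of those eigenvalues. The snapshot matrix below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
# Samples of a bivariate field u(x, t): rows index x, columns index t
# (a synthetic low-rank field plus a small perturbation).
U = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40)) \
    + 1e-6 * rng.standard_normal((60, 40))

u, s, vt = np.linalg.svd(U, full_matrices=False)

# The Gram matrix U^T U has the squared singular values as spectrum:
# this is the link between the KL expansion and Gram matrices.
lam = np.sort(np.linalg.eigvalsh(U.T @ U))[::-1]
print(np.allclose(s**2, lam))          # True

# Truncation error of the rank-r KL/POD partial sum: its squared
# Frobenius norm is exactly the tail sum of those eigenvalues.
r = 5
err = np.linalg.norm(U - (u[:, :r] * s[:r]) @ vt[:r])
print(np.isclose(err**2, lam[r:].sum()))   # True
```

Bounding the truncation error therefore reduces to bounding the eigenvalue tail of the Gram matrix, which is where its displacement structure enters.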

ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez

We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most of the physical models used by engineers derive from various additions and corrections to the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor's dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. The particularity is that both Neumann and Dirichlet boundary conditions are available on the DO tracer while the BOD density is free of any condition. Indeed, for real-life concerns, measurements of the dissolved oxygen are easy to obtain and to save. On the contrary, collecting data on the biochemical oxygen demand is a sensitive task and turns out to be a long process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem worth studying for its own interest, it can also be a mandatory step in other applications, such as identifying the location of pollution sources. The non-standard boundary conditions generate two difficulties on mathematical and computational grounds. They set up a severe coupling between both equations and they are the cause of ill-posedness of the data reconstruction problem. Existence and stability fail. Identifiability is therefore the only positive result one can seek; it is the central purpose of the paper. We end with some computational experiments to assess the capability of the mixed finite element method in recovering the missing data (on the biochemical oxygen demand).

ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez

The direct electrochemical reduction of UO2 solid pellets was carried out in LiF-CaF2 (+ 2 mass % Li2O) at 850°C. An inert gold anode was used instead of the usual reactive sacrificial carbon anode. In this case, oxidation of the oxide ions present in the melt yields O2 gas evolution at the anode. Electrochemical characterisations of UO2 pellets were performed by linear sweep voltammetry at 10 mV/s, and reduction waves associated with the direct reduction of the oxide were observed at a potential 150 mV more positive than the solvent reduction. Subsequently, galvanostatic electrolysis runs were carried out and the products were characterised by SEM-EDX, EPMA/WDS and XRD. In one of the runs, the uranium oxide was partially reduced and three phases were observed: non-reduced UO2 in the centre, pure metallic uranium in the external layer, and an intermediate phase representing the initial stage of reduction, taking place at the grain boundaries. In another run, the UO2 sample was fully reduced. Due to oxygen removal, the U matrix had a typical coral-like structure, characteristic of the pattern observed after the electroreduction of solid oxides.

ano.nymous@ccsd.cnrs.fr.invalid (Mathieu Gibilaro), Mathieu Gibilaro

The aim of this article is to explore the possibility of using a family of fixed finite element shape functions to solve a Dirichlet boundary value problem with an alternative variational formulation. The domain is embedded in a bounding box and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or with moving interfaces. This is not a new idea, but we analyze a special approach here. The main difficulty of the approach is that the associated quadratic form is not coercive and an inf-sup condition has to be checked. In dimension one, we prove that this formulation is well posed and we provide error estimates. Nevertheless, our proof, relying on explicit computations, is limited to that case, and we give numerical evidence in dimension two that the formulation does not provide a reliable method. We first add a regularization through a Nitsche term and observe that some instabilities still remain. We then introduce and justify a geometrical regularization. A reliable method is obtained using both regularizations.

ano.nymous@ccsd.cnrs.fr.invalid (Gaël Dupire), Gaël Dupire

The aim of this article is to explore the possibility of using a family of fixed finite element shape functions that do not match the domain to solve a boundary value problem with a Dirichlet boundary condition. The domain is embedded in a bounding box and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or with moving interfaces. The Nitsche method has been intensively applied in this setting. However, the Nitsche bilinear form is weighted with the mesh size h, and it therefore reflects a purely discrete point of view with no interpretation in terms of a continuous variational formulation associated with a boundary value problem. In this paper, we introduce an alternative to the Nitsche method which is associated with a continuous bilinear form. This extension comes with strong restrictions: it requires more regularity of the data than the usual method. We prove the well-posedness of our formulation and error estimates. We provide numerical comparisons with the Nitsche method.
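The standard Nitsche method used above as the point of comparison can be sketched in dimension one: a P1 discretization of -u'' = f on (0,1) with homogeneous Dirichlet data enforced weakly through boundary terms. This is a textbook Nitsche discretization, not the continuous formulation proposed in the paper, and the penalty value is an assumed choice:

```python
import numpy as np

n = 64                 # number of P1 elements on (0, 1)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
gamma = 10.0           # Nitsche penalty (assumed; must exceed the inverse constant)

# Standard P1 stiffness matrix on the whole interval, no node eliminated.
A = np.zeros((n + 1, n + 1))
for e in range(n):
    A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

# Nitsche terms at x = 0 and x = 1 (homogeneous Dirichlet data):
#   a(u, v) += -(dn u) v - (dn v) u + (gamma/h) u v   on the boundary,
# where dn is the outward normal derivative of the P1 function.
for i, j in [(0, 1), (n, n - 1)]:
    A[i, i] += -2.0 / h + gamma / h
    A[i, j] += 1.0 / h
    A[j, i] += 1.0 / h

# Manufactured solution u = sin(pi x), so f = pi^2 sin(pi x).
f = np.pi ** 2 * np.sin(np.pi * x)
b = h * f              # trapezoid (lumped) load vector
b[0] *= 0.5
b[n] *= 0.5

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Note the h-dependent weight gamma/h in the boundary terms: this is precisely the purely discrete feature that the continuous bilinear form of the paper is designed to avoid.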

ano.nymous@ccsd.cnrs.fr.invalid (Jean-Paul Boufflet), Jean-Paul Boufflet

This paper addresses a complex multi-physical phenomenon involving cardiac electrophysiology and hemodynamics. The purpose is to model and simulate a phenomenon that has been observed in MRI machines: in the presence of a strong magnetic field, the T-wave of the electrocardiogram (ECG) gets bigger, which may perturb ECG-gated imaging. This is due to a magnetohydrodynamic (MHD) effect occurring in the aorta. We reproduce this experimental observation through computer simulations on a realistic anatomy, with a three-compartment model: inductionless magnetohydrodynamic equations in the aorta, bidomain equations in the heart and electrical diffusion in the rest of the body. These compartments are strongly coupled and solved using finite elements. Several benchmark tests are proposed to assess the numerical solutions and the validity of some modeling assumptions. Then, ECGs are simulated for a wide range of magnetic field intensities (from 0 to 20 Tesla).

ano.nymous@ccsd.cnrs.fr.invalid (Vincent Martin), Vincent Martin

In order to optimize the performance of a diesel engine subject to legislative constraints on pollutant emissions, it is necessary to improve its design and to identify how the design parameters affect the engine's behaviour. One specificity of this work is that no physical model of the engine's behaviour exists under all possible operational conditions. A powerful strategy for engine modeling is to build a fast emulator based on carefully chosen observations, made according to an experimental design. In this paper, two Kriging models are considered. One is based on a geostatistical approach and the other corresponds to a Gaussian process metamodel approach. Our aim is to show that these two methods do not lead to the same results, particularly when "atypical" points are present in the database. More precisely, the statistical approach yields a good-quality model even when atypical data are present, whereas in this situation the geostatistical approach yields a poor-quality model. This behaviour is of fundamental importance for the problem of pollutant emissions, because the analysis of these atypical data, which are rarely erroneous, can supply precious information for engine tuning at the design stage.
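A minimal ordinary-Kriging predictor, in the spirit of the emulators discussed above, can be written in a few lines. The squared-exponential covariance, its lengthscale, and the toy data are assumed choices for illustration, not the models compared in the paper:

```python
import numpy as np

def gauss_cov(a, b, theta=0.3):
    # Squared-exponential covariance (assumed kernel choice).
    d = a[:, None] - b[None, :]
    return np.exp(-(d / theta) ** 2)

def ordinary_kriging(x_obs, y_obs, x_new):
    # Ordinary-kriging system: covariances bordered by the
    # unbiasedness constraint (weights sum to one).
    n = len(x_obs)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = gauss_cov(x_obs, x_obs) + 1e-10 * np.eye(n)  # tiny nugget
    K[:n, n] = 1.0
    K[n, :n] = 1.0
    rhs = np.vstack([gauss_cov(x_obs, x_new), np.ones((1, len(x_new)))])
    w = np.linalg.solve(K, rhs)
    return w[:n].T @ y_obs  # weighted combination of the observations

x_obs = np.linspace(0.0, 1.0, 8)
y_obs = np.sin(2 * np.pi * x_obs)
x_new = np.array([0.25, 0.5])
y_hat = ordinary_kriging(x_obs, y_obs, x_new)
```

The predictor interpolates the design points exactly (up to the nugget), which is exactly why atypical observations propagate into the emulator and why the two estimation approaches compared in the paper can react to them so differently.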

ano.nymous@ccsd.cnrs.fr.invalid (Sébastien Castric), Sébastien Castric

Nowadays, one of the greatest problems the earth has to face is pollution, which has led the European Union to pass stricter laws on pollutant emissions, resulting in ever tighter emission constraints. To take these constraints into account, automotive manufacturers are obliged to design more and more complex systems. The use of models to predict a system's behaviour, whether to make technical choices or to understand its functioning, has become very important over the last decade. This paper presents two-stage approaches for the prediction of NOx (nitrogen oxide) emissions, based on an ordinary Kriging method. In the first stage, the data are reduced by selecting signals through correlation studies and by using the fast Fourier transform. In the second stage, the Kriging method is used to estimate the NOx emissions under given conditions. Numerical results are presented and compared to highlight the effectiveness of the proposed methods.
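The first, data-reduction stage can be illustrated by a small sketch: a recorded signal is compressed by keeping only its dominant FFT coefficients. The synthetic signal and the number of retained coefficients below are invented for illustration, not taken from the engine database:

```python
import numpy as np

# Synthetic "engine signal": two tones plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = (np.sin(2 * np.pi * 5 * t)
          + 0.5 * np.sin(2 * np.pi * 12 * t)
          + 0.05 * rng.standard_normal(512))

# Keep only the k largest-magnitude FFT coefficients.
spec = np.fft.rfft(signal)
k = 8
keep = np.argsort(np.abs(spec))[-k:]
reduced = np.zeros_like(spec)
reduced[keep] = spec[keep]
approx = np.fft.irfft(reduced, n=512)

rel_err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
```

The k retained coefficients then replace the full signal as inputs to the second, Kriging stage, which is what makes the emulator cheap to fit and evaluate.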

ano.nymous@ccsd.cnrs.fr.invalid (El Hassane Brahmi), El Hassane Brahmi

A new theorem is provided to test the identifiability of discrete-time systems with polynomial nonlinearities. It extends to discrete-time systems the local state isomorphism approach developed for continuous-time systems. Two examples are provided to illustrate the approach.

ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett), Floriane Anstett

Enhancing the safety of high-temperature reactors (HTRs) relies on the quality of the fuel particles, which requires good knowledge of the microstructure of the four-layer particles designed to retain the fission products during irradiation and under accident conditions. This paper focuses on the intensive research work performed to characterize the micro- and nanostructure of each unirradiated layer (silicon carbide and pyrocarbon coatings). The analytical expertise developed in the 1970s has been recovered, and innovative advanced characterization methods have been developed to improve the process parameters and to ensure the reproducibility of the coating production.

ano.nymous@ccsd.cnrs.fr.invalid (D. Helary), D. Helary

Electron back-scattering diffraction (EBSD) can be successfully performed on SiC coatings for HTR fuel particles. EBSD grain maps obtained from thick and thin unirradiated samples are presented, along with pole figures showing textures and a chart showing the distribution of grain aspect ratios. This information is of great interest and contributes to improving the process parameters and ensuring the reproducibility of coatings.

ano.nymous@ccsd.cnrs.fr.invalid (D. Helary), D. Helary

We present an environment for the automatic generation of simulations, entirely based on XML technologies. The proposed description language makes it possible to describe mathematical objects such as systems of differential equations, systems of non-linear equations, partial differential equations in dimension 2, or simply curves and surfaces. It also allows the parameters on which these objects depend to be described. This language is software-independent, which guarantees the durability of the authors' work as well as its sharing and reuse. We also describe the architecture of a "compilation chain" that transforms these XML files into scripts and runs them in the Scilab software.
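The idea of the compilation chain can be sketched as follows. The XML tag names below are invented for illustration and do not reproduce the actual schema of the description language; a tiny XML description of an ODE is translated into a Scilab script:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML description of a simple decay ODE (tag names assumed).
doc = """<simulation name="decay">
  <parameter name="k" value="0.5"/>
  <ode variable="x" initial="1.0" rhs="-k*x"/>
</simulation>"""

root = ET.fromstring(doc)
lines = []
for p in root.findall("parameter"):
    # Each declared parameter becomes a Scilab assignment.
    lines.append(f'{p.get("name")} = {p.get("value")};')
ode = root.find("ode")
# Emit the right-hand side as a Scilab function and an ode() call.
lines.append(f'function dx = f(t, {ode.get("variable")})')
lines.append(f'  dx = {ode.get("rhs")};')
lines.append("endfunction")
lines.append(f'x0 = {ode.get("initial")}; x = ode(x0, 0, 0:0.1:10, f);')
script = "\n".join(lines)
```

Because the XML source is software-independent, the same description could be compiled by a different back end into another target language without touching the authors' files.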

ano.nymous@ccsd.cnrs.fr.invalid (Stéphane Mottelet), Stéphane Mottelet