Hanna Bacave
In industrial Computer-Assisted Engineering, it is common to deal with vector fields or multiple field variables. In this paper, different vector-valued extensions of the Empirical Interpolation Method (EIM) are considered. EIM has been shown to be a valuable tool for dimensionality reduction, reduced-order modeling of nonlinear problems and synthesis of families of solutions for parametric problems. Besides already existing vector-valued extensions, a new vector-valued EIM (the so-called VEIM approach) allowing interpolation on all the vector components is proposed and analyzed in this paper. It involves vector-valued basis functions, the same magic points shared by all the components, and linear combination matrices rather than scalar coefficients. The coefficient matrices are determined under the constraints of point-wise interpolation for all the components and exact reconstruction of the snapshots selected during the greedy iterative process. In the numerical experiments, various vector-valued approaches, including VEIM, are tested and compared on various one-, two- and three-dimensional problems. All methods exhibit robustness, stability and rather good convergence properties as soon as the Kolmogorov width of the dataset is not too large. Depending on the use case, a suitable and convenient method can be chosen among the different vector-valued EIM candidates.
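The greedy loop underlying scalar EIM, on which such vector-valued extensions build, can be sketched as follows. This is a generic textbook-style implementation, not the paper's code; the function name and the snapshot layout are our own choices.

```python
import numpy as np

def eim(snapshots, n_basis):
    """Greedy Empirical Interpolation Method (scalar version).

    snapshots: (n_points, n_snapshots) array of sampled functions.
    Returns the basis Q (n_points, n_basis) and the magic-point indices.
    """
    # First basis function: the snapshot with the largest sup-norm.
    j = np.argmax(np.max(np.abs(snapshots), axis=0))
    q = snapshots[:, j]
    p = int(np.argmax(np.abs(q)))          # first magic point
    Q, pts = [q / q[p]], [p]
    for _ in range(1, n_basis):
        # Interpolation matrix at the current magic points.
        B = np.array([[Qk[pt] for Qk in Q] for pt in pts])
        coeffs = np.linalg.solve(B, snapshots[pts, :])
        residual = snapshots - np.column_stack(Q) @ coeffs
        # Next snapshot: the worst approximated one; next magic point:
        # where its residual is largest in absolute value.
        j = np.argmax(np.max(np.abs(residual), axis=0))
        r = residual[:, j]
        p = int(np.argmax(np.abs(r)))
        Q.append(r / r[p])
        pts.append(p)
    return np.column_stack(Q), pts
```

By construction, the interpolant built on the magic points reproduces any snapshot exactly at those points, which is the interpolation property the abstract refers to.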
Florian de Vuyst
We propose a way to account for inspection errors in a particular framework. We consider a situation where the lifetime of a system depends essentially on a particular part. A deterioration of this part is regarded as an unacceptable state for the safety of the system, and a major renewal is deemed necessary. Thus the statistical analysis of the deterioration time distribution of this part is of primary interest for the preventive maintenance of the system. In this context, we faced the following problem. In the early life of the system, unwarranted renewals of the part are decided upon, caused by overly cautious behaviour. Such unnecessary renewals make the statistical analysis of deterioration time data difficult and can induce an underestimation of the mean life of the part. To overcome this difficulty, we propose to regard the problem as an incomplete data model. We present its estimation under the maximum likelihood methodology. Numerical experiments show that this approach eliminates the pessimistic bias in the estimation of the mean life of the part. We also present a Bayesian analysis of the problem, which can be useful in a small-sample setting.
Gilles Celeux
The present thesis constructs an alternative framework for online matching algorithms on large graphs. Using the configuration model to mimic the degree distributions of large networks, we are able to build algorithms based on local matching policies for nodes. Thus, we can predict and approximate the performance of a class of matching policies given the degree distributions of the initial network. Towards this goal, we use a generalization of the differential equation method to measure-valued processes. Throughout the text, we provide simulations and a comparison to the seminal work of Karp, Vazirani and Vazirani based on the prevailing viewpoint in online bipartite matching.
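For context, the Karp-Vazirani-Vazirani benchmark mentioned above is the "Ranking" policy for online bipartite matching. The following minimal sketch is our own illustration (the data layout and names are assumptions, not taken from the thesis): offline nodes receive a random priority, and each arriving online node is matched to its free neighbour of best priority.

```python
import random

def ranking_matching(graph, arrivals, seed=0):
    """KVV 'Ranking' policy sketch.

    graph: dict mapping each online node to its list of offline neighbours.
    arrivals: order in which online nodes arrive.
    Returns a dict {offline node: matched online node}.
    """
    rng = random.Random(seed)
    offline = {u for nbrs in graph.values() for u in nbrs}
    rank = {u: rng.random() for u in offline}   # random priorities
    match = {}
    for v in arrivals:
        free = [u for u in graph.get(v, []) if u not in match]
        if free:
            # Match v to its unmatched neighbour of smallest rank.
            match[min(free, key=rank.get)] = v
    return match
```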
Mohamed Habib Aliou Diallo Aoudi
The main goal of this research is to develop a data-driven reduced-order model (ROM) strategy from high-fidelity (HF) simulation result data of a full-order model (FOM). The goal is to predict, at lower computational cost, the time evolution of solutions of Fluid-Structure Interaction (FSI) problems. For some FSI applications like tire/water interaction, the FOM solid model (often chosen as quasistatic) can take far more computational time than the HF fluid one. In this context, for the sake of performance, one could derive a reduced-order model only for the structure and couple a partitioned HF fluid solver with a ROM solid one. In this paper, we present a data-driven partitioned ROM on a study case involving a simplified 1D-1D FSI problem representing an axisymmetric elastic model of an arterial vessel coupled with an incompressible fluid flow. We derive a purely data-driven solid ROM for FOM fluid-ROM structure partitioned coupling and present early results.
Azzeddine Tiba
Hidden Markov models (HMMs) are used in many different fields to study the dynamics of a process that cannot be directly observed. However, in some cases, the dependency structure of an HMM is too simple to describe the dynamics of the hidden process. In particular, in some applications in finance or in ecology, the transition probabilities of the hidden Markov chain can also depend on the current observation. In this work we are interested in extending the classical HMM to this situation. We define a new model, referred to as the Observation-Driven Hidden Markov Model (OD-HMM). We present a complete study of the general non-parametric OD-HMM with discrete and finite state spaces (hidden and observed variables). We study its identifiability. Then we study the consistency of the maximum likelihood estimators. We derive the associated forward-backward equations for the E-step of the EM algorithm. The quality of the procedure is tested on simulated data sets. Finally, we illustrate the use of the model in an application to the dynamics of annual plants. This work sets the theoretical and practical foundations for a new framework that could be further extended, on the one hand to the non-parametric context to simplify estimation, and on the other hand to hidden semi-Markov models for more realism.
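The observation-driven mechanism, in which the transition matrix used at each step is indexed by the previous observation, can be illustrated with a small forward recursion. This is our own generic sketch (names and layout are assumptions), not the paper's estimation code.

```python
import numpy as np

def odhmm_loglik(y, A, B, pi):
    """Log-likelihood of an observation-driven HMM via the forward recursion.

    y  : observed sequence (integers in {0,...,K-1})
    A  : (K, N, N) transition matrices; A[y_prev] is used after observing y_prev
    B  : (N, K) emission probabilities
    pi : (N,) initial distribution of the hidden state
    """
    alpha = pi * B[:, y[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()                     # normalise for stability
    for t in range(1, len(y)):
        # The transition matrix is selected by the previous observation.
        alpha = (alpha @ A[y[t - 1]]) * B[:, y[t]]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll
```

Taking all A[y] equal recovers the classical HMM forward algorithm, which makes the extension explicit.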
Hanna Bacave
We deploy artificial neural networks to unfold neutron spectra from measured energy-integrated quantities. These neutron spectra are an important parameter allowing computation of the absorbed dose and the kerma, serving radiation protection in addition to nuclear safety. The architectures are inspired by convolutional neural networks. The first architecture is made up of residual transposed-convolution blocks, while the second is a modified version of the U-net architecture. A large and balanced dataset is simulated following "realistic" physical constraints to train the architectures efficiently. Results show highly accurate prediction of neutron spectra ranging from thermal up to fast. The dataset processing, the attention paid to performance metrics and the hyperparameter optimization underpin the robustness of the architectures.
Maha Bouhadida
We study the time evolution of an increasing stochastic process governed by a first-order stochastic differential system. This defines a particular piecewise deterministic Markov process (PDMP). We consider a Markov renewal process (MRP) associated with the PDMP and its Markov renewal equation (MRE), which is solved in order to obtain a closed-form expression of the transition function of the PDMP. This is then applied in the framework of survival analysis to evaluate the reliability function of a given system. We give a numerical illustration and compare the analytical solution with the Monte Carlo estimator.
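As a rough illustration of the Monte Carlo side of such a comparison, here is a toy reliability estimator for an increasing PDMP whose growth rate is modulated by a two-state jump process. The dynamics and parameter names are illustrative assumptions of ours, not the model of the paper.

```python
import numpy as np

def reliability_mc(t, threshold, rates, speeds, n_sim=10_000, seed=0):
    """Monte Carlo estimate of R(t) = P(X_t < threshold) for a toy PDMP.

    X grows linearly at speed speeds[z] while a two-state Markov jump
    process z switches with exponential holding times of mean 1/rates[z].
    """
    rng = np.random.default_rng(seed)
    alive = 0
    for _ in range(n_sim):
        x, z, clock = 0.0, 0, 0.0
        while clock < t and x < threshold:
            hold = rng.exponential(1.0 / rates[z])
            dt = min(hold, t - clock)      # do not integrate past time t
            x += speeds[z] * dt
            clock += dt
            z = 1 - z                      # regime switch
        alive += x < threshold
    return alive / n_sim
```

A closed-form transition function, when available as in the paper, removes the sampling error inherent in such an estimator.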
Julien Chiquet
In this paper, we use a particular piecewise deterministic Markov process (PDMP) to model the evolution of a degradation mechanism that may arise in various structural components, namely, the fatigue crack growth. We first derive some probability results on the stochastic dynamics with the help of Markov renewal theory: a closed-form solution for the transition function of the PDMP is given. Then, we investigate some methods to estimate the parameters of the dynamical system, involving Bogolyubov's averaging principle and maximum likelihood estimation for the infinitesimal generator of the underlying jump Markov process. Numerical applications on a real crack data set are given.
Julien Chiquet
During a severe accident in a nuclear reactor, extreme temperatures may be reached (T > 2500 K). In these conditions, the nuclear fuel may react with the Zircaloy cladding and then with the steel vessel, forming a mixture of solid and liquid phases called in-vessel corium. In the worst scenario, this mixture may penetrate the vessel and reach the concrete underneath the reactor. In order to develop the TAF-ID thermodynamic database (www.oecd-nea.org/science/taf-id) on nuclear fuels and to predict the high-temperature behaviour of the corium + concrete system, new high-temperature thermodynamic data are needed. The LM2T at the CEA Saclay centre has started an experimental campaign of phase equilibria measurements at high temperature (up to 2600 K) on corium sub-systems of interest. In particular, a heat treatment at 2500 K has been performed on two prototypic ex-vessel corium samples (within the U-Zr-Al-Ca-Si-O system) with different amounts of CaO and SiO$_2$. The results show that, depending on the SiO$_2$ content, the final configuration of the samples can be significantly different. The sample with the higher CaO content showed a dendritic structure representative of a single quenched liquid phase, whilst the sample richer in SiO$_2$ exhibited a microstructure suggesting the presence of a liquid miscibility gap. Furthermore, a new laser-heating setup has been designed. This technique allows measurements at very high temperature (T > 3000 K) while limiting interactions between the sample and its surroundings.
Andrea Quaini
Motivated by a wide range of assemble-to-order systems and collaborative-economy applications, we introduce a stochastic matching model on hypergraphs and multigraphs, extending the model introduced by Mairesse and Moyal (2016). In this thesis, stochastic matching models on general graph structures are defined as follows: a compatibility structure S = (V, S) is given, consisting of a set of nodes V representing the classes of items and a set of edges S specifying the possible matches between classes. Items arrive at the system at random times, in a sequence (assumed to be i.i.d.) over the classes of V, and request to be matched according to their compatibility in S: by groups of two or more in the hypergraphical case, and by groups of two, with matches allowed between items of the same class, in the multigraphical case. Unmatched items are stored in the system and wait for a future compatible item; as soon as they are matched, they leave the system together. Upon arrival, an item may find several possible matches; which items leave the system then depends on a matching policy to be specified. We study the stability of the stochastic matching model on hypergraphs for different hypergraphical topologies, and then the stability of the stochastic matching model on multigraphs, using the maximal subgraph and minimal blow-up to delimit the stability region.
Youssef Rahmé
This work is part of a general study on the long-term safety of the geological repository of nuclear wastes. A diffusion equation with a moving boundary in one dimension is introduced and studied. The model describes some mechanisms involved in corrosion processes at the surface of carbon steel canisters in contact with a claystone formation. The main objective of the paper is to prove the existence of global weak solutions to the problem. For this, a semi-discrete in time minimizing movements scheme à la De Giorgi is introduced. First, the existence of solutions to the scheme is established and then, using a priori estimates, it is proved that as the time step goes to zero these solutions converge up to extraction towards a weak solution to the free boundary model.
Benoît Merlet
Motivation: Comparing single-stranded nucleic acids (ssNAs) secondary structures is fundamental when investigating their function and evolution and predicting the effect of mutations on their structures. Many comparison metrics exist, although they are either too elaborate or not sensitive enough to distinguish close ssNAs structures. Results: In this context, we developed AptaMat, a simple and sensitive algorithm for ssNAs secondary structure comparison, based on matrices representing the ssNAs secondary structures and a metric built upon the Manhattan distance in the plane. We applied AptaMat to several examples and compared the results to those obtained by the most frequently used metrics, namely the Hamming distance and RNAdistance, and by a recently developed image-based approach. We showed that AptaMat is able to discriminate between similar sequences, outperforming all the other metrics considered here. In addition, we showed that AptaMat was able to correctly classify 14 RFAM families within a clustering procedure.
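The idea of comparing structures through points in the plane and Manhattan distances can be illustrated with a toy dot-bracket comparison. Note this is a simplified sketch of the principle only, not the actual AptaMat algorithm.

```python
import numpy as np

def pairs(db):
    """Base pairs (i, j) from a dot-bracket string."""
    stack, out = [], []
    for i, c in enumerate(db):
        if c == '(':
            stack.append(i)
        elif c == ')':
            out.append((stack.pop(), i))
    return out

def structure_distance(db1, db2):
    """Toy metric: mean Manhattan distance from each base pair of one
    structure to the closest pair of the other, symmetrised."""
    p1, p2 = pairs(db1), pairs(db2)
    if not p1 or not p2:
        return float('inf')
    def d(a, b):
        return np.mean([min(abs(i - k) + abs(j - l) for k, l in b)
                        for i, j in a])
    return (d(p1, p2) + d(p2, p1)) / 2
```

Identical structures are at distance 0, and moving a single pair by one position changes the distance only slightly, which is the sensitivity property the abstract emphasises.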
Thomas Binet
In this dissertation we are concerned with semiparametric models. These models are prominent in mathematical statistics owing to their scientific utility and their intriguing theoretical complexity. In the first part of the thesis, we consider the problem of estimating a parameter θ, in Banach spaces, maximizing some criterion function which depends on an unknown nuisance parameter h, possibly infinite-dimensional. We show that the m out of n bootstrap, in a general setting, is weakly consistent under conditions similar to those required for weak convergence of non-smooth M-estimators. In this framework, delicate mathematical derivations are required to cope with estimators of the nuisance parameters inside non-smooth criterion functions. We then investigate an exchangeable weighted bootstrap for function-valued estimators defined as a zero point of a function-valued random criterion function. The main ingredient is the use of a differential identity that applies when the random criterion function is linear in terms of the empirical measure. A large number of bootstrap resampling schemes emerge as special cases of our settings. Examples of applications from the literature are given to illustrate the generality and the usefulness of our results. The second part of the thesis is devoted to statistical models with multiple change-points. The main purpose of this part is to investigate the asymptotic properties of semiparametric M-estimators with non-smooth criterion functions of the parameters of multiple change-points models, for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters common to all segments. Consistency of the semiparametric M-estimators of the change-points is established and the rate of convergence is determined. The asymptotic normality of the semiparametric M-estimators of the parameters of the within-segment distributions is established under quite general conditions. We finally extend our study to the censored data framework. We investigate the performance of our methodologies for small samples through simulation studies.
Anouar Abdeldjaoued Ferfache
Adverse Outcome Pathways (AOPs) are increasingly used to support the integration of in vitro data in hazard assessment for chemicals. Quantitative AOPs (qAOPs) use mathematical models to describe the relationship between key events (KEs). In this paper, data obtained in three cell lines, LHUMES, HepG2 and RPTEC/TERT1, using similar experimental protocols, were used to calibrate a qAOP of mitochondrial toxicity for two chemicals, rotenone and deguelin. The objectives were to determine whether the same qAOP could be used for the three cell types, and to test chemical-independence by cross-validation with a dataset obtained on eight other chemicals in LHUMES cells. Repeating the calibration approach for both chemicals in the three cell lines highlighted various practical difficulties. Even when the same readouts of KEs are measured, the mathematical functions used to describe the key event relationships may not be the same. Cross-validation in LHUMES cells was attempted by estimating chemical-specific potency at the molecular initiating event and using the rest of the calibrated qAOP to predict downstream KEs: the toxicity of azoxystrobin, carboxine, mepronil and thifluzamide was underestimated. Selection of the most relevant readouts and accurate characterization of the molecular initiating event for cross-validation are critical when designing in vitro experiments targeted at calibrating qAOPs.
Cleo Tebby
In this paper we analyse a finite volume scheme for a nonlocal version of the Shigesada-Kawasaki-Teramoto (SKT) cross-diffusion system. We prove the existence of solutions to the scheme, derive qualitative properties of the solutions, and prove its convergence. The proofs rely on a discrete entropy-dissipation inequality, discrete compactness arguments, and a novel adaptation of the so-called duality method at the discrete level. Finally, thanks to numerical experiments, we investigate the influence of the nonlocality in the system: on the convergence properties of the scheme, on its approximation of the local system, and on the development of diffusive instabilities.
Maxime Herda
In this paper, we investigate the asymptotic properties of Le Cam's one-step estimator for weak Fractionally AutoRegressive Integrated Moving-Average (FARIMA) models. For these models, the noises are uncorrelated but not necessarily independent, nor martingale difference errors. We show, under some regularity assumptions, that the one-step estimator is strongly consistent and asymptotically normal with the same asymptotic variance as the least squares estimator. We show through simulations that the proposed estimator reduces computational time compared with the least squares estimator. An application to providing remotely computed indicators for time series is proposed.
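Le Cam's one-step principle itself is easy to illustrate on a toy i.i.d. model, far simpler than the weak FARIMA setting of the paper: starting from any preliminary consistent estimator, a single Newton step on the log-likelihood already attains MLE-like accuracy. The example below (an exponential sample, with names of our own choosing) is purely illustrative.

```python
import numpy as np

def one_step(theta0, score, info):
    """Le Cam's one-step update: one Newton step from a preliminary
    consistent estimator theta0, using the score and Fisher information."""
    return theta0 + score(theta0) / info(theta0)

# Toy example: estimating the rate of an exponential sample.
rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 2.0, size=10_000)     # true rate is 2.0
n = len(x)
score = lambda t: n / t - x.sum()                   # d/dt of the log-likelihood
info = lambda t: n / t**2                           # Fisher information
theta0 = np.log(2) / np.median(x)                   # crude consistent start
theta1 = one_step(theta0, score, info)              # one-step estimator
```

Here the one-step estimator lands much closer to the maximum likelihood estimate 1/mean(x) than the median-based starting point, at the cost of a single derivative evaluation; this is the computational saving the abstract refers to.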
Samir Ben Hariz
Compressible multi-material flows are omnipresent in scientific and industrial applications: from supernova explosions in space and high-speed flows in jet and rocket propulsion, to underwater explosions and vapor explosions in post-accident situations in nuclear reactors, their applications cover almost all aspects of classical fluid physics. In the numerical simulation of these flows, interfaces play a crucial role. A poor numerical resolution of the interfaces can make it very difficult to account for physics such as material separation, the location of shocks and contact discontinuities, and the transfer of mass, momentum and heat between different materials/phases. Owing to this importance, sharp interface capturing remains a very active area of research in computational physics. To address this problem, in this paper we focus on the Interface Capturing (IC) strategy and make use of a newly developed Diffuse Interface Method (DIM) called Multidimensional Limiting Process-Upper Bound (MLP-UB). Our analysis shows that this method is easy to implement, is easily extendable to multiple space dimensions, can deal with any number of material interfaces, and produces sharp shape-preserving interfaces, along with their accurate interaction with shocks and contact discontinuities. Numerical experiments show very good results even over rather coarse meshes.
Shambhavi Nandan
In this article, we estimate, on aggregate data, the price elasticities of international trade for six large developed countries: France, Germany, Italy, Spain, the United Kingdom and the United States. These estimates update the work of Ducoudré and Heyer (2014). While they mainly rely on national accounts data, the foreign demand addressed to each country comes from a new database built at the OFCE, tracing world trade flows and prices across 43 geographical zones. This database is no longer limited to goods flows, as was the case in our earlier work, but now includes trade in services, which has remained very dynamic over the recent period and accounts for an ever larger share of world trade. Our estimates show that, in terms of export volumes, Italy and Spain are the two countries most sensitive to a change in relative prices. As for the price elasticities of export prices, the United States stands out, with an elasticity of 0.23, well below those estimated for the other countries, which lie around 0.5, reflecting the market power of American firms. On the import side, Spain and the United Kingdom have the highest price elasticities (0.92 and 0.99, respectively). These two countries also experience the largest decrease in import volumes, compared with the other countries studied, following a 10% depreciation of their currency against their competitors, once the adjustment of import prices is taken into account.
Bruno Ducoudre
For a system, a priori identifiability is a theoretical property depending only on the model, and it guarantees that the model parameters can be uniquely determined from observations. This paper provides a survey of the numerous definitions of a priori identifiability given in the literature, for both deterministic continuous- and discrete-time models. A classification is made by distinguishing analytical and algebraic definitions, as well as local and global ones. Moreover, this paper provides an overview of the distinct methods to test parameter identifiability. They are classified into the so-called output-equality approaches, local state isomorphism approaches and differential algebra approaches. A few examples are detailed to illustrate the methods and complete this survey.
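The output-equality approach can be illustrated symbolically on a one-dimensional toy model. The model and code below are illustrative assumptions of ours, not taken from the survey: two parameter sets producing the same output for all times must coincide, which is checked here by equating the output and its derivative at t = 0.

```python
import sympy as sp

# Toy model: x' = -a x, x(0) = 1, observed output y(t) = b x(t) = b exp(-a t).
a, b, a2, b2, t = sp.symbols('a b a2 b2 t', positive=True)
y = b * sp.exp(-a * t)
y2 = b2 * sp.exp(-a2 * t)

# Output-equality approach: identical outputs for all t force equality of
# the Taylor coefficients; y(0) and y'(0) suffice for this model.
eqs = [sp.Eq(y.subs(t, 0), y2.subs(t, 0)),
       sp.Eq(sp.diff(y, t).subs(t, 0), sp.diff(y2, t).subs(t, 0))]
sol = sp.solve(eqs, [a2, b2], dict=True)
# A unique solution {a2: a, b2: b} means (a, b) is globally identifiable.
```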
Floriane Anstett-Collin
CRF19 is a recombinant form of HIV-1 subtypes D, A1 and G, which was first sampled in Cuba in 1999 but was already present there in the 1980s. CRF19 has been reported almost uniquely in Cuba, where it accounts for ∼25% of new HIV-positive patients and causes rapid progression to AIDS (∼3 years). We analyzed a large data set comprising ∼350 pol and env sequences sampled in Cuba over the last 15 years and ∼350 from the Los Alamos database. This data set contained both CRF19 sequences (∼315) and A1, D and G sequences. We performed and combined analyses for the three A1, G and D regions, using fast maximum likelihood approaches, including: (1) phylogeny reconstruction, (2) spatio-temporal analysis of the virus spread, and ancestral character reconstruction for (3) transmission mode and (4) drug resistance mutations (DRMs). We verified these results with a Bayesian approach. This allowed us to gain new insights into the origin and transmission patterns of CRF19. We showed that CRF19 recombined between 1966 and 1977, most likely in the Cuban community stationed in the Congo region. We further investigated the spread of CRF19 at the level of the Cuban provinces, and discovered that the epidemic started in the 1970s, most probably in Villa Clara, that it was at first carried by heterosexual transmission, and that it then quickly spread in the 1980s within the “men having sex with men” (MSM) community, with multiple transmissions back to heterosexuals. The analysis of the transmission patterns of common DRMs found very few resistance transmission clusters. Our results show a very early introduction of CRF19 in Cuba, which could explain its local epidemiological success. Ignited by a major founder event, the epidemic then followed a pattern similar to other subtypes and CRFs in Cuba. The reason for the short time to AIDS remains to be understood and requires specific surveillance, in Cuba and elsewhere.
Anna Zhukova
We extend the general stochastic matching model on graphs introduced in [13] to matching models on multigraphs, that is, graphs with self-loops. The evolution of the model can be described by a discrete-time Markov chain whose positive recurrence is investigated. Necessary and sufficient stability conditions are provided, together with the explicit form of the stationary probability in the case where the matching policy is 'First Come, First Matched'.
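The 'First Come, First Matched' policy is straightforward to simulate. The sketch below uses a data layout of our own choosing: compatibilities are a set of frozensets, so that a self-loop on class c is simply the single-class edge {c}.

```python
from collections import deque

def fcfm(arrivals, edges):
    """Simulate 'First Come, First Matched' on a (multi)graph.

    arrivals: iterable of item classes, in arrival order.
    edges: set of frozensets of compatible class pairs; a self-loop on
    class c is encoded as frozenset({c}).
    Returns the list of matched pairs and the final unmatched queue.
    """
    queue = deque()        # unmatched items, oldest first
    matched = []
    for k, c in enumerate(arrivals):
        for item in queue:                       # scan oldest to newest
            if frozenset((item[1], c)) in edges:
                queue.remove(item)               # match with the oldest
                matched.append((item, (k, c)))   # compatible item
                break
        else:
            queue.append((k, c))
    return matched, list(queue)
```

Note that frozenset((c, c)) collapses to {c}, so self-loops need no special-casing; this mirrors how the multigraph extension subsumes the simple-graph model.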
Jocelyn Begeot
We address the problem of unsupervised domain adaptation under the setting of generalized target shift (where both class-conditional and label shifts occur). We show that in this setting, for good generalization, it is necessary to learn with similar source and target label distributions and to match the class-conditional probabilities. For this purpose, we propose an estimation of the target label proportions by blending mixture estimation and optimal transport. This estimation comes with theoretical guarantees of correctness. Based on the estimation, we learn a model by minimizing an importance-weighted loss and a Wasserstein distance between weighted marginals. We prove that this minimization allows matching the class-conditionals under mild assumptions on their geometry. Our experimental results show that our method performs better on average than competing approaches across a range of domain adaptation problems, including digits, VisDA and Office. Code for this paper is available at \url{https://github.com/arakotom/mars_domain_adaptation}.
Alain Rakotomamonjy
Internet of Things (IoT) applications using sensors and actuators raise new privacy-related threats, such as driver and vehicle tracking and profiling. These threats can be addressed by developing adaptive and context-aware privacy protection solutions that face the environmental constraints (memory, energy, communication channel, etc.) which limit the applicability of cryptographic schemes. This paper proposes a privacy-preserving solution in the Intelligent Transportation Systems (ITS) context, relying on a game-theoretic model between two actors (data holder and data requester) using an incentive motivation against a privacy concession, or alternatively leading an active attack. We describe the game elements (actors, roles, states, strategies and transitions) and find an equilibrium point reaching a compromise between privacy concessions and incentive motivation. Finally, we present numerical results to analyze and evaluate the game-theoretic formulation.
Arbia Riahi Sfar
We present here results on the chemical composition and size distribution of aerosols released during laser cutting of two types of fuel debris simulants (Ex-Vessel and In-Vessel scenarios), in air and underwater conditions, in the context of the Fukushima Daiichi dismantling. The aerosols systematically have an aerodynamic mass median diameter below 1 μm, with particle sizes generally between 60 nm and 160 nm under air cutting conditions and larger diameters (300-400 nm) for the underwater experiments. Regarding the chemical composition, iron, chromium and nickel together account for more than 50 % of the samples, whereas the surrogate of uranium (hafnium) is undetectable. When the compositions are transposed to radioactivity, taking into account radioisotope inventories 10 years after the accident, it is clearly evidenced that the radioactivity is carried by smaller particles in the air condition tests (median size around 100 nm) than underwater (median size around 400 nm): 50 % of the radioactivity is present in particles below 90 nm, and 99 % below 950 nm. Caesium carries the largest part of the radioactivity at all sizes below 1 μm in the case of the Ex-Vessel fuel debris simulant. For the In-Vessel simulant, the aerosol median size for the radioactivity is situated around 100 nm, with 59 % of the radioactivity carried by strontium, 17 % by barium, 16 % by minor actinides (modelled by cerium) and 7 % by caesium. For sizes above 1.6 μm, cerium, representing alpha emitters (surrogate of plutonium), is almost the only radioactivity-bearing element (96–97 % of the radioactivity). The data produced here could already be used for modelling, or for designing strategies to implement laser cutting in situ for fuel debris retrieval, together with the associated safety strategies.
Claire Dazon
In the context of the dismantling of the Fukushima Daiichi reactors, several projects have been funded by the Japanese government to prepare the corium retrieval operations. Within this framework, a joint study between ONET Technologies and the CEA and IRSN laboratories demonstrated the feasibility of using the laser cutting technique and provided an estimate of the aerosol source term generated in this way. Two corium simulants, synthesised and characterised by CEA-Cadarache, underwent laser cutting tests in air and underwater in the DELIA facility at CEA Saclay, and the emitted aerosols were characterised by IRSN. The characterisation of the emitted particles in terms of concentration and size distribution provides information for predicting, in particular, particle transport and deposition, but knowledge of the chemical composition by size class is necessary for better management of occupational and environmental risks. This article presents the results concerning the characterisation of the chemical composition of the aerosol from a corium simulant, under laser cutting in air, together with the associated size distribution.
Emmanuel Porcheron
As part of a multi-year programme, test-pit campaigns were carried out on both slopes of the Petit-Saint-Bernard pass (2188 m, Western Alps), between 750 and 3000 m of altitude. The working method sets aside surface surveys in favour of numerous hand-dug test pits, located in selected topographic contexts and carried down to the base of the Holocene fills. The results obtained document, over the long term, the evolution of the pedo-sedimentary dynamics and the frequentation of the different altitudinal belts. The significance of the archaeological data collected is discussed with respect to the state of knowledge in a comparison area covering the neighbouring valleys of the Western Alps, to existing settlement models, and to the taphonomic indications provided by the pedo-sedimentary study. A programme of complementary analyses is intended to clarify the context, the taphonomy and the functional status
Pierre-Jérôme Rey
This paper introduces a new approach for forecasting solar radiation series at a given station at very short time scales. We build a multivariate model using a few stations (three stations) separated by irregular distances ranging from 26 km to 56 km. The proposed model is a spatio-temporal vector autoregressive (VAR) model specifically designed for the analysis of spatially sparse spatio-temporal data. It differs from classic linear models in using spatial and temporal parameters, the available predictors being the lagged values at each station. A spatial structure of stations is defined by the sequential introduction of predictors into the model. Moreover, an iterative strategy in the fitting process selects the necessary stations, removing uninteresting predictors and selecting the optimal order p. We study the performance of this model. The error metric, the relative root mean squared error (rRMSE), is presented at different short time scales. Moreover, we compare the results of our model with the simple and well-known persistence model and with those found in the literature.
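A plain least-squares fit of such a lagged-predictor VAR can be sketched as follows. This is a generic implementation of ours, without the station-selection strategy of the paper; the function names and the simulated example are assumptions.

```python
import numpy as np

def fit_var(X, p):
    """Least-squares fit of a VAR(p): X_t = c + sum_k A_k X_{t-k} + e_t.

    X: (T, n_stations) array of the series at each station.
    Returns the intercept c (n,) and coefficient matrices A (p, n, n).
    """
    T, n = X.shape
    rows = []
    for t in range(p, T):
        # Regressors: a constant plus the lagged values at every station.
        rows.append(np.concatenate([[1.0]] + [X[t - k] for k in range(1, p + 1)]))
    Z = np.array(rows)                                  # (T-p, 1 + p*n)
    theta, *_ = np.linalg.lstsq(Z, X[p:], rcond=None)   # (1 + p*n, n)
    c = theta[0]
    A = theta[1:].reshape(p, n, n).transpose(0, 2, 1)   # A[k-1][i, j]: lag k,
    return c, A                                         # station j -> station i

def forecast(c, A, history):
    """One-step-ahead forecast from the last p observations (newest last)."""
    p = A.shape[0]
    return c + sum(A[k - 1] @ history[-k] for k in range(1, p + 1))
```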
ano.nymous@ccsd.cnrs.fr.invalid (Maïna André), Maïna André
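As a rough illustration of the spatio-temporal VAR idea described above, the sketch below fits lag matrices by ordinary least squares on multi-station series. The function names and the plain least-squares fit (with no station-selection or order-selection loop) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_var_lagged(series, p):
    """Fit a VAR(p) model by ordinary least squares.

    series: (T, k) array of k station measurements over T time steps.
    Returns a list of p matrices A_1..A_p, each (k, k), such that
    x_t ~ A_1 x_{t-1} + ... + A_p x_{t-p}.
    """
    T, k = series.shape
    # Each row of X stacks the lagged vectors [x_{t-1}, ..., x_{t-p}]
    X = np.hstack([series[p - j:T - j] for j in range(1, p + 1)])
    Y = series[p:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # Split the stacked coefficients into one (k, k) matrix per lag
    return [coef[j * k:(j + 1) * k].T for j in range(p)]

def forecast_one_step(history, A_list):
    """One-step-ahead forecast from the last p observations (rows of history)."""
    return sum(A @ history[-1 - j] for j, A in enumerate(A_list))
```

The paper's iterative station-selection strategy and the rRMSE evaluation would sit on top of such a fit.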
We introduce the binacox, a prognostic method to deal with the problem of detecting multiple cut-points per feature in a multivariate setting where a large number of continuous features are available. The method is based on the Cox model and combines one-hot encoding with the binarsity penalty, which uses total-variation regularization together with an extra linear constraint, and enables feature selection. Original nonasymptotic oracle inequalities for prediction (in terms of Kullback-Leibler divergence) and estimation with a fast rate of convergence are established. The statistical performance of the method is examined in an extensive Monte Carlo simulation study, and then illustrated on three publicly available genetic cancer datasets. On these high-dimensional datasets, our proposed method significantly outperforms state-of-the-art survival models regarding risk prediction in terms of the C-index, with a computing time orders of magnitude faster. In addition, it provides powerful interpretability from a clinical perspective by automatically pinpointing significant cut-points in relevant variables.
ano.nymous@ccsd.cnrs.fr.invalid (Simon Bussy), Simon Bussy
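The one-hot encoding step that binacox builds on can be sketched as follows. The quantile-based choice of candidate cut-points and the function name `one_hot_bins` are illustrative assumptions; the binarsity penalty and the Cox model fit are not shown.

```python
import numpy as np

def one_hot_bins(x, n_bins):
    """One-hot encode a continuous feature using quantile bins.

    x: 1-D array of feature values.
    Returns an (n, n_bins) binary matrix whose rows sum to 1, together
    with the interior bin edges, i.e. the candidate cut-points.
    """
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))[1:-1]
    # Index of the bin each value falls into (0 .. n_bins-1)
    idx = np.searchsorted(edges, x, side="right")
    out = np.zeros((len(x), n_bins))
    out[np.arange(len(x)), idx] = 1.0
    return out, edges
```

A penalized Cox regression on these binary columns, with total-variation regularization across adjacent bins, would then select which cut-points carry a real jump in risk.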
The γ-irradiation of a biphasic system composed of tri-n-butylphosphate in hydrogenated tetrapropylene (TPH) in contact with palladium(II) nitrate in nitric acid aqueous solution led to the formation of two precipitates. A thorough characterization of these solids was performed by means of various analytical techniques including X-Ray Diffraction (XRD), Thermal Gravimetric Analysis coupled with Differential Scanning Calorimetry (TGA-DSC), X-ray Photoelectron Spectroscopy (XPS), InfraRed (IR), Raman and Nuclear Magnetic Resonance (NMR) spectroscopy, and ElectroSpray Ionization Mass Spectrometry (ESI-MS). Investigations showed that the two precipitates exhibit quite similar structures. They are composed of at least two compounds: palladium cyanide and palladium species containing ammonium, phosphorus or carbonyl groups. Several mechanisms are proposed to explain the formation of Pd(CN)2.
ano.nymous@ccsd.cnrs.fr.invalid (Bénédicte Simon), Bénédicte Simon
Electron probe microanalysis (EPMA) makes it possible to quantify, with great accuracy, the elemental concentrations of samples of unknown composition. It can be used, for example, to quantify the actinides present in fresh or irradiated nuclear fuels, to support nuclear waste management, or to date certain rocks. Unfortunately, such quantitative analyses are not always feasible because reference standards are unavailable for some actinides. To overcome this difficulty, a so-called "standardless" analysis method can be employed by means of virtual standards. The latter are obtained from empirical formulas or from calculations based on theoretical models. However, these calculations require the knowledge of physical parameters that are generally poorly known, as is the case for X-ray production cross sections. Accurate knowledge of these cross sections is required in many applications, such as particle transport codes and Monte Carlo simulations. These computational codes are widely used in medicine, particularly in medical imaging and in electron-beam treatments. In astronomy, these data are used in simulations to predict the compositions of stars and galactic clouds as well as the formation of planetary systems. In this work, the production cross sections of the L and M lines of lead, thorium and uranium were measured by electron impact on self-supporting thin targets with thicknesses ranging from 0.2 to 8 nm. The experimental results were compared with the theoretical predictions of ionization cross sections calculated with the distorted-wave Born approximation (DWBA) and with the predictions of analytical formulas used in practical applications.
The ionization cross sections were converted into X-ray production cross sections using atomic relaxation parameters taken from the literature. The theoretical results of the DWBA model are in excellent agreement with the experimental results. This confirms the predictions of the model and validates its use for the calculation of virtual standards. The predictions of this model were integrated into the Monte Carlo code PENELOPE in order to calculate the X-ray intensity produced by pure actinide standards. The calculations were carried out for elements with atomic numbers 89 ≤ Z ≤ 99 and for accelerating voltages ranging from the ionization threshold up to 40 kV, in steps of 0.5 kV. For practical use, the intensities calculated for the most intense L and M lines were gathered into a database. The predictions of the virtual standards thus obtained were compared with measurements performed on samples of known composition (U, UO2, ThO2, ThF4, PuO2…) and with data acquired during previous measurement campaigns. The quantification of actinides using these virtual standards showed good agreement with the expected results. This confirms the reliability of the virtual standards developed here and demonstrates that actinide quantification by electron probe microanalysis is achievable without actinide standards and with a good level of confidence.
ano.nymous@ccsd.cnrs.fr.invalid (Aurélien Moy), Aurélien Moy
This paper deals with optimal input design for parameter estimation in a bounded-error context. Uncertain controlled nonlinear dynamical models, when the input can be parametrized by a finite number of parameters, are considered. The main contribution of this paper concerns criteria for obtaining optimal inputs in this context. Two input design criteria are proposed and analysed. They involve sensitivity functions. The first criterion requires the inversion of the Gram matrix of sensitivity functions. The second one does not require this inversion and is then applied for parameter estimation of a model taken from the aeronautical domain. The estimation results obtained using an optimal input are compared with those obtained with an input optimized in a more classical context (Gaussian measurement noise and parameters a priori known to belong to some boxes). These results highlight the potential of optimal input design in a bounded-error context.
ano.nymous@ccsd.cnrs.fr.invalid (Carine Jauberthie), Carine Jauberthie
We introduce a new algorithm of proper generalized decomposition (PGD) for parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation---in the mean parametric norm associated to the elliptic operator---of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the proper orthogonal decomposition (POD) subspaces, except that in our case the norm is parameter-dependent. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step, and we prove that the partial sums converge to the continuous solution in the mean parametric elliptic norm. We show that the standard PGD for the considered parametric problem is strongly related to the deflation algorithm introduced in this paper. This opens the possibility of computing the PGD expansion by directly solving the optimization problems that yield the optimal subspaces.
ano.nymous@ccsd.cnrs.fr.invalid (M. Azaïez), M. Azaïez
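For comparison with the best-approximation property mentioned above, the parameter-independent analogue (POD) can be sketched via the SVD as below. The function names are illustrative assumptions; this is not the paper's PGD or deflation algorithm, which works in a parameter-dependent norm.

```python
import numpy as np

def pod_basis(snapshots, m):
    """Return an m-dimensional POD basis.

    snapshots: (n_dof, n_samples) matrix of solution samples.  The span of
    the first m left singular vectors minimizes the mean squared
    projection error over the snapshot set (in the Euclidean norm).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :m]

def project(snapshots, basis):
    """Orthogonal projection of the snapshots onto the POD subspace."""
    return basis @ (basis.T @ snapshots)
```

The PGD construction replaces this single fixed norm with the mean parametric norm induced by the elliptic operator, and builds the subspaces by deflation rather than by one global SVD.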
We consider a degenerate parabolic system modelling the flow of fresh and saltwater in an anisotropic porous medium in the context of seawater intrusion. We propose and analyze a nonlinear Control Volume Finite Element scheme. This scheme ensures the nonnegativity of the discrete solution without any restriction on the mesh or on the anisotropy tensor. Moreover, it provides control of the entropy. Based on these nonlinear stability results, we show that the scheme converges towards a weak solution to the problem. Numerical results are provided to illustrate the behavior of the model and of the scheme.
ano.nymous@ccsd.cnrs.fr.invalid (Ahmed Ait Hammou Oulhaj), Ahmed Ait Hammou Oulhaj
A sensitivity analysis of a suspension model has been performed in order to highlight the most influential parameters on the sprung mass displacement. To analyse this dynamical model, a new global and bounded dynamic method is investigated. This method, based on interval analysis, consists in determining lower and upper bounds enclosing the dynamic sensitivity indices. It requires only the knowledge of the parameter variation ranges and not the joint probability density function of the parameters, which is hard to estimate. The advantage of the proposed approach is that it takes into account the recursive behavior of the system dynamics.
ano.nymous@ccsd.cnrs.fr.invalid (Sabra Hamza), Sabra Hamza
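The idea of propagating guaranteed bounds through a recursive system dynamics can be sketched with elementary interval arithmetic. This toy scalar example is an assumption for illustration only; it does not reproduce the paper's computation of dynamic sensitivity indices for the suspension model.

```python
def imul(a, b):
    """Interval product [a] * [b], each interval given as (lo, hi)."""
    prods = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(prods), max(prods))

def iadd(a, b):
    """Interval sum [a] + [b]."""
    return (a[0] + b[0], a[1] + b[1])

def propagate(x0, a_int, u, n_steps):
    """Enclose the recursion x_{t+1} = a * x_t + u when a lies in a_int.

    Returns the list of interval enclosures of x_0, ..., x_{n_steps}.
    """
    x, traj = x0, [x0]
    for _ in range(n_steps):
        x = iadd(imul(a_int, x), (u, u))
        traj.append(x)
    return traj
```

Bounds on sensitivity indices would follow by running such enclosures on the partial derivatives of the state with respect to each uncertain parameter.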
The present work is concerned with the shape reconstruction problem of isotropic elastic inclusions from far-field data obtained by the scattering of a finite number of time-harmonic incident plane waves. This paper aims at completing the theoretical framework which is necessary for the application of geometric optimization tools to the inverse transmission problem in elastodynamics. The forward problem is reduced to systems of boundary integral equations following the direct and indirect methods initially developed for solving acoustic transmission problems. We establish the Fréchet differentiability of the boundary-to-far-field operator and give a characterization of the first Fréchet derivative and its adjoint operator. Using these results we propose an inverse scattering algorithm based on the iteratively regularized Gauß-Newton method and show numerical experiments in the special case of star-shaped obstacles.
ano.nymous@ccsd.cnrs.fr.invalid (Frédérique Le Louër), Frédérique Le Louër
In this communication, we perform the sensitivity analysis of a building energy model. The aim is to assess the impact of the weather data on the performance of a model of a passive house, in order to better control it. The weather data are uncertain dynamic inputs to the model. To evaluate their impact, the problem of generating coherent weather data arises. To solve it, we carry out the Karhunen-Loève decomposition of the uncertain dynamic inputs. We then propose an approach for the sensitivity analysis of this kind of model. The originality, for sensitivity analysis purposes, is to separate the random variable of the dynamic inputs, propagated to the model response, from the deterministic spatio-temporal function. This analysis highlights the role of the solar gain in a highly insulated passive building during wintertime.
ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett-Collin), Floriane Anstett-Collin
A real time algorithm for cardiac and respiratory gating, which only requires an ECG sensor, is proposed here. Three ECG electrodes are placed in such a manner that the modulation of the recorded ECG by the respiratory signal would be maximal; hence, given only one signal we can achieve both cardiac and respiratory MRI gating. First, an off-line learning phase based on wavelet decomposition is run to compute an optimal QRS filter. Afterwards, on one hand the QRS filter is used to accomplish R peak detection, and on the other, a low pass filtering process allows the retrieval of the respiration cycle so that the image acquisition sequences would be triggered by the R peaks only during the expiration phase.
ano.nymous@ccsd.cnrs.fr.invalid (D Abi-Abdallah), D Abi-Abdallah
Cardiac Magnetic Resonance Imaging (MRI) requires synchronization to overcome motion related artifacts caused by the heart’s contractions and the chest wall movements during respiration. Achieving good image quality necessitates combining cardiac and respiratory gating to produce, in real time, a trigger signal that sets off the consecutive image acquisitions. This guarantees that the data collection always starts at the same point of the cardiac cycle during the exhalation phase. In this paper, we present a real time algorithm for extracting a cardiac-respiratory trigger signal using only one, adequately placed, ECG sensor. First, an off-line calculation phase, based on wavelet decomposition, is run to compute an optimal QRS filter. This filter is used, afterwards, to accomplish R peak detection, while a low pass filtering process allows the retrieval of the respiration cycle. The algorithm’s synchronization capabilities were assessed during mice cardiac MRI sessions employing three different imaging sequences, and three specific wavelet functions. The prominent image enhancement gave a good proof of correct triggering. QRS detection was almost flawless for all signals. As for the respiration cycle retrieval it was evaluated on contaminated simulated signals, which were artificially modulated to imitate respiration. The results were quite satisfactory.
ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi-Abdallah), Dima Abi-Abdallah
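A heavily simplified version of the R-peak detection step (template correlation, thresholding, refractory period) might look like the sketch below. The template, the threshold rule and the function names are assumptions for illustration; the wavelet-based learning phase that produces the optimal QRS filter, and the respiration retrieval, are not shown.

```python
import numpy as np

def detect_r_peaks(ecg, template, fs, refractory=0.2):
    """Detect R peaks by correlating the ECG with a QRS-shaped template.

    ecg: 1-D signal; template: short QRS-shaped kernel (e.g. obtained from
    an off-line learning phase); fs: sampling rate in Hz.  A refractory
    period (in seconds) suppresses double detections of the same beat.
    """
    # Cross-correlation with the template, same length as the input
    score = np.convolve(ecg, template[::-1], mode="same")
    thresh = 0.5 * score.max()
    min_gap = int(refractory * fs)
    peaks, last = [], -min_gap
    for i in range(1, len(score) - 1):
        # Local maximum of the correlation score above the threshold
        if score[i] >= thresh and score[i] >= score[i - 1] and score[i] > score[i + 1]:
            if i - last >= min_gap:
                peaks.append(i)
                last = i
    return np.array(peaks)
```

In the paper's setting, a low-pass filtered version of the same ECG signal would additionally provide the respiration cycle, so that triggers are issued only for R peaks falling in the expiration phase.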
In this paper we describe a high-order spectral algorithm for solving the time-harmonic Navier equations in the exterior of a bounded obstacle in three space dimensions, with Dirichlet or Neumann boundary conditions. Our approach is based on combined-field boundary integral equation (CFIE) reformulations of the Navier equations. We extend the spectral method developed by Ganesh and Hawkins for solving second-kind boundary integral equations in electromagnetism to linear elasticity, in order to solve CFIEs that commonly involve integral operators with a strongly singular or hypersingular kernel. The numerical scheme applies to boundaries which are globally parameterised by spherical coordinates. The algorithm has the interesting feature that it leads to linear systems with substantially fewer unknowns than other existing fast methods. The computational performance of the proposed spectral algorithm is demonstrated on numerical examples for a variety of three-dimensional convex and non-convex smooth obstacles.
ano.nymous@ccsd.cnrs.fr.invalid (Frédérique Le Louër), Frédérique Le Louër
We consider the question of giving an upper bound for the first nontrivial eigenvalue of the Wentzell-Laplace operator of a domain $\Omega$, involving only geometrical informations. We provide such an upper bound, by generalizing Brock's inequality concerning Steklov eigenvalues, and we conjecture that balls maximize the Wentzell eigenvalue, in a suitable class of domains, which would improve our bound. To support this conjecture, we prove that balls are critical domains for the Wentzell eigenvalue, in any dimension, and that they are local maximizers in dimension 2 and 3, using an order two sensitivity analysis. We also provide some numerical evidence.
ano.nymous@ccsd.cnrs.fr.invalid (Marc Dambrine), Marc Dambrine
In this paper, we address the issue of performing sensitivity analysis of complex models presenting uncertain static and dynamic inputs. The dynamic inputs are viewed as random processes which can be represented by a linear combination of the deterministic functions depending on time whose coefficients are uncorrelated random variables. To achieve this, the Karhunen-Loève decomposition of the dynamic inputs is performed. For sensitivity analysis purposes, the influence of the dynamic inputs onto the model response is then given by the one of the uncorrelated random coefficients of the Karhunen-Loève decomposition, which is the originality here. The approach is applied to a building energy model, in order to assess the impact of the uncertainties of the material properties and the weather data on the energy performance of a real low energy consumption house.
ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett-Collin), Floriane Anstett-Collin
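A discrete Karhunen-Loève decomposition of sampled dynamic inputs can be sketched as below, separating the deterministic time modes from the uncorrelated random coefficients that carry the uncertainty into the sensitivity analysis. The function name and the empirical-covariance route are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def karhunen_loeve(samples, n_modes):
    """Discrete Karhunen-Loève decomposition of sampled trajectories.

    samples: (n_samples, n_times) realizations of the dynamic input.
    Returns the mean trajectory, the first n_modes deterministic time
    modes (as columns), and the uncorrelated random coefficients of each
    sample, so that samples ~ mean + coeffs @ modes.T.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Empirical covariance between time points
    cov = centered.T @ centered / (len(samples) - 1)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:n_modes]
    modes = eigvec[:, order]   # deterministic functions of time
    coeffs = centered @ modes  # uncorrelated random coefficients
    return mean, modes, coeffs
```

For sensitivity analysis, the variance-based indices of the dynamic input are then attached to these scalar coefficients rather than to the full trajectory.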
We construct and analyze a family of well-conditioned boundary integral equations for the Krylov iterative solution of three-dimensional elastic scattering problems by a bounded rigid obstacle. We develop a new potential theory using a rewriting of the Somigliana integral representation formula. From these results, we generalize to linear elasticity the well-known Brakhage-Werner and Combined Field Integral Equation formulations. We use a suitable approximation of the Dirichlet-to-Neumann (DtN) map as a regularizing operator in the proposed boundary integral equations. The construction of the approximate DtN map is inspired by the On-Surface Radiation Conditions method. We prove that the associated integral equations are uniquely solvable and possess very interesting spectral properties. Promising analytical and numerical investigations, in terms of spherical harmonics, with the elastic sphere are provided.
ano.nymous@ccsd.cnrs.fr.invalid (Marion Darbas), Marion Darbas
Uncertainty Analysis and Sensitivity Analysis of complex models: Coping with dynamic and static inputs
ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett-Collin), Floriane Anstett-Collin
The direct electrochemical reduction of UO2 solid pellets was carried out in LiF-CaF2 (+ 2 mass % Li2O) at 850 °C. An inert gold anode was used instead of the usual reactive sacrificial carbon anode. In this case, oxidation of the oxide ions present in the melt yields O2 gas evolution at the anode. Electrochemical characterisations of UO2 pellets were performed by linear sweep voltammetry at 10 mV/s, and reduction waves associated with direct oxide reduction were observed at a potential 150 mV more positive than the solvent reduction. Subsequently, galvanostatic electrolysis runs were carried out and the products were characterised by SEM-EDX, EPMA/WDS and XRD. In one of the runs, the uranium oxide was partially reduced and three phases were observed: unreduced UO2 in the centre, pure metallic uranium in the external layer, and an intermediate phase representing the initial stage of reduction, taking place at the grain boundaries. In another run, the UO2 sample was fully reduced. Due to oxygen removal, the U matrix had a typical coral-like structure, characteristic of the pattern observed after the electroreduction of solid oxides.
ano.nymous@ccsd.cnrs.fr.invalid (Mathieu Gibilaro), Mathieu Gibilaro
This article concerns maximum-likelihood estimation for discrete time homogeneous nonparametric semi-Markov models with finite state space. In particular, we present the exact maximum-likelihood estimator of the semi-Markov kernel which governs the evolution of the semi-Markov chain (SMC). We study its asymptotic properties in the following cases: (i) for one observed trajectory, when the length of the observation tends to infinity, and (ii) for parallel observations of independent copies of an SMC censored at a fixed time, when the number of copies tends to infinity. In both cases, we obtain strong consistency, asymptotic normality, and asymptotic efficiency for every finite dimensional vector of this estimator. Finally, we obtain explicit forms for the covariance matrices of the asymptotic distributions.
ano.nymous@ccsd.cnrs.fr.invalid (Samis Trevezas), Samis Trevezas
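For one observed trajectory, the exact MLE of the semi-Markov kernel reduces to normalized transition/sojourn counts. The sketch below illustrates this counting estimator under hypothetical names; the asymptotic analysis (consistency, normality, efficiency) is of course not reproduced here.

```python
from collections import Counter

def estimate_sm_kernel(states, durations):
    """MLE of the semi-Markov kernel from one observed trajectory.

    states: sequence of successively visited states (no self-transitions);
    durations: sojourn time (in discrete steps) spent in each visited state,
    aligned with the transitions out of it.
    Returns q[(i, j, k)] = estimated P(next state = j, sojourn = k | current = i).
    """
    trans, visits = Counter(), Counter()
    for (i, j), k in zip(zip(states, states[1:]), durations):
        trans[(i, j, k)] += 1   # count of transitions i -> j after sojourn k
        visits[i] += 1          # count of observed departures from i
    return {key: n / visits[key[0]] for key, n in trans.items()}
```

Each kernel entry is the empirical fraction N(i, j, k) / N(i), which is the closed-form maximizer of the trajectory likelihood for the nonparametric model.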
The aim of our work is to reconstruct an inclusion immersed in a fluid flowing in a larger bounded domain via a boundary measurement. Here the fluid motion is assumed to be governed by the Stokes equations. We study the inverse problem with the tools of shape optimization, by minimizing a Kohn-Vogelius type cost functional. We first characterize the gradient of this cost functional in order to perform a numerical resolution. Then, in order to study the stability of this problem, we give the expression of the shape Hessian. We show the compactness of the Riesz operator corresponding to this shape Hessian at a critical point, which explains why the inverse problem is ill-posed. Therefore we need some regularization methods to solve this problem numerically. We illustrate these general results by explicit computations of the shape Hessian in some particular geometries. In particular, we solve the Stokes equations explicitly in a concentric annulus. Finally, we present some numerical simulations using a parametric method.
ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet
We consider the flow of a viscous incompressible fluid in a rigid homogeneous porous medium provided with boundary conditions on the pressure around a circular well. When the boundary pressure presents high variations, the permeability of the medium depends on the pressure, so that the model is nonlinear. We propose a spectral discretization of the resulting system of equations which takes into account the axisymmetry of the domain and of the flow. We prove optimal error estimates and present some numerical experiments which confirm the interest of the discretization.
ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez
A new theorem is provided to test the identifiability of discrete-time systems with polynomial nonlinearities. This extends the local state isomorphism approach for continuous-time systems to discrete-time systems. Two examples are provided to illustrate the approach.
ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett), Floriane Anstett
Enhancing the safety of high-temperature reactors (HTRs) is based on the quality of the fuel particles, requiring good knowledge of the microstructure of the four-layer particles designed to retain the fission products during irradiation and under accidental conditions. This paper focuses on the intensive research work performed to characterize the micro- and nanostructure of each unirradiated layer (silicon carbide and pyrocarbon coatings). The analytic expertise developed in the 1970s has been recovered and innovative advanced characterization methods have been developed to improve the process parameters and to ensure the production reproducibility of coatings.
ano.nymous@ccsd.cnrs.fr.invalid (D. Helary), D. Helary
Electron back-scattering diffraction (EBSD) can be successfully performed on SiC coatings for HTR fuel particles. EBSD grain maps obtained from thick and thin unirradiated samples are presented, along with pole figures showing textures and a chart showing the distribution of grain aspect ratios. This information is of great interest, and contributes to improving the process parameters and ensuring the reproducibility of coatings.
ano.nymous@ccsd.cnrs.fr.invalid (D. Helary), D. Helary
The mortar spectral element method is a domain decomposition technique that allows for discretizing second- or fourth-order elliptic equations set in standard Sobolev spaces. The aim of this paper is to extend this method to problems formulated in the space of square-integrable vector fields with square-integrable curl. We consider the problem of computing the vector potential associated with a divergence-free function in dimension 3 and propose a discretization of it. The numerical analysis of the discrete problem is performed and numerical experiments are presented; they are in good agreement with the theoretical results.
ano.nymous@ccsd.cnrs.fr.invalid (Mjedi Azaïez), Mjedi Azaïez