In this paper, we derive a periodic model from a one-dimensional nonlocal eikonal equation set on the full space, modeling dislocation dynamics. Thanks to a gradient entropy estimate, we show that this periodic model converges toward the original one as the period goes to infinity. Moreover, we design a semi-explicit numerical scheme for the periodic model. We show the well-posedness of the scheme and a discrete gradient entropy inequality. We also prove the convergence of the scheme and present some numerical experiments.
Diana Al Zareef
We introduce binacox, a prognostic method to deal with the problem of detecting multiple cut-points per feature in a multivariate setting where a large number of continuous features are available. The method is based on the Cox model and combines one-hot encoding with the binarsity penalty, which uses total-variation regularization together with an extra linear constraint, and enables feature selection. Original nonasymptotic oracle inequalities for prediction (in terms of Kullback-Leibler divergence) and estimation with a fast rate of convergence are established. The statistical performance of the method is examined in an extensive Monte Carlo simulation study, and then illustrated on three publicly available genetic cancer data sets. On these high-dimensional data sets, our proposed method outperforms state-of-the-art survival models regarding risk prediction in terms of the C-index, with a computing time orders of magnitude faster. In addition, it provides powerful interpretability from a clinical perspective by automatically pinpointing significant cut-points in relevant variables.
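For intuition about the encoding step described above, the sketch below one-hot encodes a single continuous feature on quantile-based bins, whose interior edges play the role of candidate cut-points. The function name, bin count, and quantile grid are illustrative choices, not the binacox implementation (which additionally applies the binarsity penalty within a Cox model):

```python
import numpy as np

def binarize_feature(x, n_bins):
    # One-hot encode a continuous feature on quantile bins; the interior
    # bin edges play the role of candidate cut-points.
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    idx = np.searchsorted(edges, x)      # bin index of each observation
    onehot = np.eye(n_bins)[idx]         # (n_samples, n_bins) indicator matrix
    return onehot, edges

rng = np.random.default_rng(0)
x = rng.normal(size=200)
B, edges = binarize_feature(x, n_bins=5)
```

Total-variation regularization on the coefficients of each such indicator block then encourages consecutive bins to share a coefficient, so the surviving jumps mark the detected cut-points.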
Simon Bussy
In this paper we design, analyze, and simulate a finite volume scheme for a cross-diffusion system which models chemotaxis with local sensing. This system has the same gradient flow structure as the celebrated minimal Keller-Segel system, but unlike the latter, its solutions are known to exist globally in 2D. The long-time behavior of solutions is only partially understood, which motivates numerical exploration with a reliable numerical method. We propose a linearly implicit, two-point flux finite volume approximation of the system. We show that the scheme preserves, at the discrete level, the main features of the continuous system, namely mass, non-negativity of the solution, entropy, and duality estimates. These properties allow us to prove the well-posedness, unconditional stability, and convergence of the scheme. We also show rigorously that the scheme possesses an asymptotic-preserving (AP) property in the quasi-stationary limit. We complement our analysis with thorough numerical experiments investigating the convergence and AP properties of the scheme, as well as its reliability with respect to stability properties of steady solutions.
Maxime Herda
In this work, we propose a large-graph limit estimate of the matching coverage for several matching algorithms on general graphs generated by the configuration model. For a wide class of local matching algorithms, namely algorithms that only use information on the immediate neighborhood of the explored nodes, we propose a joint construction of the graph by the configuration model and of the resulting matching on that graph. This leads to a generalization in infinite dimension of the differential equation method of Wormald: we keep track of the matching algorithm over time by a measure-valued CTMC, for which we prove convergence, in the large-graph limit, to a deterministic hydrodynamic limit, identified as the unique solution of a system of ODEs in the space of integer measures. The asymptotic proportion of nodes covered by the matching then appears as a simple function of that solution. We then make this solution explicit for three particular local algorithms: the classical greedy algorithm, and the uni-min and uni-max algorithms, two variants of the greedy algorithm that select, as the neighbor to match with an explored node, the one having the least (respectively, largest) residual degree.
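The classical greedy algorithm mentioned above can be sketched on an adjacency-list graph; the function and the toy path graph are illustrative (the paper works with large graphs generated by the configuration model):

```python
def greedy_matching(adj):
    # Greedy local algorithm: scan nodes, match each unmatched node
    # to its first unmatched neighbor (uses only local information).
    matched = set()
    matching = []
    for u in adj:
        if u in matched:
            continue
        for v in adj[u]:
            if v not in matched and v != u:
                matching.append((u, v))
                matched.update((u, v))
                break
    return matching

# Toy path graph 0-1-2-3: greedy matches (0, 1) and then (2, 3).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
m = greedy_matching(adj)
```

The uni-min and uni-max variants would differ only in how the inner loop orders the candidate neighbors (by residual degree).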
Mohamed Habib Aliou Diallo Aoudi
Hidden Markov models (HMMs) are used in many different fields to study the dynamics of a process that cannot be directly observed. However, in some cases, the dependency structure of an HMM is too simple to describe the dynamics of the hidden process. In particular, in some applications in finance or ecology, the transition probabilities of the hidden Markov chain can also depend on the current observation. In this work we are interested in extending the classical HMM to this situation. We define a new model, referred to as the Observation-Driven Hidden Markov Model (OD-HMM). We present a complete study of the general non-parametric OD-HMM with discrete and finite state spaces (hidden and observed variables). We study its identifiability. Then we study the consistency of the maximum likelihood estimators. We derive the associated forward-backward equations for the E-step of the EM algorithm. The quality of the procedure is tested on simulated data sets. Finally, we illustrate the use of the model in an application on annual plant dynamics. This work sets theoretical and practical foundations for a new framework that could be further extended, on one hand to the non-parametric context to simplify estimation, and on the other hand to hidden semi-Markov models for more realism.
Hanna Bacave
This paper considers statistical inference for stationary time series under weak assumptions. First, a frequency-domain approach is proposed for fast estimation based on a one-step procedure. This method corrects an initial Whittle guess estimator, computed on a subsample, by a single Fisher scoring step. The resulting estimator shares the same asymptotic properties as the Whittle estimator on the whole sample while drastically reducing the computation time. Second, the asymptotic covariance matrix of the Whittle estimator is estimated for full inference, solving an open question raised by Shao, X. (2010).
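A minimal sketch of the one-step idea, assuming an AR(1) Whittle contrast with unit innovation variance and finite-difference derivatives; the model, step sizes, and variable names are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def whittle_obj(theta, pgram, freqs):
    # AR(1) spectral density with unit innovation variance (toy model).
    f = 1.0 / (2 * np.pi * np.abs(1 - theta * np.exp(1j * freqs)) ** 2)
    return np.sum(np.log(f) + pgram / f)

def one_step(theta0, pgram, freqs, h=1e-4):
    # Single Fisher-scoring (Newton) step with finite-difference derivatives.
    jp = whittle_obj(theta0 + h, pgram, freqs)
    jm = whittle_obj(theta0 - h, pgram, freqs)
    j0 = whittle_obj(theta0, pgram, freqs)
    grad = (jp - jm) / (2 * h)
    hess = (jp - 2 * j0 + jm) / h ** 2
    return theta0 - grad / hess

# Simulate an AR(1) path and form the periodogram on Fourier frequencies.
rng = np.random.default_rng(1)
n, theta_true = 2000, 0.5
x = np.zeros(n)
for t in range(1, n):
    x[t] = theta_true * x[t - 1] + rng.normal()
freqs = 2 * np.pi * np.arange(1, n // 2) / n
pgram = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)

theta1 = one_step(0.3, pgram, freqs)   # one step from a rough initial guess
```

In the paper's setting the initial guess itself comes from a Whittle fit on a subsample; here a fixed rough value stands in for it.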
Samir Ben Hariz
In this dissertation, we are interested in nonparametric regression estimation models. More precisely, we are concerned with a class of conditional U-statistics estimators. Conditional U-statistics can be viewed as a generalization of the Nadaraya-Watson estimator. The latter uses a smoothing kernel function to “average” response variable values within a predictor range. Stute generalized the Nadaraya-Watson estimator, first by replacing the simple weighted averages in the numerator and denominator with U-statistics, and then, using a collection of predictor random variables, by estimating the conditional expectation of the U-statistic kernel function. This generalization is prosperous and influential in mathematical statistics due to its outstanding scientific utility and fascinating theoretical complexity. However, like any other kernel estimation technique, the question of choosing a suitable bandwidth to balance the variance-bias trade-off remains insufficiently addressed in the literature on conditional U-statistics when the explanatory variables are functional. In the first part, we introduce the k-nearest-neighbors (k-NN) estimator of conditional U-statistics depending on an infinite-dimensional covariate. A sharp uniform-in-the-number-of-neighbors (UINN) limit law for the proposed estimator is presented. Such a result allows the number of neighbors to vary within a complete range over which the estimator is consistent. Consequently, it represents an interesting practical guideline for selecting the optimal number of neighbors in nonparametric functional data analysis. In addition, uniform consistency is also established over ϕ ∈ F for a suitably restricted class F, in both the bounded and unbounded cases, satisfying some moment conditions and some mild conditions on the model. As a by-product of our proofs, we state consistency results for the k-NN conditional U-statistics under random censoring that are uniform in the number of neighbors.
The second part of the thesis deals with a general nonparametric statistical curve estimation setting, including the Stute estimator as a particular case. The class of “delta sequence estimators” is defined and treated here. This class also includes the orthogonal series and histogram methods. We partially extend these results to the setting of functional data. The major part of the thesis is motivated by machine learning problems, including, among many others, discrimination, metric learning, and multipartite ranking.
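For intuition, the sketch below contrasts the Nadaraya-Watson estimator with a degree-2 conditional U-statistic in the scalar (non-functional) case. The kernel choice, bandwidths, and the kernel function phi are illustrative, and this toy stands in for the thesis's functional, k-NN setting:

```python
import numpy as np

def nw(x0, X, Y, h):
    # Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel.
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

def cond_ustat(t1, t2, X, Y, h, phi=lambda a, b: abs(a - b)):
    # Degree-2 conditional U-statistic in the sense of Stute (sketch):
    # kernel-weighted average of phi over ordered pairs of observations.
    K1 = np.exp(-0.5 * ((X - t1) / h) ** 2)
    K2 = np.exp(-0.5 * ((X - t2) / h) ** 2)
    num = den = 0.0
    n = len(X)
    for i in range(n):
        for j in range(n):
            if i != j:
                w = K1[i] * K2[j]
                num += w * phi(Y[i], Y[j])
                den += w
    return num / den

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, 300)
Y = X ** 2 + 0.1 * rng.normal(size=300)
y_hat = nw(0.5, X, Y, h=0.1)                         # close to 0.25
u_val = cond_ustat(0.5, -0.5, X[:100], Y[:100], h=0.15)
```

With phi(a, b) = b the numerator collapses to a Nadaraya-Watson-type sum, which is the sense in which conditional U-statistics generalize that estimator.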
Amel Nezzal
A cross-diffusion system with Lotka--Volterra reaction terms in a bounded domain with no-flux boundary conditions is analyzed. The system is a nonlocal regularization of a generalized Busenberg--Travis model, which describes segregating population species with local averaging. The partial velocities are the solutions of an elliptic regularization of Darcy's law, which can be interpreted as a Brinkman's law. The following results are proved: the existence of global weak solutions; localization limit; boundedness and uniqueness of weak solutions (in one space dimension); exponential decay of the solutions. Moreover, the weak--strong uniqueness property for the limiting system is shown.
Ansgar Jüngel
This work examines the asymptotic behavior of a conditional set-indexed empirical process composed of functional ergodic random variables with data missing at random (MAR). The findings of this paper extend previous advances in functional data analysis through the use of empirical process methodologies. These results are shown under specific structural hypotheses on the entropy and under appealing conditions on the model. As an application, the asymptotic (1−α)-confidence interval of the regression operator is provided for 0<α<1. Additionally, we offer a classification example to demonstrate the practical relevance of the methodology.
Salim Bouzebda
The Gaussian-smoothed sliced Wasserstein distance has recently been introduced for comparing probability distributions while preserving privacy on the data. It has been shown to provide performance similar to its non-smoothed (non-private) counterpart. However, the computational and statistical properties of such a metric have not yet been well established. This work investigates the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian-smoothed sliced divergences. We first show that smoothing and slicing preserve the metric property and the weak topology. To study the sample complexity of such divergences, we then introduce the double empirical distribution $\hat{\hat\mu}_{n}$ for the smoothed-projected $\mu$. The distribution $\hat{\hat\mu}_{n}$ results from a double sampling process: one sampling according to the original distribution $\mu$, and a second according to the convolution of the projection of $\mu$ on the unit sphere with the Gaussian smoothing. We particularly focus on the Gaussian-smoothed sliced Wasserstein distance and prove that it converges at rate $O(n^{-1/2})$. We also derive other properties, including continuity, of different divergences with respect to the smoothing parameter. We support our theoretical findings with empirical studies in the context of privacy-preserving domain adaptation.
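A Monte Carlo sketch of the Gaussian-smoothed sliced Wasserstein-1 distance, assuming equal sample sizes so that the 1-D distance reduces to a sorted-sample average; all function names and parameters are illustrative, not the paper's implementation:

```python
import numpy as np

def w1_sorted(a, b):
    # 1-D Wasserstein-1 distance between equal-size samples via sorting.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def gaussian_smoothed_sw(X, Y, sigma, n_proj=50, seed=0):
    # Average, over random directions of the unit sphere, of the 1-D
    # W1 distance between the Gaussian-smoothed projections.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)                      # uniform direction
        px = X @ theta + sigma * rng.normal(size=len(X))    # smoothed projection
        py = Y @ theta + sigma * rng.normal(size=len(Y))
        total += w1_sorted(px, py)
    return total / n_proj

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))
Y = rng.normal(loc=2.0, size=(500, 2))
d_far = gaussian_smoothed_sw(X, Y, sigma=0.5)
d_self = gaussian_smoothed_sw(X, X, sigma=0.5)
```

The added projection noise is exactly the "double sampling" ingredient of $\hat{\hat\mu}_{n}$: one draw from the distribution, one from the Gaussian smoothing of its projection.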
Mokhtar Z. Alaya
We consider a repairable system modeled by a semi-Markov process (SMP), in which we include a geometric renewal process for system degradation upon repair, and replacement strategies for non-repairable failure or upon N repairs. Pérez-Ocón and Torres-Castro first studied this system (Pérez-Ocón and Torres-Castro in Appl Stoch Model Bus Ind 18(2):157–170, 2002) and proposed an availability calculation using the Laplace transform. In our work, we consider an extended state space with up and down times treated separately. This allows us to leverage the standard theory for SMPs to obtain all reliability-related measures, such as reliability, availability (point and steady-state), mean times, and the rate of occurrence of failures of the system with a general initial law. We proceed with a convolution algebra, which allows us to obtain closed-form formulas for the above measures. Finally, numerical examples are given to illustrate the methodology.
Jingqi Zhang
In this manuscript, we develop a numerical method suited to the simulation of immiscible compressible fluid flows. To model these flows, we analyze an original fully conservative six-equation system, closed by a stiffened-gas equation of state and a pressure-equilibrium equation. We also introduce a numerical scheme of order 2, in space and time, specially designed to capture the interfaces between fluids in multidimensional configurations. To reach second order, we devise a multidimensional slope-reconstruction method based on the local extremum diminishing (LED) stability criterion. The second-order scheme combined with the fully conservative model produces oscillations in the pressure profiles. To avoid these spurious oscillations, we prove a set of essential properties. First, we find CFL-type stability conditions imposed by the slope reconstructions. Then, we prove a theorem guaranteeing consistency between the energy equation and the transport of the volume fractions. Next, we propose a two-stage pressure reconstruction that ensures the positivity of the internal energy. Finally, we develop a one-step numerical method suited to simulating flows involving more than two fluids. All the results presented in this document are illustrated by test cases in one, two, or three space dimensions.
Vincent Mahy
We consider a semi-Markov process (SMP) to model the evolution of bladder cancer, which passes through different states over time. A multi-state model has been constructed and applied to data collected from 847 patients over a period of fifteen years. Biomedical databases usually contain censored data, and this study shows that, despite this, a good fit of the main survival measures is achieved using our specific model. This paper presents estimators for the semi-Markov kernel, the survival function, and the mean time to disease progression. The strong consistency properties of the estimators are proved.
Alicia Perez A. P. das Neves Yedig
In the last years, the automotive engineering industry has been deeply influenced by the use of machine learning techniques for new design and innovation purposes. However, some specific engineering tasks, like numerical optimization studies, still require the development of suitable high-performance machine learning approaches involving parametrized Finite Element (FE) structural dynamics simulation data. Weight reduction on a car body is a crucial matter that improves the environmental impact and the cost of the product. The current optimization process at Renault SA uses numerical Design of Experiments (DOE) to find the right thicknesses and materials for each part of the vehicle that guarantee a reduced weight while keeping a good behavior of the car body, identified by criteria or sensors on the body (maximum displacements, upper bounds of instantaneous acceleration, etc.). The usual DOE methodology generally uses between 3 and 10 times the number of parameters of the study (which means, for a 30-parameter study, at least 90 simulations, with typically 10 h per run on a 140-core computer). During the last 2 years, Renault's teams strove to develop a disruptive methodology for conducting optimization studies. By 'disruptive', we mean a methodology that cuts the cost of computational effort by several orders of magnitude. It is acknowledged that standard DOEs need a number of simulations at least proportional to the dimension of the parameter space, leading generally to hundreds of fine simulations for real applications. Comparatively, a disruptive method should require only about 10 fine evaluations. This can be achieved by combining massive data-knowledge extraction from FE crash simulation results with parallel high-performance computing (HPC). For instance, in the recent study presented by Assou et al. (A car crash reduced order model with random forest. In: 4th International workshop on reduced basis, POD and PGD Model Reduction Techniques—MORTech 2017. 2017), it took 10 runs to find a solution of a 34-parameter problem that fulfils the specifications. In order to improve this method, we must extract more knowledge from the simulation results (correlations, spatio-temporal features, explanatory variables) and process it to find efficient ways to describe the car crash dynamics and to link criteria/quantities of interest with some explanatory variables. One of the improvements made in the last months is the use of the so-called Empirical Interpolation Method (EIM, [Barrault et al.]) to identify the few time instants and spatial nodes of the FE mesh (referred to as magic points) that “explain” the behavior of the body during the crash, within a dimensionality-reduction approach. The EIM replaces a former K-Means algorithm (Davies et al. in IEEE Trans Pattern Anal Mach Intell, 1(2):224–227, 1979), which was run online for each ROM. Instead, the EIM computation is done offline, once and for all, for each simulation. This new method allows us to compute a ROM much faster and to reduce the number of features used for the regression step (~ 100). The nonlinear regression step is achieved by a standard Random Forest (RF, [Breiman. Mach Learn 45:5–32, 2001]) algorithm. Another improvement of the method is the characterization of numerical features describing the shape of the body at a nodal scale. The orientation characteristics of the elements surrounding a mesh node must be taken into account to describe the behavior of the node during the crash. The current method integrates some numerical features, computed from the orientation of the elements around each node, to explain the node behavior. The paper is organized as follows: the introduction states the scientific and industrial context of the research.
Then, the ReCUR method is detailed and the recent improvements are highlighted. Results are presented and discussed before some concluding remarks on this work.
Etienne Gstalter
This paper focuses on the low-dimensional representation of multivariate functions. We study a recursive POD representation, based upon the use of the power iterate algorithm to recursively expand the modes retained in the previous step. We obtain general error estimates for the truncated expansion, and prove that the recursive POD representation provides a quasi-optimal approximation in the $L^2$ norm. We also prove an exponential rate of convergence, when applied to the solution of the reaction-diffusion partial differential equation. Some relevant numerical experiments show that the recursive POD is computationally more accurate than the Proper Generalized Decomposition for multivariate functions. We also recover the theoretical exponential convergence rate for the solution of the reaction-diffusion equation.
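The power-iterate-with-deflation idea behind a POD expansion can be sketched on a sampled bivariate function. This is a generic sketch of singular modes extracted by power iteration, under illustrative names and grid choices, not the paper's recursive algorithm:

```python
import numpy as np

def pod_modes(F, n_modes, n_iter=200):
    # Extract leading singular (POD) modes one at a time:
    # power iteration for the top mode, then deflation.
    modes = []
    R = F.copy()
    for _ in range(n_modes):
        v = np.ones(R.shape[1])
        s = 0.0
        for _ in range(n_iter):
            u = R @ v
            u /= np.linalg.norm(u)
            v = R.T @ u
            s = np.linalg.norm(v)
            v /= s
        modes.append((s, u, v))
        R = R - s * np.outer(u, v)   # deflate before seeking the next mode
    return modes

# A rank-2 separable function sampled on a 60-point grid.
x = np.linspace(0.0, 1.0, 60)
F = (np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
     + 0.1 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * x)))
modes = pod_modes(F, 2)
R2 = F - sum(s * np.outer(u, v) for s, u, v in modes)   # residual is ~0
```

For a genuinely multivariate function, the recursive representation would in turn expand each retained mode in the remaining variables, which is the step this two-variable toy omits.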
M. Azaïez
The problem of estimating the spatio-functional expectile regression for a given spatial mixing structure (X_i, Y_i) ∈ F × ℝ, with i ∈ ℤ^N, N ≥ 1, where F is a metric space, is investigated. We propose an M-estimation procedure to construct the Spatial Local Linear (SLL) estimator of the expectile regression function. The main contribution of this study is the establishment of the asymptotic properties of the SLL expectile regression estimator. Precisely, we establish the almost-complete convergence with rate. This result is proven under some mild conditions on the model in the mixing framework. The implementation of the SLL estimator is evaluated through an empirical investigation. An application to COVID-19 data is performed, allowing this work to highlight the substantial superiority of the SLL-expectile over the SLL-quantile in risk exploration.
Ali Laksaci
Stute introduced the so-called conditional U-statistics, generalizing the Nadaraya–Watson estimates of the regression function, and demonstrated their pointwise consistency and asymptotic normality. In this paper, we extend these results to a more abstract setting. We develop an asymptotic theory of conditional U-statistics for locally stationary random fields {X_{s,A_n} : s ∈ R_n} observed at irregularly spaced locations in R_n = [0, A_n]^d, a subset of ℝ^d. We employ a stochastic sampling scheme that may create irregularly spaced sampling sites in a flexible manner and includes both pure and mixed increasing domain frameworks. We specifically examine the rate of strong uniform convergence and the weak convergence of conditional U-processes when the explanatory variable is functional. We examine the weak convergence when the class of functions is either bounded or unbounded and satisfies specific moment conditions. These results are achieved under somewhat general structural conditions pertaining to the classes of functions and the underlying models. The theoretical results developed in this paper are (or will be) essential building blocks for several future breakthroughs in functional data analysis.
Salim Bouzebda
[...]
Stéphane Mottelet
Three-dimensional elliptic problems with variable coefficients and line Dirac sources arise in a number of fields. The lack of regularity of the solution prompts users to turn toward alternative variational formulations. Rather than using weighted Sobolev spaces, we prefer the dual variational formulation written in the Hilbertian Lebesgue space, the one used by G. Stampacchia [Séminaire Jean Leray, 1964]. The key step is to show a singular/regular expansion in which the singularity of the potential is fully expressed by a convolution formula based on the Green kernel of the Laplacian. The correction term restores the boundary condition and fits the standard variational formulation of the Poisson equation (in the Sobolev space H^1). We intend to develop a thorough analysis of the proposed expansion while avoiding stringent assumptions on the conductivities. Sharp technical tools, such as those developed in [E. De Giorgi, Mem. Accad. Sci. Torino, 1957] and [N. G. Meyers, Ann. Scuo. Norm. Sup. Pisa, 1963], are necessary in the proofs.
Eya Bejaoui
By jointly constructing a random graph and an associated exploration process, we define the dynamics of a “parking process” on a class of uniform random graphs as a measure-valued Markov process, representing the empirical degree distribution of unexplored nodes. We then establish a functional law of large numbers for this process as the number of vertices grows to infinity, allowing us to assess the jamming constant of the considered random graphs, i.e. the size of the maximal independent set discovered by the exploration algorithm. This technique, which can be applied to any uniform random graph with a given (possibly unbounded) degree distribution, can be seen as a generalization, in the space of measures, of the differential equation method introduced by Wormald.
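The exploration described above can be illustrated by a greedy maximal-independent-set sketch: nodes are visited in uniform random order, each "parks" if none of its neighbors has parked, and the fraction of parked nodes approximates the jamming constant on large graphs. The toy 5-cycle and the function names are illustrative:

```python
import random

def greedy_independent_set(adj, seed=0):
    # Explore nodes in uniform random order; "park" at a node if none of
    # its neighbors is already parked, yielding a maximal independent set.
    rng = random.Random(seed)
    nodes = list(adj)
    rng.shuffle(nodes)
    independent, blocked = set(), set()
    for u in nodes:
        if u not in blocked:
            independent.add(u)
            blocked.update(adj[u])
    return independent

# 5-cycle: every maximal independent set found this way has exactly
# 2 nodes, so the jamming fraction of this toy graph is 2/5.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
I = greedy_independent_set(adj)
```

On a uniform random graph with n vertices, len(I) / n computed this way is the quantity whose large-n limit the paper's measure-valued analysis identifies.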
Paola Bermolen
[...]
Hanna Bacave
We propose a way to account for inspection errors in a particular framework. We consider a situation where the lifetime of a system depends essentially on a particular part. A deterioration of this part is regarded as an unacceptable state for the safety of the system, and a major renewal is deemed necessary. Thus the statistical analysis of the deterioration time distribution of this part is of primary interest for the preventive maintenance of the system. In this context, we faced the following problem. In the early life of the system, unwarranted renewals of the part are decided upon, caused by overly cautious behaviour. Such unnecessary renewals make the statistical analysis of deterioration time data difficult and can induce an underestimation of the mean life of the part. To overcome this difficulty, we propose to regard the problem as an incomplete data model. We present its estimation under the maximum likelihood methodology. Numerical experiments show that this approach eliminates the pessimistic bias in the estimation of the mean life of the part. We also present a Bayesian analysis of the problem, which can be useful in a small-sample setting.
Gilles Celeux
The main goal of this research is to develop a data-driven reduced-order model (ROM) strategy from high-fidelity (HF) simulation data of a full-order model (FOM). The goal is to predict, at lower computational cost, the time evolution of solutions of Fluid-Structure Interaction (FSI) problems. For some FSI applications, like tire/water interaction, the FOM solid model (often chosen as quasistatic) can take far more computational time than the HF fluid one. In this context, for the sake of performance, one could derive a reduced-order model for the structure only, coupling a partitioned HF fluid solver with a ROM solid one. In this paper, we present a data-driven partitioned ROM on a study case involving a simplified 1D-1D FSI problem representing an axisymmetric elastic model of an arterial vessel, coupled with an incompressible fluid flow. We derive a purely data-driven solid ROM for FOM fluid-ROM structure partitioned coupling and present early results.
Azzeddine Tiba
We deploy artificial neural networks to unfold neutron spectra from measured energy-integrated quantities. These neutron spectra are an important parameter for computing the absorbed dose and the kerma, serving radiation protection as well as nuclear safety. The architectures are inspired by convolutional neural networks. The first is made up of residual transposed-convolution blocks, while the second is a modified version of the U-net architecture. A large and balanced dataset is simulated under "realistic" physical constraints to train the architectures efficiently. Results show highly accurate prediction of neutron spectra ranging from thermal up to fast spectra. The dataset processing, the attention paid to performance metrics, and the hyperparameter optimization underpin the architectures' robustness.
Maha Bouhadida
The world produces 2.5 quintillion bytes of data daily, known as big data. Volume, value, variety, velocity, and veracity define the five characteristics of big data, which represent a fundamental complexity for many machine learning algorithms, such as clustering, image recognition, and other modern learning techniques. With such large data, estimators no longer take the form of a sample mean (they are not linear). Instead, they take the form of averages over m-tuples, known as U-statistic estimators in probability and statistics. In this work, we treat collections of U-statistics, known as U-processes, for two types of dependent data: Markovian data and locally stationary random variables. We have therefore divided our work into two parts to address each type independently. In the first part, we deal with Markovian data. The approach relies on regenerative methods, which essentially involve dividing the sample into independent and identically distributed (i.i.d.) blocks of data, where each block corresponds to the path segment between two visits to an atom A, forming a renewal sequence. We derive the limiting theory for Harris recurrent Markov chains over uniformly bounded and unbounded classes of functions. We show that the results can also be generalized to bootstrapped U-statistics. The bootstrap approach bypasses the problems faced with the asymptotic behavior due to the unknown parameters of the limiting distribution. Furthermore, the bootstrap technique we use in this thesis is the renewal bootstrap, where the bootstrap sample is formed by resampling the blocks. Since the non-bootstrapped blocks are independent, most proofs reduce to the i.i.d. case. The main difficulties are related to the random size of the resampled blocks, which creates a problem with random stopping times. This problem is handled by replacing the random stopping times with their expectations.
Also, since we resample from a random number of blocks, and the bootstrap equicontinuity can be verified by comparing with the initial process, the weak convergence of the bootstrap U-process must be treated very carefully. We successfully derive the results in the case of the k-Harris Markov chain. We extend all the above results to the case where the degree of the U-statistic grows with the sample size n, with the kernel varying in a class of functions. We provide the uniform limit theory for the renewal bootstrap for the infinite-degree U-process with the help of the decoupling technique combined with symmetrization techniques, in addition to a chaining inequality. Remaining in the Markovian setting, we extend the weighted bootstrap empirical processes to high-dimensional estimation. We consider an exchangeably weighted bootstrap of general function-indexed empirical U-processes. In the second part of this thesis, dependent data are represented by locally stationary random variables. Propelled by the increasing representation of data by functional or curve time series and the non-stationary behavior of the latter, we are interested in the conditional U-process of locally stationary functional time series. More precisely, we investigate the weak convergence of conditional U-processes in the locally stationary functional mixing data framework. We treat the weak convergence in both cases, when the class of functions is bounded or unbounded, satisfying some moment conditions. Finally, we extend the asymptotic theory of conditional U-processes to the locally stationary functional random field {X_{s,A_n} : s ∈ R_n} observed at irregularly spaced locations in R_n = [0, A_n]^d, a subset of ℝ^d, and include both pure and mixed increasing domain frameworks. Again, we treat the weak convergence when the class of functions is bounded or unbounded, satisfying some moment conditions.
These results are established under fairly general structural conditions on the classes of functions and the underlying models.
Inass Soukarieh
Motivated by a wide range of assemble-to-order systems and collaborative-economy applications, we introduce a stochastic matching model on hypergraphs and multigraphs, extending the model introduced by Mairesse and Moyal (2016). In this thesis, the stochastic matching model on general graph structures is defined as follows: we are given a compatibility structure S = (V, S), consisting of a set of nodes V representing the classes of items and a set of edges S specifying which classes of items can be matched. Items arrive at the system at random times, in a sequence (assumed i.i.d.) over the classes in V, and request to be matched according to their compatibility in S. Matchings can occur in groups of two or more (hypergraphical case), or in groups of two with the possibility of matching items of the same class (multigraphical case). Unmatched items are stored in the system and wait for a future compatible item; as soon as they are matched, they leave the system together. Upon arrival, an item may find several possible matches; which items then leave the system depends on a matching policy to be specified. We study the stability of the stochastic matching model on hypergraphs for different hypergraphical topologies, and then the stability of the stochastic matching model on multigraphs, using the maximal subgraph and minimal blow-up to delineate the stability region.
ano.nymous@ccsd.cnrs.fr.invalid (Youssef Rahmé), Youssef Rahmé
To obtain the highest confidence in the correctness of numerical simulation programs implementing the finite element method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. Sobolev spaces are the mathematical framework in which most weak formulations of partial differential equations are stated, and where solutions are sought. These functional spaces are built on integration and measure theory. Hence, this chapter of functional analysis is a mandatory theoretical cornerstone for the definition of the finite element method. The purpose of this document is to provide the formal proof community with very detailed pen-and-paper proofs of the main results from integration and measure theory.
ano.nymous@ccsd.cnrs.fr.invalid (François Clément), François Clément
Statistical models with multiple change points in the presence of censored data are used in many fields; however, the theoretical properties of M-estimators of such models have received relatively little attention. The main purpose of the present work is to investigate the asymptotic properties of M-estimators of the parameters of a multiple change-point model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments, in the setting of a known number of change points. Consistency of the M-estimators of the change points is established and the rate of convergence is determined. The asymptotic normality of the M-estimators of the parameters of the within-segment distributions is established. Since the approaches used in the complete-data models do not extend easily to multiple change-point models in the presence of censoring, we rely on some general results on Kaplan-Meier integrals. We investigate the performance of the methodology for small samples through a simulation study.
ano.nymous@ccsd.cnrs.fr.invalid (Salim Bouzebda), Salim Bouzebda
This work is part of a general study on the long-term safety of the geological repository of nuclear wastes. A diffusion equation with a moving boundary in one dimension is introduced and studied. The model describes some mechanisms involved in corrosion processes at the surface of carbon steel canisters in contact with a claystone formation. The main objective of the paper is to prove the existence of global weak solutions to the problem. For this, a semi-discrete in time minimizing movements scheme à la De Giorgi is introduced. First, the existence of solutions to the scheme is established and then, using a priori estimates, it is proved that as the time step goes to zero these solutions converge up to extraction towards a weak solution to the free boundary model.
ano.nymous@ccsd.cnrs.fr.invalid (Benoît Merlet), Benoît Merlet
Principal component analysis is a recognized, powerful, and practical method in statistics and data science. It can also be used in modeling as a dimensionality-reduction tool to obtain low-order models of complex multiphysics or engineering systems. Model-order reduction (MOR) methodologies are today an important topic for engineering design and analysis; design-space exploration and accelerated numerical optimization, for example, are made easier by the use of reduced-order models. In this chapter, we discuss the use of higher-order singular value decompositions (HOSVD) applied to spatiotemporal problems that are parameterized by a set of design variables or physical parameters. Here we consider data-driven reduced-order modeling based on a design of computer experiments: from high-dimensional computational results returned by high-fidelity solvers (e.g., finite element ones), the HOSVD allows us to determine spatial, temporal, and parametric principal components. The dynamics of the system can then be retrieved by identifying the low-order discrete dynamical system. As an application, we consider the dynamics of deformable capsules flowing in microchannels. The study of such fluid-structure interaction problems is motivated by the use of microcapsules as innovative drug-delivery carriers through blood vessels.
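As a rough illustration of the decomposition underlying this approach, and not the chapter's actual pipeline, a truncated HOSVD of a (space × time × parameter) data tensor can be sketched with NumPy; the function names and rank choices below are ours:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring axis `mode` first, flatten the others."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: one orthonormal factor per mode,
    plus the core tensor obtained by projecting onto those factors."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Multiply the core back by every factor (Tucker reconstruction)."""
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode)
    return t
```

For a rank-one tensor the rank-(1, 1, 1) truncation is exact; for real simulation data one would choose the per-mode ranks from the singular-value decay of each unfolding.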
ano.nymous@ccsd.cnrs.fr.invalid (Florian de Vuyst), Florian de Vuyst
We focus on the ill-posed data completion problem and its finite element approximation, when recast via the Kohn-Vogelius variational duplication artifice and the Steklov-Poincaré condensation operators. We aim to understand the useful hidden features of both the exact and the discrete problems. When discretized with finite elements of degree one, the discrete and exact problems behave in diametrically opposite ways: existence of the discrete solution is always guaranteed while its uniqueness may be lost, whereas the solution of the exact problem may fail to exist but is unique whenever it does. We show how the existence of so-called "weak spurious modes" of the exact variational formulation is a source of instability and the reason why existence may fail. For the discrete problem, we find that the cause of non-uniqueness is actually the occurrence of "spurious modes", and we track their fading effect asymptotically as the mesh size tends to zero. In order to restore uniqueness, we recall the discrete version of the Holmgren principle, introduced in [Azaïez et al., IPSE, 18, 2011], and we discuss the effect of the finite element mesh on uniqueness, using some basic graph theory.
ano.nymous@ccsd.cnrs.fr.invalid (F Ben Belgacem), F Ben Belgacem
[...]
ano.nymous@ccsd.cnrs.fr.invalid (Mustapha Mohammedi), Mustapha Mohammedi
In this dissertation we are concerned with semiparametric models. These models have had success and impact in mathematical statistics due to their excellent scientific utility and intriguing theoretical complexity. In the first part of the thesis, we consider the problem of estimating a parameter θ, in Banach spaces, maximizing some criterion function which depends on an unknown nuisance parameter h, possibly infinite-dimensional. We show that the m out of n bootstrap, in a general setting, is weakly consistent under conditions similar to those required for weak convergence of non-smooth M-estimators. In this framework, delicate mathematical derivations are required to cope with estimators of the nuisance parameters inside non-smooth criterion functions. We then investigate an exchangeably weighted bootstrap for function-valued estimators defined as a zero point of a function-valued random criterion function. The main ingredient is the use of a differential identity that applies when the random criterion function is linear in terms of the empirical measure. A large number of bootstrap resampling schemes emerge as special cases of our setting. Examples of applications from the literature are given to illustrate the generality and usefulness of our results. The second part of the thesis is devoted to statistical models with multiple change points. The main purpose of this part is to investigate the asymptotic properties of semiparametric M-estimators with non-smooth criterion functions of the parameters of a multiple change-point model, for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the semiparametric M-estimators of the change points is established and the rate of convergence is determined.
The asymptotic normality of the semiparametric M-estimators of the parameters of the within-segment distributions is established under quite general conditions. We finally extend our study to the censored data framework. We investigate the performance of our methodologies for small samples through simulation studies.
ano.nymous@ccsd.cnrs.fr.invalid (Anouar Abdeldjaoued Ferfache), Anouar Abdeldjaoued Ferfache
We study the existence and uniqueness of solutions to a nonlinear system of eikonal equations in one space dimension for any BV initial data. We present two results. In the first, we prove the existence of a discontinuous viscosity solution without any monotonicity condition on either the velocities or the initial data. In the second, we show the continuity of the constructed solution for continuous initial data and continuous velocities satisfying a certain monotonicity condition. We present an application to a system modeling the dynamics of dislocation densities.
ano.nymous@ccsd.cnrs.fr.invalid (Maryam Al Zohbi), Maryam Al Zohbi
In this thesis, we are mainly interested in the theoretical and numerical study of certain equations that describe the dynamics of dislocation densities. Dislocations are microscopic defects in materials, which move under the effect of an external stress. As a first work, we prove a global-in-time existence result for a discontinuous solution to a diagonal hyperbolic system, which is not necessarily strictly hyperbolic, in one space dimension. Then, in another work, we broaden our scope by proving a similar result for a nonlinear eikonal system, which is in fact a generalization of the hyperbolic system studied first. We also prove the existence and uniqueness of a continuous solution to the eikonal system. After that, we study this system numerically in a third work by proposing a finite difference scheme approximating it, for which we prove convergence to the continuous problem, strengthening our results with numerical simulations. In a different direction, we were drawn to the theory of differential contraction for evolution equations. By introducing a new distance, we construct a new family of contracting positive solutions to the evolutionary p-Laplacian equation.
ano.nymous@ccsd.cnrs.fr.invalid (Maryam Al Zohbi), Maryam Al Zohbi
In this thesis, we are interested in the theoretical and numerical analysis of the dynamics of dislocation densities. Dislocations are linear defects that move in crystals when the latter are subjected to external stresses. In general, the dynamics of dislocation densities is described by a system of transport equations in which the velocity fields depend nonlocally on the dislocation densities. Our work first focuses on the study of a one-dimensional (2 × 2) Hamilton-Jacobi system derived from a two-dimensional model proposed by Groma and Balogh in 1999. For this model, we prove a global existence and uniqueness result. In addition, we study this problem numerically, supplemented with nondecreasing initial data, by proposing an implicit finite difference scheme for which we prove convergence. Then, inspired by the work carried out on the dynamics of dislocation densities, we develop a more general theory yielding a similar existence and uniqueness result for one-dimensional eikonal-type systems. Considering nondecreasing initial data, we carry out a numerical study for this system. Under certain monotonicity conditions on the velocity, we propose an implicit finite difference scheme that computes the discrete solution and thereby simulates the dislocation dynamics through this model.
ano.nymous@ccsd.cnrs.fr.invalid (Aya Oussaily), Aya Oussaily
Adverse Outcome Pathways (AOPs) are increasingly used to support the integration of in vitro data in hazard assessment for chemicals. Quantitative AOPs (qAOPs) use mathematical models to describe the relationships between key events (KEs). In this paper, data obtained in three cell lines, LHUMES, HepG2 and RPTEC/TERT1, using similar experimental protocols, were used to calibrate a qAOP of mitochondrial toxicity for two chemicals, rotenone and deguelin. The objectives were to determine whether the same qAOP could be used for the three cell types, and to test chemical independence by cross-validation with a dataset obtained on eight other chemicals in LHUMES cells. Repeating the calibration approach for both chemicals in three cell lines highlighted various practical difficulties. Even when the same readouts of KEs are measured, the mathematical functions used to describe the key event relationships may not be the same. Cross-validation in LHUMES cells was attempted by estimating chemical-specific potency at the molecular initiating event and using the rest of the calibrated qAOP to predict downstream KEs: the toxicity of azoxystrobin, carboxine, mepronil and thifluzamide was underestimated. Selection of the most relevant readouts and accurate characterization of the molecular initiating event for cross-validation are critical when designing in vitro experiments targeted at calibrating qAOPs.
ano.nymous@ccsd.cnrs.fr.invalid (Cleo Tebby), Cleo Tebby
[...]
ano.nymous@ccsd.cnrs.fr.invalid (Florian de Vuyst), Florian de Vuyst
[...]
ano.nymous@ccsd.cnrs.fr.invalid (Tarik Fahlaoui), Tarik Fahlaoui
In this paper we analyse a finite volume scheme for a nonlocal version of the Shigesada-Kawazaki-Teramoto (SKT) cross-diffusion system. We prove the existence of solutions to the scheme, derive qualitative properties of the solutions, and prove its convergence. The proofs rely on a discrete entropy-dissipation inequality, discrete compactness arguments, and a novel adaptation of the so-called duality method at the discrete level. Finally, through numerical experiments, we investigate the influence of the nonlocality in the system on the convergence properties of the scheme, on its quality as an approximation of the local system, and on the development of diffusive instabilities.
ano.nymous@ccsd.cnrs.fr.invalid (Maxime Herda), Maxime Herda
In this paper, we investigate the asymptotic properties of Le Cam's one-step estimator for weak Fractionally AutoRegressive Integrated Moving-Average (FARIMA) models. For these models, the errors are uncorrelated but not necessarily independent, nor martingale differences. We show, under some regularity assumptions, that the one-step estimator is strongly consistent and asymptotically normal, with the same asymptotic variance as the least squares estimator. We show through simulations that the proposed estimator reduces computational time compared with the least squares estimator. An application to providing remotely computed indicators for time series is proposed.
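The one-step idea itself is classical and can be sketched on a toy model; the following illustration uses a Cauchy location family rather than a FARIMA model, so it only conveys the principle (a cheap consistent pilot estimate followed by a single scoring update on the log-likelihood):

```python
import numpy as np

def one_step_cauchy(x):
    """Le Cam one-step estimator for a Cauchy location parameter:
    start from a cheap consistent pilot estimate (the sample median)
    and apply a single scoring (Newton) step on the log-likelihood."""
    theta0 = np.median(x)
    d = x - theta0
    score = np.sum(2.0 * d / (1.0 + d**2))  # derivative of the Cauchy log-likelihood
    info = len(x) / 2.0                     # Fisher information: n * 1/2 per the Cauchy family
    return theta0 + score / info
```

The resulting estimator is asymptotically as efficient as the maximum likelihood estimator while requiring only one linear-algebra step after the pilot estimate, which is the computational advantage the paper exploits in the FARIMA setting.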
ano.nymous@ccsd.cnrs.fr.invalid (Samir Ben Hariz), Samir Ben Hariz
Compressible multi-material flows are omnipresent in scientific and industrial applications: from supernova explosions in space and high-speed flows in jet and rocket propulsion to underwater explosions and vapor explosions in post-accidental situations in nuclear reactors, their applications cover almost all aspects of classical fluid physics. In numerical simulations of these flows, interfaces play a crucial role. Poor numerical resolution of the interfaces makes it very difficult to account for physics such as material separation, the location of shocks and contact discontinuities, and the transfer of mass, momentum, and heat between different materials or phases. Given this importance, sharp interface capturing remains a very active area of research in computational physics. To address this problem, in this paper we focus on the Interface Capturing (IC) strategy and make use of a newly developed Diffuse Interface Method (DIM) called Multidimensional Limiting Process-Upper Bound (MLP-UB). Our analysis shows that this method is easy to implement, easily extendable to multiple space dimensions, can deal with any number of material interfaces, and produces sharp, shape-preserving interfaces, along with their accurate interaction with shocks and contact discontinuities. Numerical experiments show very good results even on rather coarse meshes.
ano.nymous@ccsd.cnrs.fr.invalid (Shambhavi Nandan), Shambhavi Nandan
In this paper, we consider the problem of identifying a single moving point source for a three-dimensional wave equation from boundary measurements. Precisely, we show that the knowledge of the field generated by the source at six different points of the boundary over a finite time interval is sufficient to determine uniquely its trajectory. We also derive a Lipschitz stability estimate for the inversion.
ano.nymous@ccsd.cnrs.fr.invalid (Hanin Al Jebawy), Hanin Al Jebawy
The purpose of this note is to provide an approximation for the generalized bootstrapped empirical process achieving the rate in [38]. The proof is based on the same arguments used in [36]. As a consequence, we establish an approximation of the bootstrapped kernel distribution estimator. Furthermore, our results are applied to two-sample testing procedures as well as to change-point problems. We end by establishing strong approximations of the bootstrapped empirical process when the parameters are estimated.
ano.nymous@ccsd.cnrs.fr.invalid (Salim Bouzebda), Salim Bouzebda
This thesis is divided into two parts. The first part is dedicated to the study of inverse problems for wave equations and their application to medical imaging. More precisely, we focus on the photo-acoustic tomography (PAT) and thermo-acoustic tomography (TAT) techniques. They are multi-wave imaging techniques based on the photo-acoustic effect, discovered in 1880 by Alexander Graham Bell. The inverse problem we are concerned with throughout this thesis is the recovery of small absorbers in a bounded domain Ω ⊂ R³. We provide a direct reconstruction method based on an algebraic algorithm developed in earlier work, without following the quantitative photo-acoustic tomography (qPAT) approach. This algorithm allows us to reconstruct the number of absorbers and their locations from a single Cauchy data set, in addition to some information on optical parameters, such as the conductivity and the absorption coefficient, that can serve as important diagnostic information for detecting tumors. The main difference between PAT and TAT lies in the type of optical pulse used: in PAT, high-frequency radiation is delivered into the biological tissue to be imaged, while in TAT low-frequency radiation is used, which leads to some differences in the physical and mathematical settings of the problem. In this dissertation we study both mathematical models and propose reconstruction algorithms for the two inverse problems. The second part of this thesis is devoted to the study of non-autonomous semilinear elliptic equations. We study the existence of radial solutions in R^n with nonzero limiting behavior.
ano.nymous@ccsd.cnrs.fr.invalid (Hanin Al Jebawy), Hanin Al Jebawy
In this work, we design and analyze a Hybrid High-Order (HHO) discretization method for incompressible flows of non-Newtonian fluids with power-like convective behaviour. We work under general assumptions on the viscosity and convection laws, which are associated with possibly different Sobolev exponents r ∈ (1, ∞) and s ∈ (1, ∞). After providing a novel weak formulation of the continuous problem, we study its well-posedness, highlighting how a subtle interplay between the exponents r and s determines the existence and uniqueness of a solution. We next design an HHO scheme based on this weak formulation and perform a comprehensive stability and convergence analysis, including convergence for general data and error estimates for shear-thinning fluids and small data. The HHO scheme is validated on a complete panel of model problems.
ano.nymous@ccsd.cnrs.fr.invalid (Daniel Castanon Quiroz), Daniel Castanon Quiroz
Integration, just as much as differentiation, is a fundamental calculus tool that is widely used in many scientific domains. Formalizing the mathematical concept of integration and the associated results in a formal proof assistant helps in providing the highest confidence on the correctness of numerical programs involving the use of integration, directly or indirectly. By its capability to extend the (Riemann) integral to a wide class of irregular functions, and to functions defined on more general spaces than the real line, the Lebesgue integral is perfectly suited for use in mathematical fields such as probability theory, numerical mathematics, and real analysis. In this article, we present the Coq formalization of $\sigma$-algebras, measures, simple functions, and integration of nonnegative measurable functions, up to the full formal proofs of the Beppo Levi (monotone convergence) theorem and Fatou's lemma. More than a plain formalization of the known literature, we present several design choices made to balance the harmony between mathematical readability and usability of Coq theorems. These results are a first milestone toward the formalization of $L^p$~spaces such as Banach spaces.
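For reference, the Beppo Levi (monotone convergence) theorem formally proved there reads, in standard notation: if $(f_n)$ is a nondecreasing sequence of nonnegative measurable functions on a measure space $(X, \Sigma, \mu)$, then

```latex
\int_X \lim_{n\to\infty} f_n \,\mathrm{d}\mu \;=\; \lim_{n\to\infty} \int_X f_n \,\mathrm{d}\mu .
```

Fatou's lemma, the other milestone mentioned above, replaces the limit on the left by a limit inferior and the equality by an inequality.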
ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo
[...]
ano.nymous@ccsd.cnrs.fr.invalid (Frédérique Le Louër), Frédérique Le Louër
[...]
ano.nymous@ccsd.cnrs.fr.invalid (Elias Zgheib), Elias Zgheib
Recent works in the Boundary Element Method (BEM) community have been devoted to the derivation of fast techniques to perform the matrix-vector product needed in iterative solvers. Fast BEMs are now very mature. However, it has been shown that the number of iterations can significantly hinder the overall efficiency of fast BEMs, so the derivation of robust preconditioners is now unavoidable to increase the size of the problems that can be considered. Analytical preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of discretization. The main contribution of this paper is to propose new analytical preconditioners to treat Neumann exterior scattering problems in 2D and 3D elasticity. These preconditioners are local approximations of the adjoint Neumann-to-Dirichlet map. We propose three approximations of different orders. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). An analytical spectral study confirms the expected behavior of the preconditioners, i.e., a better eigenvalue clustering, especially in the elliptic part, in contrast to the standard first-kind CFIE. We provide various 2D numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be independent of the density of discretization points per wavelength, which is not the case for the standard CFIE. In addition, it is less sensitive to the frequency.
ano.nymous@ccsd.cnrs.fr.invalid (Stéphanie Chaillat), Stéphanie Chaillat
An innovative data-driven model-order reduction technique is proposed to model dilute micrometric or nanometric suspensions of microcapsules, i.e., microdrops protected by a thin hyperelastic membrane, which are used in healthcare as innovative drug vehicles. We consider a microcapsule flowing in a similar-size microfluidic channel and systematically vary the governing parameters, namely the capillary number (the ratio of viscous to elastic forces) and the confinement ratio (the ratio of capsule to tube size). The resulting space-time-parameter problem is solved using two global POD reduced bases, determined in the offline stage for the space and parameter variables, respectively. A suitable low-order spatial reduced basis is then computed in the online stage for any new parameter instance. The time evolution of the capsule dynamics is obtained by identifying the nonlinear low-order manifold of the reduced variables; for that, a point cloud of reduced data is computed and a diffuse approximation method is used. Numerical comparisons between the full-order fluid-structure interaction model and the reduced-order one confirm both the accuracy and the stability of the reduction technique over the whole admissible parameter domain. We believe that such an approach can be applied to a broad range of coupled problems, especially those involving quasistatic models of structural mechanics.
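A minimal sketch of the snapshot-POD ingredient (the offline basis extraction only, not the full reduced-order pipeline described above); the function name and the energy-based truncation rule are ours:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-8):
    """Snapshot POD via thin SVD (columns of `snapshots` are states).
    Keep the leading modes until the captured singular-value energy
    reaches 1 - tol."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return u[:, :r], s
```

The reduced coordinates of a state `x` are then `u.T @ x`, and lifting back to the full space is `u @ a`; the online stage described above builds such bases per parameter instance.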
ano.nymous@ccsd.cnrs.fr.invalid (Toufik Boubehziz), Toufik Boubehziz
Concise formulae are given for the cumulant matrices of a real-valued (zero-mean) random vector up to order 6. In addition to usual matrix operations, they involve only the Kronecker product, the vec operator, and the commutation matrix. Orders 5 and 6 are provided here for the first time; the same method can be applied to compute higher orders. An immediate consequence of these formulae is to yield (1) upper bounds on the rank of the cumulant matrices and (2) the expression of the sixth-order moment matrix of a Gaussian vector. Due to their conciseness, the proposed formulae also have a computational advantage compared with the repeated use of the Leonov-Shiryaev formula.
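For orientation, the low-order objects involved can be estimated directly from data; the sketch below computes the second-order cumulant matrix and the third-order cumulant matrix E[(x ⊗ x) xᵀ] of zero-mean samples (the paper's contribution concerns the much more involved orders 5 and 6, which are not reproduced here):

```python
import numpy as np

def cum2(x):
    """Second-order cumulant matrix (the covariance) from zero-mean samples x of shape (n, d)."""
    return x.T @ x / len(x)

def cum3(x):
    """Third-order cumulant matrix E[(x kron x) x^T] of a zero-mean vector,
    estimated from samples; for zero-mean data the third-order cumulant
    coincides with the third-order moment. Result has shape (d*d, d)."""
    n, d = x.shape
    kron = np.einsum('ni,nj->nij', x, x).reshape(n, d * d)  # row-wise x kron x
    return kron.T @ x / n
```

Since all cumulants of order ≥ 3 of a Gaussian vector vanish, applying `cum3` to Gaussian samples gives a quick sanity check of such estimators.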
ano.nymous@ccsd.cnrs.fr.invalid (Hanany Ould-Baba), Hanany Ould-Baba
For a system, a priori identifiability is a theoretical property depending only on the model, and it guarantees that the model parameters can be uniquely determined from observations. This paper provides a survey of the various and numerous definitions of a priori identifiability given in the literature, for both deterministic continuous- and discrete-time models. A classification is made by distinguishing analytical and algebraic definitions, as well as local and global ones. Moreover, this paper provides an overview of the distinct methods to test parameter identifiability. They are classified into so-called output equality approaches, local state isomorphism approaches, and differential algebra approaches. A few examples are detailed to illustrate the methods and complete this survey.
ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett-Collin), Floriane Anstett-Collin
CRF19 is a recombinant form of HIV-1 subtypes D, A1 and G, which was first sampled in Cuba in 1999 but was already present there in the 1980s. CRF19 has been reported almost uniquely in Cuba, where it accounts for ∼25% of new HIV-positive patients and causes rapid progression to AIDS (∼3 years). We analyzed a large data set comprising ∼350 pol and env sequences sampled in Cuba over the last 15 years and ∼350 from the Los Alamos database. This data set contained both CRF19 (∼315) and A1, D and G sequences. We performed and combined analyses for the three A1, G and D regions, using fast maximum likelihood approaches, including: (1) phylogeny reconstruction, (2) spatio-temporal analysis of the virus spread, and ancestral character reconstruction for (3) transmission mode and (4) drug resistance mutations (DRMs). We verified these results with a Bayesian approach. This allowed us to acquire new insights into the origin and transmission patterns of CRF19. We showed that CRF19 recombined between 1966 and 1977, most likely in the Cuban community stationed in the Congo region. We further investigated the spread of CRF19 at the level of Cuban provinces, and discovered that the epidemic started in the 1970s, most probably in Villa Clara; that it was at first driven by heterosexual transmission; and that it then quickly spread in the 1980s within the "men having sex with men" (MSM) community, with multiple transmissions back to heterosexuals. The analysis of the transmission patterns of common DRMs found very few resistance transmission clusters. Our results show a very early introduction of CRF19 in Cuba, which could explain its local epidemiological success. Ignited by a major founder event, the epidemic then followed a pattern similar to that of other subtypes and CRFs in Cuba. The reason for the short time to AIDS remains to be understood and requires specific surveillance, in Cuba and elsewhere.
ano.nymous@ccsd.cnrs.fr.invalid (Anna Zhukova), Anna Zhukova
This thesis is dedicated to the estimation of two statistical models: the simultaneous regression quantiles model and the blind deconvolution model. It therefore consists of two parts. In the first part, we are interested in estimating several quantiles simultaneously in a regression context via the Bayesian approach. Assuming that the error term has an asymmetric Laplace distribution and using the relation between two distinct quantiles of this distribution, we propose a simple, fully Bayesian method that satisfies the noncrossing property of quantiles. For implementation, we use a Metropolis-Hastings-within-Gibbs algorithm to sample the unknown parameters from their full conditional distributions. The performance and competitiveness of the method relative to other alternatives are shown in simulated examples. In the second part, we focus on recovering both the inverse filter and the noise level of a noisy blind deconvolution model in a parametric setting. After characterizing both the true noise level and the inverse filter, we provide a new estimation procedure that is simpler to implement than other existing methods. We also consider the estimation of the unknown discrete distribution of the input signal. We derive strong consistency and asymptotic normality for all our estimates. Including a comparison with another method, we perform a simulation study that empirically demonstrates the computational performance of our estimation procedures.
ano.nymous@ccsd.cnrs.fr.invalid (Josephine Merhi Bleik), Josephine Merhi Bleik
Internet of Things (IoT) applications using sensors and actuators raise new privacy-related threats, such as the tracking and profiling of drivers and vehicles. These threats can be addressed by developing adaptive and context-aware privacy protection solutions that cope with environmental constraints (memory, energy, communication channel, etc.), which limit the applicability of cryptographic schemes. This paper proposes a privacy-preserving solution in the ITS context, relying on a game-theoretic model between two actors (data holder and data requester) that weighs an incentive motivation against a privacy concession or an active attack. We describe the game elements (actors, roles, states, strategies, and transitions) and find an equilibrium point reaching a compromise between privacy concessions and incentive motivation. Finally, we present numerical results to analyze and evaluate the game-theoretic formulation.
ano.nymous@ccsd.cnrs.fr.invalid (Arbia Riahi Sfar), Arbia Riahi Sfar
We consider a two-component mixture model in which one component distribution is known while the mixing proportion and the other component distribution are unknown. Such models were first introduced in biology to study differences in expression between genes. The various estimation methods proposed so far have all assumed that the unknown distribution belongs to a parametric family. In this paper, we show how this assumption can be relaxed. First, we note that the above model is generally not identifiable, but we show that under moment and symmetry conditions some ‘almost everywhere’ identifiability results can be obtained. Where such identifiability conditions are fulfilled, we propose an estimation method for the unknown parameters that is shown to be strongly consistent under mild conditions. We discuss applications of our method to microarray data analysis and to the training data problem. We compare our method to the parametric approach using simulated data and, finally, we apply our method to real data from microarray experiments.
ano.nymous@ccsd.cnrs.fr.invalid (Laurent Bordes), Laurent Bordes
In the context of the decommissioning of the Fukushima Daiichi reactors, several projects have been funded by the Japanese government to prepare the corium retrieval operations. Within this framework, a joint study carried out by ONET Technologies and the laboratories of CEA and IRSN demonstrated the feasibility of the laser cutting technique and estimated the aerosol source term thus generated. Two corium simulants, synthesized and characterized by CEA-Cadarache, were subjected to laser cutting tests in air and under water in the DELIA facility of CEA Saclay, and the emitted aerosols were characterized by IRSN. The characterization of the emitted particles in terms of concentration and size distribution provides information for predicting, in particular, particle transport and deposition, but knowledge of the chemical composition by size class is necessary for better management of occupational and environmental risks. This article presents the results of the characterization of the chemical composition of the aerosol from a corium simulant, under laser cutting in air, together with the associated size distribution.
ano.nymous@ccsd.cnrs.fr.invalid (Emmanuel Porcheron), Emmanuel Porcheron
We consider in this paper a model parabolic variational inequality. This problem is discretized with conforming Lagrange finite elements of order $p \geq 1$ in space and with the backward Euler scheme in time. The nonlinearity coming from the complementarity constraints is treated with any semismooth Newton algorithm and we take into account in our analysis an arbitrary iterative algebraic solver. In the case $p = 1$, when the system of nonlinear algebraic equations is solved exactly, we derive an a posteriori error estimate on both the energy error norm and a norm approximating the time derivative error. When $p \geq 1$, we provide a fully computable and guaranteed a posteriori estimate in the energy error norm which is valid at each step of the linearization and algebraic solvers. Our estimate, based on equilibrated flux reconstructions, also distinguishes the discretization, linearization, and algebraic error components. We build an adaptive inexact semismooth Newton algorithm based on stopping the iterations of both solvers when the estimators of the corresponding error components do not affect significantly the overall estimate. Numerical experiments are performed with the semismooth Newton-min algorithm and the semismooth Newton-Fischer-Burmeister algorithm in combination with the GMRES iterative algebraic solver to illustrate the strengths of our approach.
ano.nymous@ccsd.cnrs.fr.invalid (Jad Dabaghi), Jad Dabaghi
We propose an adaptive inexact version of a class of semismooth Newton methods that is aware of the continuous (variational) level. As a model problem, we study the system of variational inequalities describing the contact between two membranes. This problem is discretized with conforming finite elements of order $p \geq 1$, yielding a nonlinear algebraic system of variational inequalities. We consider any iterative semismooth linearization algorithm, like the Newton-min or the Newton--Fischer--Burmeister algorithm, which we complement by any iterative linear algebraic solver. We then derive an a posteriori estimate on the error between the exact solution at the continuous level and the approximate solution which is valid at any step of the linearization and algebraic resolutions. Our estimate is based on flux reconstructions in discrete subspaces of $\mathbf{H}(\mathrm{div}, \Omega)$ and on potential reconstructions in discrete subspaces of $H^1(\Omega)$ satisfying the constraints. It distinguishes the discretization, linearization, and algebraic components of the error. Consequently, we can formulate adaptive stopping criteria for both solvers, giving rise to an adaptive version of the considered inexact semismooth Newton algorithm. Under these criteria, the efficiency of the leading estimates is also established, meaning that we prove them equivalent to the error up to a generic constant. Numerical experiments for the Newton-min algorithm in combination with the GMRES algebraic solver confirm the efficiency of the developed adaptive method.
ano.nymous@ccsd.cnrs.fr.invalid (Jad Dabaghi), Jad Dabaghi
This thesis deals with learning an accurate and stable reduced-order model from data corresponding to the solution of a partial differential equation (PDE) and generated by a high-fidelity (HF) solver. To this end, we use the Dynamic Mode Decomposition (DMD) method together with the Proper Orthogonal Decomposition (POD) reduction method. The learned reduced model is easily interpretable, and an a posteriori spectral analysis of this model makes it possible to detect anomalies during the learning phase. Extensions to the case of PDE-ODE coupling, as well as to PDEs of second order in time, are presented. Learning a reduced model for a switched control dynamical system, where the control rule is learned with an artificial neural network (ANN), is also addressed. One drawback of POD reduction is that its low-dimensional representation is difficult to interpret. We therefore propose to use the Empirical Interpolation Method (EIM). The low-dimensional representation then becomes intelligible, consisting of a restriction of the solution to selected points. This approach is then extended to parameter-dependent PDEs, where the Kernel Ridge Regression (KRR) algorithm allows us to learn the solution manifold; we thus present the learning of a parametrized reduced model. Extensions to noisy data and to nonlinear evolution PDEs are presented as perspectives.
ano.nymous@ccsd.cnrs.fr.invalid (Tarik Fahlaoui), Tarik Fahlaoui
Let $(S_D(\Omega))$ be the Stokes operator defined in a bounded domain $\Omega$ of $\mathbb{R}^3$ with Dirichlet boundary conditions. We prove that, generically with respect to the domain $\Omega$ with $C^5$ boundary, the spectrum of $(S_D(\Omega))$ satisfies a non-resonant property introduced by C. Foias and J.C. Saut in [17] to linearize the Navier-Stokes system in a bounded domain $\Omega$ of $\mathbb{R}^3$ with Dirichlet boundary conditions. For that purpose, we first prove that, generically with respect to the domain $\Omega$ with $C^5$ boundary, all the eigenvalues of $(S_D(\Omega))$ are simple. That answers positively a question raised by J.H. Ortega and E. Zuazua in [27, Section 6]. The proofs of these results follow a standard strategy based on a contradiction argument requiring shape differentiation. One needs to shape differentiate at least twice the initial problem in the direction of carefully chosen domain variations. The main step of the contradiction argument amounts to studying the evaluation of Dirichlet-to-Neumann operators associated with these domain variations. (C) 2014 Elsevier Masson SAS. All rights reserved.
ano.nymous@ccsd.cnrs.fr.invalid (Yacine Chitour), Yacine Chitour
As part of a multi-year program, test-pit survey campaigns were carried out on both sides of the Petit-Saint-Bernard pass (2188 m, western Alps), between 750 and 3000 m altitude. The working method sets aside surface prospection in favor of numerous hand-dug test pits, located in selected topographic contexts and dug down to the base of the Holocene fills. The results document, over the long term, the evolution of pedo-sedimentary dynamics and the human frequentation of the different altitudinal belts. The significance of the archaeological data collected is discussed with respect to the state of knowledge in a comparison zone comprising the neighboring valleys of the western Alps, to existing settlement models, and to the taphonomic indications provided by the pedo-sedimentary study. A program of complementary analyses intended to clarify the context, the taphonomy and the functional status…
ano.nymous@ccsd.cnrs.fr.invalid (Pierre-Jérôme Rey), Pierre-Jérôme Rey
In this thesis, we are interested in the theoretical and numerical analysis of the dynamics of dislocation densities, where dislocations are crystalline defects appearing at the microscopic scale in metallic alloys. In particular, we study the Groma-Czikor-Zaiser (GCZ) model and the Groma-Balog (GB) model. The first is a system of parabolic-type equations, whereas the second is a system of nonlinear Hamilton-Jacobi equations. First, we prove an existence and uniqueness result for a regular solution of the GCZ model, using a comparison principle and a fixed point argument. Next, we establish a global-in-time existence result for the GB model, based on the notion of discontinuous viscosity solutions, a new estimate of the total variation of the solution, and the finite speed of propagation of the governing equations. This result is also extended to general systems of Hamilton-Jacobi equations. Finally, we propose a semi-explicit numerical scheme for the discretization of the GB model. Building on the theoretical study, we prove that the discrete solution converges toward the continuous solution, and we establish an error estimate between the continuous and numerical solutions. Simulations showing the robustness of the numerical scheme are also presented.
ano.nymous@ccsd.cnrs.fr.invalid (Vivian Rizik), Vivian Rizik
This paper investigates an identifiability method for a class of systems of reaction-diffusion equations in the $L^2$ framework. This class is composed of a master system of ordinary differential equations coupled with a slave system of diffusion equations. It can model two populations, the second one being diffusive, contrary to the first one. The identifiability method is based on an elimination procedure providing relations, called input-output polynomials, linking the unknown parameters, the inputs, and the outputs of the model. These polynomials can also be used to estimate the parameters, as shown in this article. To the best of our knowledge, such an identifiability method and parameter estimation procedure have not yet been explored for such a system in the $L^2$ framework. This work is applied to an epidemiological model describing the propagation of chikungunya in a local population.
ano.nymous@ccsd.cnrs.fr.invalid (Nathalie Verdière), Nathalie Verdière
Identifying a word (pattern) in a long sequence of letters is not an easy task. To achieve this objective, several models have been proposed under the assumption that the sequence of letters is described by a Markov chain. The Markovian hypothesis imposes restrictions on the distribution of the sojourn time in a state, which is geometric for a discrete process. This is the main drawback when applying Markov chains to real problems. By contrast, semi-Markov processes generalize Markov processes: the sojourn time in a state can be governed by any distribution function. The goal of this article is to compute the first hitting time (position) of a word (pattern) in a semi-Markov sequence. To achieve this objective, we use the auxiliary prefix and backward chains. As an example application, the model is tested on a bacteriophage DNA sequence, searching for the recognition site of the enzyme SmaI. We compute the probability that a word occurs for the first time after n nucleotides in a DNA sequence. The corresponding probability distribution, the mean waiting position, the variance, and the rate of occurrence of the word are obtained.
ano.nymous@ccsd.cnrs.fr.invalid (Brenda Ivette Garcia-Maya), Brenda Ivette Garcia-Maya
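The first-occurrence computation lends itself to a compact dynamic-programming illustration. The sketch below is a simplified analogue of the setting described above, under our own assumptions: an i.i.d. uniform nucleotide sequence (a memoryless special case, not a general semi-Markov chain), with the distribution of the first occurrence position of the SmaI site CCCGGG obtained via a prefix (KMP-style) automaton.

```python
# Hedged sketch: first-occurrence distribution of a pattern in an i.i.d.
# uniform letter sequence (memoryless special case, not semi-Markov).
import numpy as np

PATTERN = "CCCGGG"  # SmaI recognition site
ALPHABET = "ACGT"
P_LETTER = {c: 0.25 for c in ALPHABET}  # uniform letter law (assumption)

def prefix_automaton(pattern):
    """delta[s][c]: number of pattern characters matched after reading
    letter c while s characters were already matched (KMP-style)."""
    m = len(pattern)
    delta = [{} for _ in range(m)]
    for s in range(m):
        for c in ALPHABET:
            if c == pattern[s]:
                delta[s][c] = s + 1
            else:
                t = pattern[:s] + c
                k = s  # a proper border of t cannot exceed s here
                while k > 0 and t[-k:] != pattern[:k]:
                    k -= 1
                delta[s][c] = k
    return delta

def first_occurrence_distribution(pattern, n_max):
    m = len(pattern)
    delta = prefix_automaton(pattern)
    alive = np.zeros(m)        # alive[s]: P(state s, pattern not yet seen)
    alive[0] = 1.0
    hit = np.zeros(n_max + 1)  # hit[n]: P(pattern first completed at position n)
    for n in range(1, n_max + 1):
        new = np.zeros(m)
        for s in range(m):
            if alive[s] == 0.0:
                continue
            for c, p in P_LETTER.items():
                nxt = delta[s][c]
                if nxt == m:
                    hit[n] += alive[s] * p
                else:
                    new[nxt] += alive[s] * p
        alive = new
    return hit

hit = first_occurrence_distribution(PATTERN, 50000)
mean = float((np.arange(len(hit)) * hit).sum())  # mean waiting position
```

For this non-self-overlapping pattern the exact mean waiting position is $4^6 = 4096$, which the truncated distribution reproduces closely.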
We consider Bienaymé-Galton-Watson and continuous-time Markov branching processes and prove diffusion approximation results in the near-critical case, in fixed and random environments. On the one hand, in the fixed environment case, we give new proofs and derive necessary and sufficient conditions for diffusion approximation, recovering the Feller-Jiřina and Jagers theorems. On the other hand, we propose a continuous-time Markov branching process with random environments and obtain diffusion approximation results. An averaging result is also presented. The proofs here are new: weak convergence in the Skorokhod space is proved via a singular perturbation technique for the convergence of generators, together with tightness of the distributions of the considered families of stochastic processes.
ano.nymous@ccsd.cnrs.fr.invalid (Nikolaos Limnios), Nikolaos Limnios
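The near-critical scaling can be checked numerically in a toy setting. The sketch below is illustrative only (the Poisson offspring law, parameters, and scaling are our assumptions, not taken from the paper): a Bienaymé-Galton-Watson process with offspring mean $1 + \alpha/n$ started from $Z_0 = n$, so that the scaled value $Z_{\lfloor nt \rfloor}/n$ has mean close to $e^{\alpha t}$, as for the limiting Feller diffusion.

```python
# Hedged sketch: Monte Carlo check of the near-critical mean growth
# for a Bienaymé-Galton-Watson process (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(7)
n, alpha, t, reps = 400, 1.0, 1.0, 2000
m = 1.0 + alpha / n                  # near-critical offspring mean

# Poisson(m) offspring per individual: the total offspring of z individuals
# is Poisson(m * z), which lets us vectorize over the replications.
z = np.full(reps, n, dtype=np.int64)
for _ in range(int(n * t)):
    z = rng.poisson(m * z)
est = float(np.mean(z / n))          # Monte Carlo mean of Z_[nt]/n
```

The exact mean is $(1+\alpha/n)^{nt} \approx e^{\alpha t} \approx 2.718$ here, and the Monte Carlo average should land close to it.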
Many biological networks include cyclic structures. In such cases, Bayesian networks (BNs), which must be acyclic, are not sound models for structure learning. Dynamic BNs can be used but require relatively large time series data. We discuss an alternative model that embeds cyclic structures within acyclic BNs, allowing us to still use the factorization property and informative priors on network structure. We present an implementation in the linear Gaussian case, where cyclic structures are treated as multivariate nodes. We use a Markov chain Monte Carlo algorithm for inference, allowing us to work with the posterior distribution over the space of graphs.
ano.nymous@ccsd.cnrs.fr.invalid (Witold Wiecek), Witold Wiecek
This paper deals with optimal input design for parameter estimation in a bounded-error context. We consider uncertain controlled nonlinear dynamical models whose input can be parametrized by a finite number of parameters. The main contribution of this paper concerns criteria for obtaining optimal inputs in this context. Two input design criteria, both involving sensitivity functions, are proposed and analysed. The first criterion requires the inversion of the Gram matrix of the sensitivity functions. The second one does not require this inversion and is therefore applied to parameter estimation for a model taken from the aeronautical domain. The estimation results obtained using an optimal input are compared with those obtained with an input optimized in a more classical context (Gaussian measurement noise and parameters a priori known to belong to some boxes). These results highlight the potential of optimal input design in a bounded-error context.
ano.nymous@ccsd.cnrs.fr.invalid (Carine Jauberthie), Carine Jauberthie
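A minimal numerical sketch of the ingredients behind the first criterion, under our own toy assumptions (a first-order model $\dot x = -ax + bu$, a sinusoidal input $u(t) = \sin(\omega t)$ parametrized by the single design parameter $\omega$, and a D-optimality-style score; none of this is the aeronautical model of the paper): the sensitivity functions are integrated alongside the state and assembled into their Gram matrix.

```python
# Hedged sketch: scoring candidate inputs by the Gram matrix of
# output sensitivity functions (toy first-order model, our assumption).
import numpy as np

def sensitivity_gram(w, a=1.0, b=2.0, T=10.0, dt=0.01):
    """Gram matrix of the sensitivities (dx/da, dx/db) for input sin(w t)."""
    x = sa = sb = 0.0
    G = np.zeros((2, 2))
    for k in range(int(T / dt)):
        u = np.sin(w * k * dt)
        # forward Euler for the state and its parameter sensitivities
        x, sa, sb = (x + dt * (-a * x + b * u),
                     sa + dt * (-a * sa - x),   # d/dt(dx/da) = -a*sa - x
                     sb + dt * (-a * sb + u))   # d/dt(dx/db) = -a*sb + u
        s = np.array([sa, sb])
        G += np.outer(s, s) * dt               # accumulate the Gram matrix
    return G

# D-optimality-style scan: pick the frequency maximizing det(G),
# i.e. the input making (a, b) jointly most identifiable.
grid = np.linspace(0.1, 5.0, 30)
best_w = max(grid, key=lambda w: np.linalg.det(sensitivity_gram(w)))
```

A singular (non-invertible) Gram matrix would signal that the chosen input cannot distinguish the two parameters, which is what the first criterion penalizes.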
One of the important challenges for the decommissioning of the damaged reactors of the Fukushima Daiichi Nuclear Power Plant is the safe retrieval of the fuel debris or corium. It is especially important to investigate the cutting conditions both in air and underwater at different water levels. Among cutting techniques, the laser technique is well adapted to a material such as corium, which has an irregular shape and heterogeneous composition. A French consortium (ONET Technologies, CEA and IRSN) is being subsidized by the Japanese government to carry out R&D related to the laser cutting of Fukushima Daiichi fuel debris and to dust collection technology. Debris simulants have been manufactured in the PLINIUS platform to represent Molten Core Concrete Interaction as estimated from Fukushima Daiichi calculations. In these simulants, uranium is replaced by hafnium and the major fission products are replaced by their natural isotopes. During laser cutting experiments in the DELIA facility, aerosols were collected with filters and impactors and then analyzed. Both chemical analyses (dissolution + ICP-MS and ICP-AES) and microscopic analyses (SEM-EDS) will be presented and discussed. These data provide insights into the expected dust releases during cutting and can be converted into radioactivity estimates. They have also been successfully compared to thermodynamic calculations with the NUCLEA database.
ano.nymous@ccsd.cnrs.fr.invalid (Christophe Journeau), Christophe Journeau
In this work we present a novel discrete fracture model for single-phase Darcy flow in porous media with fractures of co-dimension one, which introduces an additional unknown at the fracture interface. Inspired by the fictitious domain method, this Lagrange multiplier couples the fracture and matrix domains and represents a local exchange of fluid. The multipliers naturally impose the equality of the pressures at the fracture interface. The model is thus appropriate for domains with fractures of permeability higher than that of the surrounding bulk domain. In particular, the novel approach allows for independent, regular meshing of the fracture and matrix domains and therefore avoids the generation of small elements. We show existence and uniqueness of the weak solution of the continuous primal formulation. Moreover, we discuss the discrete inf-sup condition of two different finite element formulations. Several numerical examples verify the accuracy and convergence of the proposed method.
ano.nymous@ccsd.cnrs.fr.invalid (Markus Köppel), Markus Köppel
In this work we introduce a stabilized numerical method for a multi-dimensional discrete fracture model (DFM) for single-phase Darcy flow in fractured porous media. In the model, introduced in an earlier work, flow in the (n − 1)-dimensional fracture domain is coupled with that in the n-dimensional bulk or matrix domain by the use of Lagrange multipliers. The model thus permits a finite element discretization in which the meshes in the fracture and matrix domains are independent, so that irregular meshing, and in particular the generation of small elements, can be avoided. The numerical formulation is a saddle-point problem based on a primal variational formulation for flow in the matrix domain and in the fracture system. In this paper we add to this formulation a consistent stabilizing term that penalizes discontinuities in the Lagrange multipliers. For this penalized scheme we show stability and prove convergence. With numerical experiments we analyze the performance of the method for various choices of the penalization parameter and compare it with other numerical DFMs.
ano.nymous@ccsd.cnrs.fr.invalid (Markus Köppel), Markus Köppel
The purpose of this work is a finite element approximation of the heat diffusion problem in composite media with nonlinear contact resistance at the interfaces. As explained in [Journal of Scientific Computing, {\bf 63}, 478-501 (2015)], hybrid dual formulations are well suited to complicated composite geometries and provide tractable approaches to variationally express the jumps of the temperature. The finite element spaces are standard, and interface contributions are added to the variational problem to account for the contact resistance; this is an important advantage for computing code developers. We undertake the analysis of the nonlinear heat problem for a large range of contact resistances and investigate its discretization by hybrid dual finite element methods. Numerical experiments are presented at the end to support the theoretical results.
ano.nymous@ccsd.cnrs.fr.invalid (F Ben Belgacem), F Ben Belgacem
We introduce a new algorithm of proper generalized decomposition (PGD) for parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation---in the mean parametric norm associated to the elliptic operator---of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the proper orthogonal decomposition (POD) subspaces, except that in our case the norm is parameter-dependent. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step, and we prove that the partial sums converge to the continuous solution in the mean parametric elliptic norm. We show that the standard PGD for the considered parametric problem is strongly related to the deflation algorithm introduced in this paper. This opens the possibility of computing the PGD expansion by directly solving the optimization problems that yield the optimal subspaces.
ano.nymous@ccsd.cnrs.fr.invalid (M. Azaïez), M. Azaïez
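The deflation idea admits a simple discrete analogue. The sketch below works in the plain Frobenius norm on a random snapshot matrix (our assumption; the construction above uses a parameter-dependent elliptic norm in the continuous setting): each step extracts the best rank-one approximation of the current residual, and the partial sums converge to the full matrix.

```python
# Hedged sketch: greedy rank-one deflation in the Frobenius norm
# (discrete stand-in for the parametric-norm deflation of the paper).
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((40, 8)) @ rng.standard_normal((8, 25))  # rank-8 "snapshots"

def deflate(A, n_terms):
    terms, R = [], A.copy()
    for _ in range(n_terms):
        # best rank-one approximation of the residual: leading singular triplet
        u, sv, vt = np.linalg.svd(R, full_matrices=False)
        terms.append(sv[0] * np.outer(u[:, 0], vt[0]))
        R = R - terms[-1]          # deflation step: remove the captured component
    return terms, R

terms, R = deflate(S, 8)
residual_norms = [np.linalg.norm(S - sum(terms[:k + 1])) for k in range(len(terms))]
```

Since the snapshot matrix has rank 8, eight deflation steps drive the residual to numerical zero; the decreasing residual norms mirror the convergence of the partial sums.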
Background and Objective: This paper deals with improving parameter estimation in terms of precision and computational time for dynamical models in a bounded-error context. Methods: To improve parameter estimation, an optimal initial state design is proposed, combined with a contractor. This contractor is based on a volumetric criterion, and an original condition for initializing it is given. Based on a sensitivity analysis, our optimal initial state design methodology consists in searching for the minimum value of a proposed criterion for the parameters of interest. In our framework, the uncertainty (on measurement noise and parameters) is assumed unknown but to belong to known bounded intervals; guaranteed state and sensitivity estimation are therefore considered. An elementary effect analysis on the number of sampling times is also implemented to achieve fast and guaranteed parameter estimation. Results: The whole procedure is applied to a pharmacokinetics model and simulation results are given. Conclusions: The improvement of parameter estimation in terms of computational time and precision for the case study highlights the potential of the proposed methodology.
ano.nymous@ccsd.cnrs.fr.invalid (Qiaochu Li), Qiaochu Li
In toxicology, an Adverse Outcome Pathway (AOP) is a conceptual framework that qualitatively describes existing knowledge about the links between two anchor points: a Molecular Initiating Event (MIE) and an Adverse Outcome (AO) at a level of biological organization relevant for risk assessment. The quantitative version of an AOP, the qAOP, promises to be a powerful tool for risk assessment, notably thanks to its predictive capability. This article presents an original method for modeling qAOPs with dynamic Bayesian networks.
ano.nymous@ccsd.cnrs.fr.invalid (Frédéric Y. Bois), Frédéric Y. Bois
In this chapter, we present the empirical estimation of some reliability measures, such as the rate of occurrence of failures and the steady-state availability, for a discrete-time semi-Markov system. The probability of first occurred failure is introduced and estimated. A numerical application is given to illustrate the strong consistency of these estimators.
ano.nymous@ccsd.cnrs.fr.invalid (Stylianos Georgiadis), Stylianos Georgiadis
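As a toy illustration of the empirical estimation idea, under our own simplifying assumptions (a two-state up/down alternating model with uniform, hence non-geometric, sojourn laws, rather than the chapter's general discrete-time semi-Markov system), the steady-state availability can be estimated as the fraction of time spent up and compared with the mean-sojourn ratio.

```python
# Hedged sketch: empirical steady-state availability of a toy up/down
# system with non-geometric sojourn times (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(42)

def empirical_availability(n_steps):
    """Fraction of the first n_steps spent in the 'up' state."""
    state, up_time, t = 1, 0, 0           # state 1 = up, 0 = down
    while t < n_steps:
        # non-geometric sojourns: uniform on {1,...,10} up, {1,2,3} down
        sojourn = int(rng.integers(1, 11) if state == 1 else rng.integers(1, 4))
        sojourn = min(sojourn, n_steps - t)
        if state == 1:
            up_time += sojourn
        t += sojourn
        state = 1 - state
    return up_time / n_steps

est = empirical_availability(200_000)
exact = 5.5 / (5.5 + 2.0)   # E[up sojourn] / (E[up] + E[down])
```

The agreement between the empirical fraction and the mean-sojourn ratio over a long horizon is a numerical echo of the strong consistency discussed above.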
We introduce in this paper a technique for the reduced order approximation of parametric symmetric elliptic partial differential equations. For any given dimension, we prove the existence of an optimal subspace of at most that dimension which realizes the best approximation, in the mean over the parameter in the quadratic norm associated to the elliptic operator, of the error between the exact solution and the Galerkin solution calculated on the subspace. This is analogous to the best approximation property of the Proper Orthogonal Decomposition (POD) subspaces, except that in our case the norm is parameter-dependent, so the optimal POD subspaces cannot be characterized by means of a spectral problem. We apply a deflation technique to build a series of approximating solutions on finite-dimensional optimal subspaces, directly in the online step. We prove that the partial sums converge to the continuous solution in the mean quadratic elliptic norm.
ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaiez), Mejdi Azaiez
This paper focuses on Generalized Impedance Boundary Conditions (GIBC) with second order derivatives in the context of linear elasticity and general curved interfaces. A condition of the Wentzell type modeling thin layer coatings on some elastic structure is obtained through an asymptotic analysis of order one of the transmission problem at the thin layer interfaces with respect to the thickness parameter. We prove the well-posedness of the approximate problem and the theoretical quadratic accuracy of the boundary conditions. Then we perform a shape sensitivity analysis of the GIBC model in order to study a shape optimization/optimal design problem. We prove the existence and characterize the first shape derivative of this model. A comparison with the asymptotic expansion of the first shape derivative associated to the original thin layer transmission problem shows that we can interchange the asymptotic and shape derivative analysis. Finally we apply these results to the compliance minimization problem. We compute the shape derivative of the compliance in this context and present some numerical simulations.
ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet
The main purpose of this paper is to investigate the strong approximation of the $p$-fold integrated empirical process, $p$ being a fixed positive integer. More precisely, we obtain the exact rate of the approximations by a sequence of weighted Brownian bridges and a weighted Kiefer process. Our arguments are based in part on results of Komlós, Major and Tusnády (1975). Applications include two-sample testing procedures together with change-point problems. We also consider the strong approximation of integrated empirical processes when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial sum process representation of integrated empirical processes.
ano.nymous@ccsd.cnrs.fr.invalid (Sergio Alvarez-Andrade), Sergio Alvarez-Andrade
The Finite Element Method is a widely used method to solve numerical problems coming, for instance, from physics or biology. To obtain the highest confidence in the correctness of numerical simulation programs implementing the Finite Element Method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. The Lax–Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states existence and uniqueness of the solution to the weak formulation of some boundary value problems. This article presents the full formal proof of the Lax–Milgram theorem in Coq. It requires many results from linear algebra, geometry, functional analysis, and Hilbert spaces.
ano.nymous@ccsd.cnrs.fr.invalid (Sylvie Boldo), Sylvie Boldo
Faults and geological barriers can drastically affect the flow patterns in porous media. Such fractures can be modeled as interfaces that interact with the surrounding matrix. We propose a new technique for the estimation of the location and hydrogeological properties of a small number of large fractures in a porous medium from given distributed pressure or flow data. At each iteration, the algorithm builds a short list of candidates by comparing fracture indicators. These indicators quantify at the first order the decrease of a data misfit function; they are cheap to compute. Then, the best candidate is picked up by minimization of the objective function for each candidate. Optimally driven by the fit to the data, the approach has the great advantage of not requiring remeshing, nor shape derivation. The stability of the algorithm is shown on a series of numerical examples representative of typical situations.
ano.nymous@ccsd.cnrs.fr.invalid (Hend Ben Ameur), Hend Ben Ameur
To obtain the highest confidence in the correctness of numerical simulation programs implementing the finite element method, one has to formalize the mathematical notions and results that allow one to establish the soundness of the method. The Lax-Milgram theorem may be seen as one of those theoretical cornerstones: under some completeness and coercivity assumptions, it states existence and uniqueness of the solution to the weak formulation of some boundary value problems. The purpose of this document is to provide the formal proof community with a very detailed pen-and-paper proof of the Lax-Milgram theorem.
ano.nymous@ccsd.cnrs.fr.invalid (François Clément), François Clément
A sensitivity analysis of a suspension model has been performed in order to identify the parameters that most influence the sprung mass displacement. To analyse this dynamical model, a new global and bounded dynamic method is investigated. This method, based on interval analysis, consists in determining lower and upper bounds enclosing the dynamic sensitivity indices. It requires only the knowledge of the parameter variation ranges, and not the joint probability density function of the parameters, which is hard to estimate. The advantage of the proposed approach is that it takes into account the recursive behavior of the system dynamics.
ano.nymous@ccsd.cnrs.fr.invalid (Sabra Hamza), Sabra Hamza
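The guaranteed-bounds idea can be sketched with minimal interval arithmetic. The recursion below is a hypothetical toy (a scalar system $x_{k+1} = a x_k + b$ with interval-valued parameters, not the suspension model or the sensitivity indices of the paper): only the parameter ranges are needed, and the computed interval encloses every trajectory compatible with them.

```python
# Hedged sketch: interval arithmetic propagated through a recursive system
# (toy scalar recursion, our assumption; only parameter ranges are known).
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

a = Interval(0.4, 0.6)        # only the ranges of a and b are assumed known
b = Interval(0.9, 1.1)
x = Interval(0.0, 0.0)
for _ in range(50):
    x = a * x + b             # the interval recursion tracks the recursive dynamics
# [x.lo, x.hi] encloses every fixed point b/(1-a): here it converges to [1.5, 2.75]
```

No probability density is ever assumed; the price is that the enclosure can be conservative, which is the usual trade-off of interval methods.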
A mathematical model for the forward problem of electroencephalographic (EEG) source localization in neonates is proposed. The model is able to take into account the presence and the ossification process of fontanels, which are characterized by a variable conductivity. A subtraction approach is used to deal with the singularity in the source term, and existence and uniqueness results are proved for the continuous problem. Discretization is performed with 3D finite elements of type P1, and error estimates are proved in the energy ($H^1$) norm. Numerical simulations for a three-layer spherical model as well as for a realistic neonatal head model have been obtained and corroborate the theoretical results. A mathematical tool related to the concept of Gâteaux derivatives is introduced, able to measure the sensitivity of the electric potential with respect to small variations in the fontanel conductivity. Numerical simulations attest that the presence of fontanels in neonates does have an impact on EEG measurements. The present work is an essential preamble to the numerical analysis of the corresponding EEG source reconstruction.
ano.nymous@ccsd.cnrs.fr.invalid (M Darbas), M Darbas
The aim of this contribution is to use a preconditioned Richardson iterative method as a regularization of the data completion problem. The problem is known to be exponentially ill-posed, which makes its numerical treatment a hard task. The approach we present relies on the Steklov-Poincaré variational framework introduced in [Inverse Problems, vol. 21, 2005]. The resulting algorithm turns out to be equivalent to the Kozlov-Maz’ya-Fomin method in [Comp. Math. Phys., vol. 31, 1991]. We conduct a comprehensive analysis of suitable stopping rules and provide optimal estimates under the General Source Condition on the exact solution. Some numerical examples are finally discussed to highlight the performance of the method.
ano.nymous@ccsd.cnrs.fr.invalid (Duc Thang Du), Duc Thang Du
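The regularizing effect of stopping an iterative method early can be reproduced on a toy ill-posed problem. The sketch below uses a plain (unpreconditioned) Landweber/Richardson iteration on a synthetic ill-conditioned operator, our own stand-in for the data completion setting: the error first decreases, then grows as noise is amplified, which is exactly why a stopping rule is needed.

```python
# Hedged sketch: semiconvergence of Richardson/Landweber iterations on a
# synthetic ill-posed least-squares problem (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
n = 50
# synthetic severely ill-posed operator: exponentially decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = U @ np.diag(s) @ V.T
x_true = V[:, :5].sum(axis=1)                    # smooth exact solution
b = A @ x_true + 1e-2 * rng.standard_normal(n)   # noisy data

tau = 0.9 / s[0] ** 2                  # Richardson step size < 2 / ||A||^2
x = np.zeros(n)
errors = []
for _ in range(20000):
    x = x + tau * (A.T @ (b - A @ x))  # Richardson/Landweber update
    errors.append(float(np.linalg.norm(x - x_true)))
best_k = int(np.argmin(errors))        # the (unknowable) oracle stopping index
```

In practice the oracle index is unavailable, hence the need for computable stopping rules such as the discrepancy principle analyzed in the contribution.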
The magnetohydrodynamics laws govern the motion of a conducting fluid, such as blood, in an externally applied static magnetic field $B_0$. When an artery is exposed to a magnetic field, the blood charged particles are deviated by the Lorentz force, thus inducing electrical currents and voltages along the vessel walls and in the neighboring tissues. Such a situation may occur in several biomedical applications: magnetic resonance imaging (MRI), magnetic drug transport and targeting, tissue engineering… In this paper, we consider the steady unidirectional blood flow in a straight circular rigid vessel with non-conducting walls, in the presence of an exterior static magnetic field. The exact solution of Gold (1962) (with the induced fields not neglected) is revisited. It is shown that the integration over a cross section of the vessel of the longitudinal projection of the Lorentz force is zero, and that this result is related to the existence of current return paths, whose contributions compensate each other over the section. It is also demonstrated that the classical definition of the shear stresses cannot apply in this situation of magnetohydrodynamic flow because, due to the existence of the Lorentz force, the axisymmetry is broken.
ano.nymous@ccsd.cnrs.fr.invalid (Agnès Drochon), Agnès Drochon
The author addresses the problem of multidimensional independent component analysis (MICA), the natural generalization of the ordinary independent component analysis (ICA) problem. First, to ease the use of higher-order cumulants, we present new formulae for the matrix computation of the cumulant matrices of a real random vector from its moment matrices. Besides the usual matrix operations, these formulae involve only the Kronecker product, the vec operator and commutation matrices. From these formulae, we can immediately examine the particular structures of the cumulant matrices and thus give results on the ranks of these matrices, which characterize the dependence between the random variables composing the random vector. The main practical interest of our matrix formulae certainly lies in an evaluation of the cumulants that is much easier and faster than with the usual method based on repeated use of the Leonov and Shiryaev formulae. In the second part of this thesis, we show that, under the usual assumptions of multidimensional independent component analysis, the contracted cumulant matrices of any statistical order are all block-diagonalizable in the same basis. We deduce from this algorithms that solve MICA by joint block-diagonalization, and we compare the results obtained at orders 3 to 6, among themselves and with other methods, on some synthetic signals. Simple examples are worked out to justify the need to combine different orders to guarantee the best separation. We also prove that the easiest case to handle is that of mixtures of sources of different dimensions.
In the last part of this thesis we propose a family of methods that exploit only statistics of order higher than two. Under some additional assumptions, these methods, after a fourth-order whitening of the observations, completely solve the noisy MICA problem by jointly block-diagonalizing a set of cumulant matrices built from statistics of order strictly greater than four. A comparison with MICA methods using second-order whitening for the separation of the fetal and maternal electrical activities (measured via three electrodes placed on the mother's abdomen) shows that this new family is better suited to this application: it yields a near-perfect separation of the two contributions.
ano.nymous@ccsd.cnrs.fr.invalid (Hanany Ould-Baba), Hanany Ould-Baba
We derive rates of contraction of posterior distributions on nonparametric models resulting from sieve priors. The aim of the study is to provide general conditions for obtaining posterior rates when the parameter space has a general structure, and rate adaptation when the parameter is, for example, in a Sobolev class. The conditions employed, although standard in the literature, are combined in a different way. The results are applied to density, regression, nonlinear autoregression and Gaussian white noise models. In the latter we also consider a loss function different from the usual l2 norm, namely the pointwise loss. In this case it is possible to prove that the adaptive Bayesian approach for the l2 loss is strongly suboptimal, and we provide a lower bound on the rate.
ano.nymous@ccsd.cnrs.fr.invalid (Julyan Arbel), Julyan Arbel
The main purpose of this paper is to investigate the strong approximation of the integrated empirical process. More precisely, we obtain the exact rate of the approximations by a sequence of weighted Brownian bridges and a weighted Kiefer process. Our arguments are based in part on the results of Komlós et al. (1975). Applications include two-sample testing procedures together with change-point problems. We also consider the strong approximation of the integrated empirical process when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial sum process representation of the integrated empirical process. Reference: Komlós, J., Major, P. and Tusnády, G. (1975). An approximation of partial sums of independent RV's and the sample DF. I. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 32, 111-131.
ano.nymous@ccsd.cnrs.fr.invalid (Sergio Alvarez-Andrade), Sergio Alvarez-Andrade
This paper deals with parameter and state estimation in a bounded-error context for uncertain dynamical aerospace models, when the input is optimized or not. In a bounded-error context, perturbations are assumed bounded but otherwise unknown. The parameters to be estimated are also considered bounded. The tools of the presented work are based on a guaranteed numerical set integration solver of ordinary differential equations combined with adapted set inversion computation. The main contribution of this work consists in developing procedures for parameter estimation whose performance is strongly dependent on the system input. A comparison with a classical non-optimized input is proposed.
ano.nymous@ccsd.cnrs.fr.invalid (Qiaochu Li), Qiaochu Li
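The bounded-error estimation idea above can be sketched with a minimal SIVIA-style bisection on a scalar parameter. The exponential model, the error bound and the tolerances are illustrative assumptions, not the aerospace models or the guaranteed ODE solver of the paper.

```python
import math

# Bounded-error estimation by set inversion (SIVIA-style bisection) on a
# scalar parameter p of the toy model y(t) = exp(-p t).
p_true = 0.7
times = [0.5, 1.0, 2.0, 4.0]
err = 0.02                                  # known measurement error bound
data = [(t, math.exp(-p_true * t)) for t in times]

def consistent(lo, hi):
    """Classify the parameter box [lo, hi] as 'in', 'out' or 'maybe'."""
    status = "in"
    for t, y in data:
        ylo, yhi = math.exp(-hi * t), math.exp(-lo * t)   # model is monotone in p
        if yhi < y - err or ylo > y + err:
            return "out"                    # box inconsistent with one datum
        if not (y - err <= ylo and yhi <= y + err):
            status = "maybe"                # box only partially consistent
    return status

accepted, boxes = [], [(0.0, 2.0)]
while boxes:
    lo, hi = boxes.pop()
    s = consistent(lo, hi)
    if s == "in":
        accepted.append((lo, hi))
    elif s == "maybe" and hi - lo > 1e-3:
        mid = 0.5 * (lo + hi)
        boxes += [(lo, mid), (mid, hi)]
```

The accepted boxes form a guaranteed inner approximation of the set of parameters consistent with all measurement bounds.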
A real time algorithm for cardiac and respiratory gating, which only requires an ECG sensor, is proposed here. Three ECG electrodes are placed in such a manner that the modulation of the recorded ECG by the respiratory signal would be maximal; hence, given only one signal we can achieve both cardiac and respiratory MRI gating. First, an off-line learning phase based on wavelet decomposition is run to compute an optimal QRS filter. Afterwards, on one hand the QRS filter is used to accomplish R peak detection, and on the other, a low pass filtering process allows the retrieval of the respiration cycle so that the image acquisition sequences would be triggered by the R peaks only during the expiration phase.
ano.nymous@ccsd.cnrs.fr.invalid (D Abi-Abdallah), D Abi-Abdallah
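A toy version of the single-sensor cardiac-respiratory gating idea can be sketched as follows. The idealized spike-train ECG, the moving-average filter (standing in for the paper's learned wavelet-domain QRS filter) and all rates are illustrative assumptions.

```python
import numpy as np

fs = 200                                    # Hz, illustrative sampling rate
t = np.arange(0, 10, 1 / fs)
r_idx = np.arange(100, len(t), 160)         # true R peaks, 75 bpm
ecg = np.zeros_like(t)
ecg[r_idx] = 1.0                            # idealized QRS spikes
resp = 0.3 * np.sin(2 * np.pi * 0.25 * t)   # 15 breaths/min modulation
sig = ecg + resp                            # single respiration-modulated lead

# Split the signal: the low-pass (moving-average) part recovers respiration,
# the remainder is a crude QRS channel thresholded for R peak detection.
w = fs // 2
resp_hat = np.convolve(sig, np.ones(w) / w, mode="same")
qrs = sig - resp_hat
peaks = np.where(qrs > 0.5)[0]

# Trigger only during expiration, taken here as the falling half of resp_hat.
falling = np.gradient(resp_hat) < 0
triggers = [i for i in peaks if falling[i]]
```

On this idealized signal every R peak is recovered, and only the subset falling in the expiration phase generates a trigger.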
Blood flow in high static magnetic fields induces elevated voltages that disrupt the ECG signal recorded simultaneously during MRI scans for synchronization purposes. This is known as the magnetohydrodynamic (MHD) effect; it increases the amplitude of the T wave, thus hindering correct R peak detection. In this paper, we present an algorithm for extracting an efficient reference signal from an ECG contaminated by the Nuclear Magnetic Resonance (NMR) environment, which performs a good separation of the R wave and the MHD artifacts. The proposed signal processing method is based on sub-band decomposition using the wavelet transform, and has been tested on human and small-rodent ECG signals acquired during MRI scans at various magnetic field intensities. The results showed an almost flawless trigger generation in fields up to 4.7 Tesla during the three tested imaging sequences (GE, FSE and IRSE).
ano.nymous@ccsd.cnrs.fr.invalid (D Abi-Abdallah), D Abi-Abdallah
Cardiac Magnetic Resonance Imaging (MRI) requires synchronization to overcome motion related artifacts caused by the heart’s contractions and the chest wall movements during respiration. Achieving good image quality necessitates combining cardiac and respiratory gating to produce, in real time, a trigger signal that sets off the consecutive image acquisitions. This guarantees that the data collection always starts at the same point of the cardiac cycle during the exhalation phase. In this paper, we present a real time algorithm for extracting a cardiac-respiratory trigger signal using only one, adequately placed, ECG sensor. First, an off-line calculation phase, based on wavelet decomposition, is run to compute an optimal QRS filter. This filter is used, afterwards, to accomplish R peak detection, while a low pass filtering process allows the retrieval of the respiration cycle. The algorithm’s synchronization capabilities were assessed during mice cardiac MRI sessions employing three different imaging sequences, and three specific wavelet functions. The prominent image enhancement gave a good proof of correct triggering. QRS detection was almost flawless for all signals. As for the respiration cycle retrieval it was evaluated on contaminated simulated signals, which were artificially modulated to imitate respiration. The results were quite satisfactory.
ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi-Abdallah), Dima Abi-Abdallah
Blood flow in a steady magnetic field has been of great interest over the past years. Many researchers have examined the effects of magnetic fields on velocity profiles and arterial pressure, and major studies focused on steady or sinusoidal flows. In this paper we present a solution for pulsed magnetohydrodynamic blood flow with a somewhat realistic physiological pressure wave obtained using a windkessel lumped model. A pressure gradient is derived along a rigid vessel placed at the output of a compliant module which receives the ventricle outflow. Then, velocity profile and flow rate expressions are derived in the rigid vessel in the presence of a steady transverse magnetic field. As expected, the results showed flow retardation and flattening. The adaptability of our solution approach allowed a comparison with previously addressed flow cases, and the calculations showed good agreement with those well-established solutions.
ano.nymous@ccsd.cnrs.fr.invalid (Dima Abi Abdallah), Dima Abi Abdallah
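The windkessel stage of the model above can be sketched as a two-element lumped RC module. The parameter values and the constant inflow below are illustrative; the paper feeds the module with a pulsatile ventricular outflow and then derives the MHD velocity profiles in the downstream rigid vessel.

```python
import numpy as np

# Two-element windkessel lumped model feeding the rigid vessel:
#   C dP/dt = Q_in(t) - P / R
# with peripheral resistance R and arterial compliance C (illustrative values).
R, C = 1.0, 1.5
dt, T = 1e-3, 30.0
n = int(T / dt)
P = np.zeros(n)                 # pressure, starting from zero
Q_in = 1.0                      # constant ventricular outflow for this check
for k in range(n - 1):
    # explicit Euler step of the windkessel ODE
    P[k + 1] = P[k] + dt * (Q_in - P[k] / R) / C
```

With a constant inflow the pressure rises monotonically toward the steady state P = R·Q_in with time constant RC, which is a quick sanity check of the lumped model.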
In this paper we describe a high order spectral algorithm for solving the time-harmonic Navier equations in the exterior of a bounded obstacle in three space dimensions, with Dirichlet or Neumann boundary conditions. Our approach is based on combined-field boundary integral equation (CFIE) reformulations of the Navier equations. We extend the spectral method developed by Ganesh and Hawkins - for solving second kind boundary integral equations in electromagnetism - to linear elasticity, for solving CFIEs that commonly involve integral operators with a strongly singular or hypersingular kernel. The numerical scheme applies to boundaries which are globally parameterised by spherical coordinates. The algorithm has the interesting feature that it leads to linear systems with substantially fewer unknowns than other existing fast methods. The computational performance of the proposed spectral algorithm is demonstrated on numerical examples for a variety of three-dimensional convex and non-convex smooth obstacles.
ano.nymous@ccsd.cnrs.fr.invalid (Frédérique Le Louër), Frédérique Le Louër
In this paper, we address the issue of performing sensitivity analysis of complex models with uncertain static and dynamic inputs. The dynamic inputs are viewed as random processes which can be represented by a linear combination of deterministic functions of time whose coefficients are uncorrelated random variables. To achieve this, the Karhunen-Loève decomposition of the dynamic inputs is performed. For sensitivity analysis purposes, the influence of the dynamic inputs on the model response is then given by that of the uncorrelated random coefficients of the Karhunen-Loève decomposition, which is the originality here. The approach is applied to a building energy model, in order to assess the impact of the uncertainties of the material properties and the weather data on the energy performance of a real low-energy-consumption house.
ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett-Collin), Floriane Anstett-Collin
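The Karhunen-Loève step above, extracting deterministic time modes with uncorrelated random coefficients from sampled trajectories of a dynamic input, can be sketched empirically via an SVD. The toy process below is an illustrative stand-in for the weather data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical Karhunen-Loeve decomposition of a dynamic input:
#   X(t) ~ mean(t) + sum_k phi_k(t) * xi_k,   with uncorrelated xi_k.
n_traj, n_t = 2000, 100
t = np.linspace(0.0, 1.0, n_t)
# toy process: two random-amplitude sinusoidal modes plus small noise
X = (rng.standard_normal((n_traj, 1)) * np.sin(2 * np.pi * t)
     + rng.standard_normal((n_traj, 1)) * np.cos(2 * np.pi * t)
     + 0.05 * rng.standard_normal((n_traj, n_t)))

Xc = X - X.mean(axis=0)                 # center the trajectories
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
phi = Vt                                # deterministic time modes phi_k(t)
xi = U * s                              # random KL coefficients, one column per mode
cov = (xi.T @ xi) / n_traj              # empirical covariance of the coefficients
```

The coefficient covariance is diagonal (the xi_k are uncorrelated by construction of the SVD), and the two planted modes dominate the spectrum, so a sensitivity analysis can be carried on a handful of scalar coefficients instead of the whole process.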
Karhunen-Loève decompositions (KLD), or proper orthogonal decompositions (POD), of bivariate functions are revisited in this work. We first investigate the truncation error for regular functions and try to improve and sharpen bounds found in the literature. It turns out, however, that (KL)-series expansions are in fact more sensitive to the ability of the fields under approximation to be well represented by a small sum of products of functions of separated variables. We consider this very issue for some interesting fields that are solutions of partial differential equations, such as the transient heat problem and Poisson's equation. The main tool for stating approximation bounds is linear algebra. We show how the singular value decomposition underlying the (KL)-expansion is connected to the spectrum of some Gram matrices. Deriving estimates on the truncation error is thus strongly tied to the spectral properties of these Gram matrices, which are structured matrices with low displacement ranks.
ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez
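The link between the (KL)-truncation error and the singular values can be checked on a discrete bivariate field. The sampled kernel below is an illustrative example; the identity tested is the Eckart-Young formula in the Frobenius norm.

```python
import numpy as np

# Discrete analogue of KL/POD truncation of a bivariate field: sample
# f(x, y) on a grid, truncate its SVD, and check that
#   ||F - F_r||_F = sqrt(sum of squared discarded singular values).
n = 80
x = np.linspace(0.0, 1.0, n)
F = 1.0 / (1.0 + x[:, None] + x[None, :])   # smooth field, separable-friendly

U, s, Vt = np.linalg.svd(F)
r = 5                                        # truncation rank
F_r = (U[:, :r] * s[:r]) @ Vt[:r]            # rank-r KL/POD approximation
err = np.linalg.norm(F - F_r)                # Frobenius truncation error
tail = np.sqrt(np.sum(s[r:] ** 2))           # discarded spectrum
```

For such a smooth kernel the singular values decay very fast, so a handful of separated-variable products already captures the field to high accuracy.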
Observability Gramians of diffusion equations have been recently connected to infinite Pick and Cauchy matrices. In fact, inverse or observability inequalities can be obtained after estimating the extreme eigenvalues of these structured matrices, with respect to the diffusion semi-group matrix. The purpose is hence to conduct a spectral study of a subclass of symmetric Cauchy matrices and present an algebraic way to show the desired observability results. We revisit observability inequalities for three different observation problems of the diffusion equation and show how they can be (re)stated through simple proofs.
ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem
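A minimal numerical illustration of the symmetric Cauchy matrices in question: with positive nodes, C_ij = 1/(x_i + x_j) is the Gram matrix of the decaying exponentials t ↦ exp(-x_i t) in L2(0, ∞), hence positive definite, with fast-decaying eigenvalues. The nodes below are an illustrative choice.

```python
import numpy as np

# Symmetric Cauchy matrix C_ij = 1/(x_i + x_j) with positive nodes x_i,
# e.g. Dirichlet-Laplacian-like eigenvalues k^2 (illustrative).
x = np.arange(1, 6, dtype=float) ** 2
C = 1.0 / (x[:, None] + x[None, :])

# Positive definiteness follows from C_ij = int_0^inf exp(-(x_i + x_j) t) dt;
# the eigenvalue decay is what drives the observability estimates.
ev = np.linalg.eigvalsh(C)       # ascending eigenvalues
```

Even for five nodes the spectrum already spans several orders of magnitude, which is the structural fact exploited in the observability inequalities.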
The inverse problem under investigation consists of the boundary data completion in a deoxygenation-reaeration model in stream waters. The one-dimensional transport model we deal with is based on the one introduced by Streeter and Phelps, augmented by Taylor dispersion terms. The missing boundary condition is the load and/or the flux of the biochemical oxygen demand indicator at the outfall point. The counterpart is the availability of two boundary conditions on the dissolved oxygen tracer at the same point. The major consequence of these non-standard boundary conditions is that the dispersive transport equations on both oxygen tracers are strongly coupled, and the resulting system becomes ill-posed. The main purpose is a finite element space discretization of the variational problem put under a non-symmetric mixed form. Combining analytical calculations, numerical computations and theoretical justifications, we try to elucidate the characteristics related to the ill-posedness of this data completion dynamical problem and to understand its mathematical structure.
ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem
The paper addresses the separation of multidimensional sources, with possibly different dimensions, by means of higher-order cumulant matrices. First, it is rigorously proved, in a general setting, that contracted cumulant matrices of any order are all block-diagonalizable in the same basis. Second, a family of joint block-diagonalization algorithms is proposed that separate multidimensional sources by combining contracted cumulant matrices of arbitrary orders. Third, a specific solution is given to determine the source dimensions when they are unknown but all different. The performances of the proposed algorithms are compared between them and with algorithms of the literature based on orders 3 and 6.
ano.nymous@ccsd.cnrs.fr.invalid (Hanany Ould-Baba), Hanany Ould-Baba
Concise formulae are given for the cumulant matrices of a random vector up to order 6. In addition to usual matrix operations, they involve only the Kronecker product, the vec operator, and the commutation matrix. Orders 5 and 6 are provided here for the first time; the same method as provided in the paper can be applied to compute higher orders. An immediate consequence of these formulae is to return 1) upper bounds on the rank of the cumulant matrices and 2) the expression of the sixth-order moment matrix of a Gaussian vector. Due to their conciseness, the proposed formulae also have a computational advantage as compared to the repeated use of Leonov and Shiryaev formula.
ano.nymous@ccsd.cnrs.fr.invalid (Hanany Ould-Baba), Hanany Ould-Baba
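A fourth-order analogue of these matrix formulae can be verified directly (the paper itself goes up to order 6): for a zero-mean Gaussian vector with covariance R, the moment matrix built entrywise from Isserlis' theorem equals vec(R)vec(R)^T + R⊗R + K(R⊗R), with K the commutation matrix, so the fourth-order cumulant matrix vanishes. The covariance below is an arbitrary illustrative choice.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Moment matrix of a zero-mean Gaussian vector, indexed so that
#   M4[i*d + j, k*d + l] = E[x_i x_j x_k x_l]
#                        = R_ij R_kl + R_ik R_jl + R_il R_jk   (Isserlis).
d = 3
A = rng.standard_normal((d, d))
R = A @ A.T + d * np.eye(d)                 # arbitrary SPD covariance

M4 = np.empty((d * d, d * d))
for i, j, k, l in product(range(d), repeat=4):
    M4[i * d + j, k * d + l] = (R[i, j] * R[k, l]
                                + R[i, k] * R[j, l]
                                + R[i, l] * R[j, k])

# Commutation matrix K: permutation swapping the pair index (i, j) -> (j, i).
K = np.zeros((d * d, d * d))
for i, j in product(range(d), repeat=2):
    K[i * d + j, j * d + i] = 1.0

v = R.reshape(-1, 1)                        # vec(R); R is symmetric, so the
                                            # stacking order does not matter
C4 = M4 - v @ v.T - np.kron(R, R) - K @ np.kron(R, R)
```

Only the Kronecker product, the vec operator and the commutation matrix appear, exactly as in the paper's concise formulae, and the Gaussian fourth-order cumulant matrix C4 is identically zero.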
We consider an equation that models the diffusion of temperature in a graphite foam containing salt capsules. The transmission conditions on the temperature between the graphite and the salt must be handled correctly. We carry out the analysis of this model and prove that it is well-posed. We then propose a finite element discretization and perform the a priori analysis of the discrete problem. Some numerical experiments confirm the interest of this approach.
ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem
We construct and analyze a family of well-conditioned boundary integral equations for the Krylov iterative solution of three-dimensional elastic scattering problems by a bounded rigid obstacle. We develop a new potential theory using a rewriting of the Somigliana integral representation formula. From these results, we generalize to linear elasticity the well-known Brakhage-Werner and Combined Field Integral Equation formulations. We use a suitable approximation of the Dirichlet-to-Neumann (DtN) map as a regularizing operator in the proposed boundary integral equations. The construction of the approximate DtN map is inspired by the On-Surface Radiation Conditions method. We prove that the associated integral equations are uniquely solvable and possess very interesting spectral properties. Promising analytical and numerical investigations, in terms of spherical harmonics, with the elastic sphere are provided.
ano.nymous@ccsd.cnrs.fr.invalid (Marion Darbas), Marion Darbas
Uncertainty Analysis and Sensitivity Analysis of complex models: Coping with dynamic and static inputs
ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett-Collin), Floriane Anstett-Collin
In this article we estimate the reliability of a binary system and obtain its confidence interval through the asymptotic normal approximation. This method can be applied to large and complex systems, reducing the width of the confidence interval.
ano.nymous@ccsd.cnrs.fr.invalid (Yunhui Hou), Yunhui Hou
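A minimal sketch of the normal-approximation interval for a series-parallel system, using the delta method on per-component binomial data. The system structure and the test counts below are illustrative assumptions, not taken from the article.

```python
import math

# Per-component test data: (successes, trials), illustrative values.
tests = [(980, 1000), (990, 1000), (940, 1000), (955, 1000)]
r = [k / n for k, n in tests]

def system_reliability(r):
    # components 1 and 2 in series with a parallel pair (3, 4)
    return r[0] * r[1] * (1.0 - (1.0 - r[2]) * (1.0 - r[3]))

R_hat = system_reliability(r)

# Delta method: Var(R_hat) ~ sum_i (dR/dr_i)^2 * r_i (1 - r_i) / n_i,
# with the partial derivatives estimated by finite differences.
var, h = 0.0, 1e-6
for i, (k, n) in enumerate(tests):
    rp = r.copy()
    rp[i] += h
    dR = (system_reliability(rp) - R_hat) / h
    var += dR ** 2 * r[i] * (1.0 - r[i]) / n

z = 1.96                                    # 95% normal quantile
lo = R_hat - z * math.sqrt(var)
hi = R_hat + z * math.sqrt(var)
```

The delta method propagates the component-level binomial variances through the structure function, so the interval narrows as per-component sample sizes grow.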
We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most of the physical models used by engineers derive from various additions and corrections to enhance the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor's dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. The particularity is that both Neumann and Dirichlet boundary conditions are available on the DO tracer while the BOD density is free of any condition. In fact, for real-life concerns, measurements on the dissolved oxygen are easy to obtain and to save. On the contrary, collecting data on the biochemical oxygen demand is a sensitive task and turns out to be a long-time process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem worth studying in its own right, but it can also be a mandatory step in other applications, such as identifying the location of pollution sources. The non-standard boundary conditions generate two difficulties on mathematical and computational grounds. They set up a severe coupling between both equations, and they cause ill-posedness of the data reconstruction problem. Existence and stability fail. Identifiability is therefore the only positive result one can seek; it is the central purpose of the paper. We end with some computational experiments to assess the capability of the mixed finite element method in recovering the missing data (on the biochemical oxygen demand).
ano.nymous@ccsd.cnrs.fr.invalid (Mejdi Azaïez), Mejdi Azaïez
We are interested in an inverse problem of recovering the position of a pollutant or contaminant source in stream water. Advection, dispersive transport and reaction of the solute are commonly modeled by a linear or nonlinear parabolic equation. In former works, it was established that a point-wise source is fully identifiable from measurements recorded by a couple of sensors placed one upstream and the other downstream of the pollution source. The observability question we try to solve here is related to the redundancy of sensors when additional information is available on the point-wise source. It may occur, in hydrological engineering, that the intensity of the pollutant is known in advance. In this case, we pursue an identifiability result for a moving source location using a single observation. The chief mathematical tools to prove identifiability are the unique continuation theorem together with an appropriate maximum principle for the parabolic equation under investigation.
ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem
We consider a mixed reaction diffusion system describing the organic pollution in stream-waters. It may be viewed as the static version of Streeter-Phelps equations relating the Biochemical Oxygen Demand and Dissolved Oxygen to which dispersion terms are added. In this work, we propose a mixed variational formulation and prove its well-posedness. Next, we develop two finite element discretizations of this problem and establish optimal a priori error estimates for the second discrete problem.
ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem
We aim to reconstruct an inclusion ω immersed in a perfect fluid flowing in a larger bounded domain Ω via boundary measurements on ∂Ω. The obstacle ω is assumed to have a thin layer and is then modeled using generalized boundary conditions (precisely Ventcel boundary conditions). We first obtain an identifiability result (i.e. the uniqueness of the solution of the inverse problem) for annular configurations through explicit computations. Then, this inverse problem of reconstructing ω is studied thanks to the tools of shape optimization by minimizing a least squares type cost functional. We prove the existence of the shape derivatives with respect to the domain ω and characterize the gradient of this cost functional in order to make a numerical resolution. We also characterize the shape Hessian and prove that this inverse obstacle problem is unstable in the following sense: the functional degenerates for highly oscillating perturbations. Finally, we present some numerical simulations in order to confirm and extend our theoretical results.
ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet
We study the stability of some critical (or equilibrium) shapes in the minimization problem of the energy dissipated by a fluid (i.e. the drag minimization problem) governed by the Stokes equations. We first compute the shape derivative up to the second order, then provide a sufficient condition for the shape Hessian of the energy functional to be coercive at a critical shape. Under this condition, the existence of such a local strict minimum is then proved using a precise upper bound for the variations of the second order shape derivative of the functional with respect to the coercivity and differentiability norms. Finally, for smooth domains, a lower bound of the variations of the drag is obtained in terms of the measure of the symmetric difference of domains.
ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet
The aim of this article is to explore the possibility of using a family of fixed finite element shape functions to solve a Dirichlet boundary value problem with an alternative variational formulation. The domain is embedded in a bounding box and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or moving interfaces. This is not a new idea, but we analyze here a special approach. The main difficulty of the approach is that the associated quadratic form is not coercive and an inf-sup condition has to be checked. In dimension one, we prove that this formulation is well posed and we provide error estimates. Nevertheless, our proof, relying on explicit computations, is limited to that case, and we give numerical evidence in dimension two that the formulation does not provide a reliable method. We first add a regularization through a Nitsche term and observe that some instabilities still remain. We then introduce and justify a geometrical regularization. A reliable method is obtained using both regularizations.
ano.nymous@ccsd.cnrs.fr.invalid (Gaël Dupire), Gaël Dupire
The aim of this article is to explore the possibility of using a family of fixed finite element shape functions that does not match the domain to solve a boundary value problem with a Dirichlet boundary condition. The domain is embedded in a bounding box and the finite element approximation is associated with a regular structured mesh of the box. The shape of the domain is independent of the discretization mesh. Under these conditions, a meshing tool is never required. This may be especially useful in the case of evolving domains, for example in shape optimization or moving interfaces. The Nitsche method has been intensively applied. However, it is weighted with the mesh size h, and is therefore a purely discrete point of view with no interpretation in terms of a continuous variational approach associated with a boundary value problem. In this paper, we introduce an alternative to the Nitsche method which is associated with a continuous bilinear form. This extension has strong restrictions: it needs more regularity on the data than the usual method. We prove the well-posedness of our formulation and error estimates. We provide numerical comparisons with the Nitsche method.
ano.nymous@ccsd.cnrs.fr.invalid (Jean-Paul Boufflet), Jean-Paul Boufflet
[...]
ano.nymous@ccsd.cnrs.fr.invalid (Nathalie Verdière), Nathalie Verdière
In recent years, several epidemics have been reported, in particular the chikungunya epidemic on Réunion Island. For predicting its possible evolution, new models describing the transmission of chikungunya to the human population have been proposed and studied in the literature. In such models, some parameters are not directly accessible from experiments, and iterative algorithms can be used to estimate them. However, before searching for their values, it is essential to verify the identifiability of the model parameters, to assess whether the set of unknown parameters can be uniquely determined from the data. Identifiability is therefore particularly important in modeling. Indeed, if the model is not identifiable, numerical procedures can fail; in that case, some supplementary data have to be added or the set of admissible data has to be reduced. Thus, this paper studies the identifiability of the models proposed by (Moulay, Aziz-Alaoui & Cadivel 2011).
ano.nymous@ccsd.cnrs.fr.invalid (Djamila Moulay), Djamila Moulay
This paper considers two different methods in the analysis of nonlinear controlled dynamical system identifiability. The corresponding identifiability definitions are not equivalent. Moreover one is based on the construction of an input-output ideal and the other on the similarity transformation theorem. Our aim is to develop algorithms which give identifiability results from both approaches. Differential algebra theory allows realization of such a project. In order to state these algorithms, new results of differential algebra must be proved. Then the implementation of these algorithms is done in a symbolic computation language.
ano.nymous@ccsd.cnrs.fr.invalid (Lilianne Denis-Vidal), Lilianne Denis-Vidal
In this paper, we investigate the existence and characterizations of the Fréchet derivative of solutions to time-harmonic elastic scattering problems with respect to the boundary of the obstacle. Our analysis is based on a technique - the factorization of the difference of the far-field pattern for two different scatterers - introduced by Kress and Päivärinta to establish Fréchet differentiability in acoustic scattering. For the Dirichlet boundary condition an alternative proof of a differentiability result due to Charalambopoulos is provided and new results are proven for the Neumann and impedance exterior boundary value problems.
ano.nymous@ccsd.cnrs.fr.invalid (Frédérique Le Louër), Frédérique Le Louër
The aim of our work is to reconstruct an inclusion immersed in a fluid flowing in a larger bounded domain via a boundary measurement. Here the fluid motion is assumed to be governed by the Stokes equations. We study the inverse problem thanks to the tools of shape optimization by minimizing a Kohn-Vogelius type cost functional. We first characterize the gradient of this cost functional in order to make a numerical resolution. Then, in order to study the stability of this problem, we give the expression of the shape Hessian. We show the compactness of the Riesz operator corresponding to this shape Hessian at a critical point, which explains why the inverse problem is ill-posed. Therefore we need some regularization methods to solve this problem numerically. We illustrate these general results with explicit calculations of the shape Hessian in some particular geometries. In particular, we solve the Stokes equations explicitly in a concentric annulus. Finally, we present some numerical simulations using a parametric method.
ano.nymous@ccsd.cnrs.fr.invalid (Fabien Caubet), Fabien Caubet
In this paper we study the shape differentiability properties of a class of boundary integral operators and of potentials with weakly singular pseudo-homogeneous kernels acting between classical Sobolev spaces, with respect to smooth deformations of the boundary. We prove that the boundary integral operators are infinitely differentiable without loss of regularity. The potential operators are infinitely shape differentiable away from the boundary, whereas their derivatives lose regularity near the boundary. We study the shape differentiability of surface differential operators. The shape differentiability properties of the usual strongly singular or hypersingular boundary integral operators of interest in acoustic, elastodynamic or electromagnetic potential theory can then be established by expressing them in terms of integral operators with weakly singular kernels and of surface differential operators.
ano.nymous@ccsd.cnrs.fr.invalid (Martin Costabel), Martin Costabel
Hidden semi-Markov models (HSMMs) were introduced to overcome the constraint of a geometric sojourn time distribution for the different hidden states in classical hidden Markov models. Several variations of HSMMs have been proposed that model the sojourn times by a parametric or a nonparametric family of distributions. In this article, we concentrate on the nonparametric case, where the duration distributions are attached to transitions and not to states, as in most of the published papers on HSMMs. It is therefore worth noticing that here we treat the underlying hidden semi-Markov chain in its general probabilistic structure. In that case, Barbu and Limnios (2008) proposed an Expectation-Maximization (EM) algorithm to estimate the semi-Markov kernel and the emission probabilities that characterize the dynamics of the model. In this paper, we consider an improved version of Barbu and Limnios' EM algorithm which is faster than the original one. Moreover, we propose a stochastic version of the EM algorithm that achieves comparable estimates in less execution time. Some numerical examples are provided to illustrate the efficient performance of the proposed algorithms.
ano.nymous@ccsd.cnrs.fr.invalid (Sonia Malefaki), Sonia Malefaki
The interface problem describing the scattering of time-harmonic electromagnetic waves by a dielectric body is often formulated as a pair of coupled boundary integral equations for the electric and magnetic current densities on the interface Γ. In this paper, following an idea developed by Kleinman and Martin for acoustic scattering problems, we consider methods for solving the dielectric scattering problem using a single integral equation over Γ for a single unknown density. It is known that such boundary integral formulations of the Maxwell equations are not uniquely solvable when the exterior wave number is an eigenvalue of an associated interior Maxwell boundary value problem. We obtain four different families of integral equations for which we can show that, by choosing some parameters in an appropriate way, they become uniquely solvable for all real frequencies. We analyze the well-posedness of the integral equations in the space of finite energy on smooth and non-smooth boundaries.
ano.nymous@ccsd.cnrs.fr.invalid (Martin Costabel), Martin Costabel
Nowadays, one of the greatest problems the Earth faces is pollution, which has led the European Union to pass stricter laws and tighter emission constraints. To satisfy these constraints, automotive manufacturers are obliged to design increasingly complex systems. The use of models to predict a system's behavior, whether to guide technical choices or to understand its operation, has become very important over the last decade. This paper presents a two-stage approach, based on an ordinary Kriging method, for the prediction of NOx (nitrogen oxide) emissions. In the first stage, the data are reduced by selecting signals through correlation studies and a fast Fourier transform. In the second stage, the Kriging method is used to estimate NOx emissions under given conditions. Numerical results are presented and compared to highlight the effectiveness of the proposed methods.
ano.nymous@ccsd.cnrs.fr.invalid (El Hassane Brahmi), El Hassane Brahmi
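The ordinary Kriging predictor used in the second stage above can be sketched in its generic textbook form: an unknown constant mean handled by a Lagrange multiplier, with weights constrained to sum to one. This is a minimal illustration, not the paper's implementation; the RBF covariance, function names and parameter values are all assumptions.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def ordinary_kriging(x_train, y_train, x_new, length=1.0, nugget=1e-8):
    """Ordinary-kriging prediction with an unknown constant mean.

    Solves the augmented system [[K, 1], [1^T, 0]] [w; mu] = [k*, 1],
    i.e. the Lagrange-multiplier form enforcing sum(w) = 1.
    """
    n = len(x_train)
    K = rbf(x_train, x_train, length) + nugget * np.eye(n)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    k_star = rbf(x_train, x_new, length)                 # shape (n, m)
    rhs = np.vstack([k_star, np.ones((1, len(x_new)))])
    sol = np.linalg.solve(A, rhs)
    weights = sol[:n]                                    # kriging weights
    return weights.T @ y_train

# Interpolate a smooth signal from a few samples
x = np.linspace(0.0, 5.0, 8)
y = np.sin(x)
pred = ordinary_kriging(x, y, np.array([2.5]))
```

At a training point the predictor reproduces the data (up to the nugget), and between points it interpolates with the covariance model.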
We consider a model for fluid flow in a porous medium with a fracture. In this model, the fracture is represented as an interface between subdomains, where specific equations have to be solved. In this article we analyse the discrete problem, assuming that the fracture mesh and the subdomain meshes are completely independent, but that the geometry of the fracture is respected. We show that despite this non-conformity, first order convergence is preserved with the lowest order Raviart-Thomas(-Nedelec) mixed finite elements. Numerical simulations confirm this result.
ano.nymous@ccsd.cnrs.fr.invalid (Najla Frih), Najla Frih
In this paper, we study the uniqueness of solutions for diagonal hyperbolic systems in one space dimension. We present two uniqueness results. The first one is a global existence and uniqueness result of a continuous solution for strictly hyperbolic systems. The second one is a global existence and uniqueness result of a Lipschitz solution for hyperbolic systems not necessarily strictly hyperbolic. An application of these two results is shown in the case of the one-dimensional isentropic gas dynamics.
ano.nymous@ccsd.cnrs.fr.invalid (Ahmad El Hajj), Ahmad El Hajj
This work is devoted to the numerical simulation of an incompressible fluid through a porous interface, modeled as a macroscopic resistive interface term in the Stokes equations. We improve the results reported in [M2AN, 42(6):961-990, 2008], by showing that the standard Pressure Stabilized Petrov-Galerkin (PSPG) finite element method is stable, and optimally convergent, without the need for controlling the pressure jump across the interface.
ano.nymous@ccsd.cnrs.fr.invalid (Alfonso Caiazzo), Alfonso Caiazzo
In this work I present a unified study, based on maximum likelihood estimation, of Markov, semi-Markov and hidden semi-Markov models. It comprises a theoretical study of the asymptotic properties of the MLE for these models, together with an algorithmic study. First, we construct the maximum likelihood estimator (MLE) of the stationary law and of the asymptotic variance of the central limit theorem (CLT) for additive functionals of ergodic Markov chains, and we prove its strong consistency and asymptotic normality. Next, we consider a nonparametric semi-Markov model. We present the exact MLE of the semi-Markov kernel governing the evolution of the semi-Markov chain (SMC) and prove the strong consistency, as well as the asymptotic normality, of every finite subvector of this estimator, obtaining explicit forms for the asymptotic covariance matrices. This was applied to a long observation of a single trajectory of an SMC, as well as to a sequence of i.i.d. trajectories of an SMC censored at a fixed time. We introduce a general hidden semi-Markov model (HSMM) with backward-recurrence-time dependence. We give asymptotic properties of the MLE corresponding to this model. We also derive explicit expressions for the asymptotic covariance matrices appearing in the CLT for the MLE of the main characteristics of SMCs. Finally, we propose an improved version of the EM (Expectation-Maximization) algorithm and a stochastic version of it (SAEM) in order to find the MLE for nonparametric HSMMs. Numerical examples are presented for both algorithms.
ano.nymous@ccsd.cnrs.fr.invalid (Samis Trevezas), Samis Trevezas
This article concerns variance estimation in the central limit theorem for finite recurrent Markov chains. The associated variance is expressed in terms of the transition matrix of the Markov chain, and we prove the equivalence of the different matrix forms representing it. The maximum likelihood estimator of this variance is constructed, and we prove that it is strongly consistent and asymptotically normal. The main part of our analysis consists of deriving closed matrix forms for this variance. Additionally, we prove the asymptotic equivalence between the empirical estimator and the MLE of the stationary distribution.
ano.nymous@ccsd.cnrs.fr.invalid (Samis Trevezas), Samis Trevezas
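One standard matrix form of this CLT variance uses the fundamental matrix of the chain; the sketch below is the classical textbook expression, not necessarily one of the forms derived in the article, and is checked against the known closed form for a two-state chain.

```python
import numpy as np

def clt_variance(P, f):
    """Asymptotic variance in the CLT for the additive functional
    (1/sqrt(n)) * sum_k (f(X_k) - pi f) of an ergodic finite Markov chain.

    Classical fundamental-matrix form, with Z = (I - P + 1 pi^T)^{-1}:
        sigma^2 = pi^T ( f_bar * (2 Z - I) f_bar ),  f_bar = f - (pi f) 1.
    """
    n = P.shape[0]
    # Stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    f_bar = f - pi @ f
    return pi @ (f_bar * ((2 * Z - np.eye(n)) @ f_bar))

# Two-state chain: the variance of the occupation time of the second
# state has the known closed form a*b*(2-a-b)/(a+b)^3.
a, b = 0.3, 0.5
P = np.array([[1 - a, a], [b, 1 - b]])
sigma2 = clt_variance(P, np.array([0.0, 1.0]))
```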
The contact between two membranes can be described by a system of variational inequalities, where the unknowns are the displacements of the membranes and the action of a membrane on the other one. A discretization of this system is proposed in Part 1 of this work, where the displacements are approximated by standard finite elements and the action by a local postprocessing which admits an equivalent mixed reformulation. Here, we perform the a posteriori analysis of this discretization and prove optimal error estimates. Next, we present numerical experiments that confirm the efficiency of the error indicators.
ano.nymous@ccsd.cnrs.fr.invalid (Faker Ben Belgacem), Faker Ben Belgacem
The level set method has become widely used in shape optimization, where it allows a popular implementation of the steepest descent method. Once coupled with a weak-material approximation, only a single mesh is used, leading to very efficient and cheap numerical schemes in the optimization of structures. However, the approach has limitations and cannot be applied in every situation. This work explores one such limitation. We estimate the systematic error committed by using the weak-material approximation and, on a model case, show through a second-order analysis of the objective function that it amplifies instabilities.
ano.nymous@ccsd.cnrs.fr.invalid (Marc Dambrine), Marc Dambrine
A new theorem is provided to test the identifiability of discrete-time systems with polynomial nonlinearities. This extends to discrete-time systems the local state isomorphism approach for continuous-time systems. Two examples are provided to illustrate the approach.
ano.nymous@ccsd.cnrs.fr.invalid (Floriane Anstett), Floriane Anstett
[...]
ano.nymous@ccsd.cnrs.fr.invalid (Chérif Amrouche), Chérif Amrouche
We describe the characteristic degradation level of a structure by means of a stochastic process called the degradation process. The dynamics of this process are modeled by a differential system in a Markovian environment. We study the reliability of the system, failure of the structure occurring when the degradation process exceeds a fixed threshold. We obtain the theoretical reliability using Markov renewal theory. We then propose a procedure for estimating the parameters of the random processes of the differential system. The estimation methods and the theoretical reliability results, together with the associated computational algorithms, are validated on simulated data. Our method is applied to the modeling of a real degradation mechanism, crack propagation, for which an experimental data set is available.
ano.nymous@ccsd.cnrs.fr.invalid (Julien Chiquet), Julien Chiquet
We present a first version of a software dedicated to an application of a classical nonlinear control theory problem to the study of compartmental models in biology. The software is being developed over a new free computer algebra library dedicated to differential and algebraic elimination.
ano.nymous@ccsd.cnrs.fr.invalid (François Boulier), François Boulier
Topological optimization of networks is a complex multi-constraint and multi-criterion optimization problem in many real-world fields (telecommunications, electricity distribution, etc.). This paper describes a heuristic algorithm using Binary Decision Diagrams (BDD) to solve the reliable communication network design problem (RCND) \cite{ga1}. The aim is to design a communication network topology with minimal cost that satisfies a given reliability constraint.
ano.nymous@ccsd.cnrs.fr.invalid (Gary Hardy), Gary Hardy
We consider the inverse conductivity problem with one measurement for the equation $div((\sigma_1+(\sigma_2-\sigma_1)\chi_D)\nabla{u})=0$, determining the unknown inclusion $D$ included in $\Omega$. We suppose that $\Omega$ is the unit disk of $\mathbb{R}^2$. With the tools of conformal mappings, elementary Fourier analysis, and the action of some quasi-conformal mappings on the Sobolev space $H^{1/2}(S^1)$, we show how to approximate the Dirichlet-to-Neumann map when the original inclusion $D$ is an $\varepsilon$-approximation of a disk. This enables us to give some uniqueness and stability results.
ano.nymous@ccsd.cnrs.fr.invalid (Marc Dambrine), Marc Dambrine
Before estimating the parameters occurring in dynamical systems, linear or nonlinear, controlled or uncontrolled, it is important to carry out an identifiability study, that is, to determine whether the parameters under study can be uniquely recovered from the experimental data. Several methods have been developed in recent years, in particular one based on differential algebra. It has led to an algorithm, using the Diffalg package implemented in Maple, for testing the identifiability of systems of differential equations. The results of this study make it possible to set up numerical methods that yield a first estimate of the parameters without any prior knowledge of their values. This first estimate can then be used as the starting point of iterative algorithms specialized in the study of ill-posed problems: Tikhonov regularization. In this thesis, two nonlinear pharmacokinetic models of Michaelis-Menten type were first studied. We then turned to a pollution model described by a parabolic partial differential equation. The source term to be identified was modeled as the product of the flow-rate function with a Dirac mass supported at the position of the pollution source. The goal of this work was to provide a first estimate of the pollution source. After establishing the identifiability of the continuous problem, we studied the identifiability of an approximate problem using differential-algebra methods. The latter was obtained by approximating the Dirac mass by a Gaussian function and then discretizing the system in space. Identifiability results were obtained regardless of the number of spatial discretization points. From this theoretical study we derived numerical algorithms providing a first estimate of the parameters to be identified.
ano.nymous@ccsd.cnrs.fr.invalid (Nathalie Verdière), Nathalie Verdière
In this paper, we present a network decomposition method using Binary Decision Diagrams (BDD), a state-of-the-art data structure to encode and manipulate boolean functions, for computing the reliability of networks such as computer, communication or power networks. We consider the \textit{so-called} $K$-terminal reliability measure $R_K$, which is defined as the probability that a subset $K$ of nodes can communicate with each other, taking into account the possible failures of the network components (nodes and links). We present an exact algorithm for computing the $K$-terminal reliability of a graph $G=(V,E)$ in $O(|E| \cdot F_{max} \cdot 2^{F_{max}} \cdot B_{F_{max}})$, where $B_{F_{max}}$ is the Bell number of the maximum boundary set $F_{max}$. Other reliability measures are also discussed. Several examples and experiments show the effectiveness of this approach \footnote{This research was supported by the \emph{Conseil Regional de Picardie}.}.
ano.nymous@ccsd.cnrs.fr.invalid (Gary Hardy), Gary Hardy
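As a point of comparison for BDD-based algorithms, the measure $R_K$ can be computed exactly on small networks by brute-force enumeration of edge states. The sketch below (exponential in $|E|$, with hypothetical function names, and assuming perfectly reliable nodes) is only a baseline for validating faster implementations; it reproduces the classical bridge-network value $R = 0.97848$ for all edge reliabilities equal to $0.9$.

```python
from itertools import product

def k_terminal_reliability(nodes, edges, K, p):
    """Exact K-terminal reliability by enumerating all 2^|E| edge states.

    `edges` is a list of (u, v) pairs, `p[e]` the probability that edge
    e works, and `K` the set of terminals that must be mutually
    connected. Nodes are assumed perfectly reliable.
    """
    def connected(up_edges):
        # Union-find over the surviving edges
        parent = {v: v for v in nodes}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for u, v in up_edges:
            parent[find(u)] = find(v)
        return len({find(v) for v in K}) == 1

    total = 0.0
    for state in product([0, 1], repeat=len(edges)):
        prob = 1.0
        up = []
        for e, s in enumerate(state):
            prob *= p[e] if s else 1.0 - p[e]
            if s:
                up.append(edges[e])
        if connected(up):
            total += prob
    return total

# Classical bridge network: terminals {0, 3}, all edges work w.p. 0.9
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
R = k_terminal_reliability(nodes, edges, {0, 3}, [0.9] * 5)
```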
Recently several authors have considered finite mixture models with semi-/non-parametric component distributions. Identifiability of such model parameters is generally not obvious, and when it holds, inference methods are rather specific to the mixture model under consideration. In this paper we propose a generalization of the EM algorithm to semiparametric mixture models. Our approach is methodological and can be applied to a wide class of semiparametric mixture models. The behavior of the EM-type estimators we propose is studied numerically through several Monte Carlo experiments and by comparison with alternative methods from the literature. In addition to these numerical experiments, we provide applications to real data showing that our estimation method behaves well, is fast, and is easy to implement.
ano.nymous@ccsd.cnrs.fr.invalid (Laurent Bordes), Laurent Bordes
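For reference, the shared E-step/M-step skeleton can be shown on the fully parametric two-component Gaussian case. The semiparametric algorithms discussed above replace the parametric M-step with a nonparametric density update, so the sketch below only illustrates the common EM structure; all names and values are illustrative.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=200):
    """Minimal EM for a two-component Gaussian mixture on 1-D data.

    Returns the weight of the second component, the two means, and
    the two standard deviations.
    """
    w = 0.5
    mu = np.array([x.min(), x.max()])        # spread-out initialization
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior probability of component 1 for each point
        # (the 1/sqrt(2*pi) constant cancels in the ratio)
        d0 = np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        d1 = np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r = w * d1 / ((1 - w) * d0 + w * d1)
        # M-step: weighted maximum-likelihood updates
        w = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - r)),
                       np.sqrt(np.average((x - mu[1]) ** 2, weights=r))])
    return w, mu, sd

# Two well-separated components, mixed 40/60
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 400), rng.normal(3, 1, 600)])
w, mu, sd = em_gaussian_mixture(x)
```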
We present an environment for the automatic generation of simulations based entirely on XML technologies. The proposed description language makes it possible to describe mathematical objects such as systems of differential equations, systems of nonlinear equations, partial differential equations in dimension 2, or simply curves and surfaces. It also allows the parameters on which these objects depend to be described. This language is software-independent and therefore guarantees the durability of the authors' work, as well as its sharing and reuse. We also describe the architecture of a "compilation chain" that transforms these XML files into scripts and runs them in the Scilab software.
ano.nymous@ccsd.cnrs.fr.invalid (Stéphane Mottelet), Stéphane Mottelet
The first part concerns the sequential estimation of the regression parameter in the Cox model for right-censored data. It is thus possible to define stopping rules guaranteeing a good estimation. These lead to estimators depending on random sample sizes whose asymptotic behavior is the same as that of the non-sequential estimators. The properties proved are extended to the multidimensional setting and illustrated by simulations. This first part ends with a theoretical study of the behavior of the stopping variable in the framework of sequential confidence intervals; the normalized stopping rule is then asymptotically normal. The second part deals with the construction of homogeneity tests in the framework of a nonparametric lifetime model including covariates as well as right censoring. A test statistic is proposed and its asymptotic behavior is established.
ano.nymous@ccsd.cnrs.fr.invalid (Christelle Breuils), Christelle Breuils
The use of integral methods in the petroleum industry is recent and remains limited to 2D problems, the well being modeled as a source term. In this work, we propose a new integral method to evaluate well performance in a stratified reservoir of arbitrary 3D geometry. Here, the flow in the well is taken into account through two types of boundary conditions, the first linear, the second nonlinear and nonlocal. We have shown that each of the two models (linear and nonlinear) is well posed. On the numerical side, we have developed a new integral formulation, equivalent to the linear model. The integral equations were discretized by a Galerkin method. Moreover, we were able to exploit the difference in scales to use a one-dimensional (wire) approximation of the well. Numerical tests show that this new integral method computes the productivity index of the well to within 1%.
ano.nymous@ccsd.cnrs.fr.invalid (Valérie Moumas), Valérie Moumas