International Conference «Mathematical and Information Technologies, MIT-2016»

28 August – 5 September 2016

Vrnjacka Banja, Serbia – Budva, Montenegro

Plenary Speakers

   Nebojša Arsić

   Full Professor, Faculty of Technical Sciences, University of Priština, PhD

   Kosovska Mitrovica

Usage of E-government in the Republic of Serbia

Public institutions and state authorities are introducing various electronic governance services in order to facilitate business, accelerate the exchange of information and reduce costs. These services are currently available only at the level of local government. However, an effort is being made to popularize them and make them widely used by the people whom they serve. The concept of electronic government envisions interactive electronic services tailored to the needs of citizens and businesses and integrated at all levels of the public sector. In this way, electronic governance can provide more efficient, transparent and accountable public services adapted to the needs of citizens and businesses. The application of these services requires the adoption of appropriate legal regulations by the state authorities; these legal acts regulate every aspect of their application, as well as sanctions for possible abuse. This paper outlines the current state of electronic administration in the Republic of Serbia, presents a strategy for its further development, and provides a comparison with similar services in neighboring countries in terms of service usage and the security mechanisms applied. In addition, the paper covers the legal regulations that apply when using these services.

   Vladimir Barakhnin

   Professor, Institute of Computational Technologies SB RAS, PhD

   Novosibirsk, Russia

Automating the creation of metrical handbooks and concordances using computer algorithms for the analysis of Russian poetic texts

In this paper we outline the main approaches to automating the statistical analysis of the lower structural levels (meter, rhythm, phonetics, vocabulary, grammar) of Russian poetic texts, and we present an algorithm for the complex analysis of such texts intended to automate the compilation of metrical reference books and concordances. The results of this analysis will significantly expand the opportunities available to philologists studying both the levels of poetry indicated above and their semantic and pragmatic features. In addition, philologists can be spared routine work, and the range of analyzed works can be widened by reducing the dependence of the quality of comparative analysis on the personal knowledge of the researcher.

   Sergey Chernyi

   Professor, Institute of Computational Technologies SB RAS, PhD

   Novosibirsk, Russia

Methods for optimal control of hydraulic fracturing process

Based on an already developed numerical model of hydraulic fracturing, a method for optimal control of the process is proposed. It consists in choosing the parameters of the rheological laws for the fluid, the pumping schedule, and the conditions of fracture initiation (the shape of the perforation and its orientation relative to the in-situ stresses) that satisfy requirements such as the desired location of the incipient fracture, linearity of the fracture propagation trajectory, uniformity of the fracture width distribution along the trajectory, absence of profile twisting along the trajectory, minimal cost of hydraulic fracturing, maximal volume of produced oil, etc. The input parameters are chosen by means of a genetic algorithm combined with direct problem simulation.
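The parameter search described above can be sketched in outline as a simple genetic algorithm minimizing a cost function over candidate treatment parameters. The parameter names, their ranges, and the cost function below are placeholders for illustration only; in the talk the objective is evaluated by the direct fracture simulation, not by a closed-form expression.

```python
import random

# Hypothetical treatment parameters: (fluid consistency index, pump rate, perforation angle).
BOUNDS = [(0.1, 1.0), (1.0, 10.0), (0.0, 90.0)]

def cost(params):
    # Placeholder objective standing in for the direct fracture simulation:
    # penalize squared deviation from an arbitrary target configuration.
    target = (0.5, 6.0, 30.0)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def genetic_search(pop_size=40, generations=60, mut_rate=0.2, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in BOUNDS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]          # truncation selection (keeps the elite)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]   # uniform crossover
            for i, (lo, hi) in enumerate(BOUNDS):              # random-reset mutation
                if rng.random() < mut_rate:
                    child[i] = rng.uniform(lo, hi)
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=cost)
```

Because the top half of each generation is carried over unchanged, the best candidate never worsens from one generation to the next.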

   Allal Guessab

   Professor, Université de Pau et des Pays de l'Adour, PhD

   Pau, France

On the enrichment of finite-element approximations

In this talk we present a general method for enriching (conforming or nonconforming) finite element approximations through the use of additional enrichment functions (not necessarily polynomial). To this end, we first give, under certain conditions on the enrichment functions, an abstract general theorem characterizing the existence of any enriched finite element approximation. We then establish four key lemmas in order to prove, under a unisolvence condition, a more practical characterization result. As illustrative applications, using a general class of trapezoidal, midpoint and Simpson-type cubature formulas, or their perturbed versions, which employ integrals over facets, we describe how our method may be applied to build new enriched nonconforming finite elements. These are obtained, respectively, as enrichments of the Han rectangular element and the well-known Wilson element.

   Anatoliy Lepikhin

   Institute of Computational Technologies SB RAS, PhD

   Krasnoyarsk, Russia

A risk assessment methodology for critical infrastructure

The security and the economic and social stability of a country are largely determined by the vulnerability of its critical infrastructure. The traditional risk-based approach cannot be applied to ensure the security of such systems, since even when the probability of a hazardous event is minimal, the resulting damage can be very large.

The report proposes a methodological approach to risk analysis of critical infrastructures based on the prioritization of threat categories and the forms of their manifestation. The risk analysis is based on Bayesian methodology, which can effectively combine qualitative (expert) assessments with quantitative (model-based) risk calculations.
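One standard way to combine an expert assessment with observed data in the Bayesian spirit described above is a conjugate Beta-Binomial update. This is an illustrative sketch only, not the methodology of the report; the prior parameters and incident counts below are invented for the example.

```python
# An expert judgement about a failure probability is encoded as a Beta prior;
# observed incident counts then update it to a posterior that blends both.

def beta_update(prior_a, prior_b, incidents, trials):
    """Posterior Beta(a, b) after observing `incidents` events in `trials` periods."""
    return prior_a + incidents, prior_b + trials - incidents

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution: the point estimate of the event probability."""
    return a / (a + b)

# Expert says: event probability around 10%, with moderate confidence -> Beta(2, 18).
a, b = 2.0, 18.0
# Data: 3 hazardous events observed in 50 monitoring periods (rate 0.06).
a, b = beta_update(a, b, incidents=3, trials=50)
posterior = beta_mean(a, b)   # lies between the expert's 0.10 and the empirical 0.06
```

The posterior mean here is 5/70 ≈ 0.071: the data pull the expert's 10% estimate toward the observed 6% rate, with the weighting determined by the prior's strength.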

   Žarko Mijajlović

   Professor, State University of Novi Pazar, PhD

   Novi Pazar, Serbia

Applications of regularly varying functions in the study of cosmological parameters

Most of the cosmological parameters, such as the scale factor a(t), the energy density ρ(t) and the pressure p(t) of the material in the universe, asymptotically satisfy a power law. On the other hand, quantities that satisfy a power law are best modeled by regularly varying functions. The aim of this paper is to apply the theory of regularly varying functions to the study of the Friedmann equations and their solutions, which are in fact the cosmological parameters mentioned above. More specifically, in our paper On asymptotic solutions of Friedmann equations (Ž. Mijajlović, N. Pejović, S. Šegan, and G. Damljanović, Applied Mathematics and Computation, 2012) we introduced a new constant Γ related to the Friedmann equations. By determining the values of Γ we obtain the asymptotic behavior of the solutions, i.e. of the expansion scale factor a(t). The case Γ < 1/4 is appropriate for both the spatially flat and the open universe, and gives a necessary and sufficient condition for the solutions to be regularly varying. This property of the Friedmann equations is formulated as the generalized power law principle. From the theory of regular variation it follows that, under the usual assumptions, the solutions include a multiplicative term which is a slowly varying function.
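For reference, the standard (Karamata) definition of regular variation underlying this analysis is the following; this is textbook material, not a statement taken from the talk itself.

```latex
\text{A measurable } f\colon [a,\infty)\to(0,\infty)
\text{ is \emph{regularly varying} of index } \rho \text{ if }
\lim_{t\to\infty}\frac{f(\lambda t)}{f(t)} = \lambda^{\rho}
\quad\text{for every } \lambda > 0.
% If \rho = 0, f is called slowly varying.  Every regularly varying f
% factors as f(t) = t^{\rho} L(t) with L slowly varying, which is the
% "multiplicative slowly varying term" mentioned in the abstract.
```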

   Gradimir V. Milovanović

   Academician, Research Professor, Mathematical Institute of the SASA

   President of the Scientific Committee for Mathematics, Computer Sciences and Mechanics

   Belgrade, Serbia

Nonstandard Quadratures and Applications in the Fractional Calculus

Nonstandard Gaussian and Gauss–Lobatto quadrature formulas are considered, including their numerical construction. Applications of such quadratures in fractional calculus to the approximation of fractional derivatives are presented. Several numerical experiments are given in order to illustrate and test the behaviour of these quadratures.
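As elementary background (the classical Gauss–Legendre rule, not the nonstandard quadratures of the talk), a two-point Gaussian formula on an arbitrary interval can be written as follows; it is exact for all polynomials of degree up to 3.

```python
import math

def gauss_legendre_2pt(f, a, b):
    """Two-point Gauss-Legendre quadrature for the integral of f over [a, b].

    Nodes +-1/sqrt(3) on the reference interval [-1, 1], both with weight 1;
    the affine map below transfers them to [a, b].  Exact through degree 3.
    """
    nodes = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))
    mid, half = (a + b) / 2.0, (b - a) / 2.0
    return half * sum(f(mid + half * t) for t in nodes)
```

With only two function evaluations the rule integrates cubics exactly, which is the defining optimality property of Gaussian quadrature.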

   Zoran Ognjanović

   Research Professor, Mathematical Institute of the SASA, PhD

   Belgrade, Serbia

Probability Logics

The paper summarizes the authors' results in the formalization of uncertain reasoning. A number of probability logics are considered; their axiomatizations, completeness, compactness and decidability are addressed. Some possible applications of probability logics are analyzed, e.g., nonmonotonic reasoning, measuring the inconsistency of knowledge bases, and heuristic approaches to satisfiability.

   Stefan R. Panić

   Head of the Department of Informatics, Faculty of Natural Sciences and Mathematics, PhD

   Kosovska Mitrovica

Algorithms for mitigating signal impairments in a fading environment

Signal propagation in a wireless medium is accompanied by various side effects and impairments, such as multipath fading and shadowing. A mathematical characterization of these complex phenomena, describing various types of propagation environments, will be presented. First, various models already known in the literature and used for the statistical modeling of multipath influence will be introduced, such as the Rayleigh, Ricean, Hoyt, Nakagami-m, Weibull, α-μ, η-μ, and κ-μ fading models. Then some models for the statistical modeling of shadowing, such as the log-normal and Gamma shadowing models, will be mentioned. Finally, attention will be paid to composite models, which correspond to the scenario in which multipath fading is superimposed on shadowing; the Suzuki, Ricean-shadowing and generalized-K composite fading models will be discussed. The analysis will then be extended by introducing correlative fading models, considering exponential, constant and general types of correlation between the random processes. Several performance measures related to wireless communication system design will be defined and mathematically modeled, such as the average signal-to-noise ratio, outage probability, average symbol error probability, amount of fading, level crossing rate and average fade duration. The basic concepts of several space diversity reception algorithms, such as maximal ratio combining, equal gain combining, selection combining and switch-and-stay combining, will also be portrayed, with emphasis on the evaluation of reception performance measures. The necessity and validity of using space diversity reception algorithms to mitigate multipath fading and co-channel interference (CCI) will be shown by first evaluating single-channel receiver performance for a few general propagation models, and then by presenting the performance improvement achieved at the receiver through the application of diversity reception algorithms, measured by the standard performance criteria.
Also, the necessity and validity of using macrodiversity reception algorithms to mitigate multipath fading and shadowing will be considered through second-order statistical measures at the output of the macrodiversity receiver.
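Two of the ideas above, Rayleigh fading and selection combining, can be illustrated with a small Monte Carlo sketch. Under Rayleigh fading the instantaneous SNR is exponentially distributed, so the single-branch outage probability has the closed form 1 − exp(−γ_th/γ̄), and an L-branch selection combiner picks the strongest branch. The function names and parameter values here are ours, chosen for the example.

```python
import math
import random

def rayleigh_snr(mean_snr, rng):
    """One draw of instantaneous SNR under Rayleigh fading.

    |h|^2 is exponential with unit mean, so the SNR is exponential
    with mean `mean_snr` (inverse-transform sampling below)."""
    return -mean_snr * math.log(1.0 - rng.random())

def outage_prob(mean_snr, threshold, branches=1, trials=100_000, seed=1):
    """Monte Carlo outage probability with L-branch selection combining:
    the receiver uses the strongest of `branches` independent branches."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        snr = max(rayleigh_snr(mean_snr, rng) for _ in range(branches))
        if snr < threshold:
            outages += 1
    return outages / trials
```

For a mean SNR of 10 and a threshold of 5, the single-branch estimate should land near the analytic value 1 − e^(−0.5) ≈ 0.393, and adding a second branch should drive the outage probability down toward its square, ≈ 0.155.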

   Žarko Pavićević

   Professor, University of Montenegro, PhD

   Podgorica, Montenegro

On P-sequences of analytic functions and their applications

   Boris Ryabko

   Professor, Institute of Computational Technologies SB RAS, PhD

   Novosibirsk, Russia

 
An information-theoretic approach to estimation of the capacity of computers and supercomputers

Currently there are many approaches to evaluating the performance of computer systems. Most methods rely on benchmarks, i.e. test sets of tasks used to determine the execution time required to solve them. Computers are then compared by analyzing these execution times. A disadvantage of benchmarks is the need for a working model of the tested system, which restricts their usage and increases the cost of evaluating and comparing computers. Moreover, benchmarks cannot be used to evaluate a computer during the development phase, when no working model exists, and their objectivity is reduced by the fact that they are focused on specific tasks.

A concept of computer capacity was suggested and then applied to the analysis of computers of different kinds. The computer capacity characterizes the performance of real computers with different CPU clock speeds, numbers of processor cores, memory organizations, processor instruction sets, etc. It is important to note that the computer capacity is estimated theoretically, based on the instruction set of the processors and the execution times of the instructions, including the latencies of accessing different types of memory (cache memory, RAM, etc.), as well as the delays associated with restarting the pipeline, with changing the processor context, and with exceptions that occur during the execution of instructions. The basis of the developed approach is the concept of Shannon entropy, the capacity of the discrete channel, and some other ideas from information theory.

In this report we apply the computer capacity to the analysis of modern supercomputers. More precisely, we estimate the computer capacity of the following three Cray supercomputers from the Top 500 list (November 2015): Trinity (Cray XC40), Hazel Hen (Cray XC40) and Shaheen II (Cray XC40), which are sixth, eighth and ninth on the list. It turned out that our theoretical estimates are close to those derived from benchmarks. We also consider how supercomputer parameters influence the capacity and make some recommendations on how to increase performance.
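The underlying notion from information theory is the capacity of Shannon's discrete noiseless channel: for symbol (here, instruction) durations t_i, the capacity is the largest x satisfying Σ_i 2^(−t_i·x) = 1. Since the left-hand side is strictly decreasing in x, that root can be found by bisection. The instruction timings below are hypothetical, and this is a sketch of the classical formula only, not of the full computer-capacity estimation of the report.

```python
def channel_capacity(times, lo=1e-9, hi=64.0, tol=1e-12):
    """Capacity (bits per time unit) of a noiseless channel with symbol
    durations `times`: the unique x > 0 with sum(2**(-t*x)) == 1.

    f(x) = sum(2**(-t*x)) - 1 is strictly decreasing, positive near 0
    (for two or more symbols) and negative for large x, so bisection applies."""
    def f(x):
        return sum(2.0 ** (-t * x) for t in times) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two hypothetical instructions, each taking 1 time unit: one bit per unit.
cap = channel_capacity([1.0, 1.0])
```

Sanity check: with two equal-duration symbols the channel carries exactly one bit per symbol time; with durations 1 and 2 the equation reduces to y + y² = 1 for y = 2^(−x), giving x ≈ 0.694.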

   Yuriy Zakharov

   Professor, Institute of Computational Technologies SB RAS, PhD

   Novosibirsk, Russia

Mathematical modeling of artificial aortic heart valve

In this paper we propose a mathematical model describing viscous inhomogeneous fluid flow in a canal with flexible walls and a valve. We present the results of modeling the dynamics of the valve leaflets and of the fluid flow inside the “Yunilain” valve.

© 1996-2017, Institute of computational technologies of SB RAS, Novosibirsk