DSPTech’2015

Proceedings of the 1st International Workshop on

Technologies of Digital Signal Processing and Storing (DSPTech’2015)

Ufa, December 10 - 13, 2015

Volume 2

UDC 004.7

Proceedings of the Workshop on Technologies of Digital Signal Processing and Storing (DSPTech’2015), Ufa, Russia, December 10-13, 2015. Volume 2. Ufa: Ufa State Aviation Technical University, 2015.

ISBN 978-5-4221-0812-1

The Workshop on Technologies of Digital Signal Processing and Storing (DSPTech) is an international conference which aims to promote discussion of the development of research and innovation in the field of information technology and its applications. The conference discusses a wide range of problems associated with the storage, processing, analysis and visualization of data relating to different scientific and applied areas of human activity. The Workshop was supported by RFBR, research project No. 15-07-21015 G.

The papers are included as submitted by the authors, without editing.

© Ufa State Aviation Technical University, 2015

Foreword

These Proceedings contain the full texts of the papers selected for presentation at the 1st International Workshop on Technologies of Digital Signal Processing and Storing (DSPTech’2015).

DSPTech’2015 Conference aims to promote discussion of the development of research and innovation in the field of information technology and its applications. The conference will discuss a wide range of problems associated with the storage, processing, analysis and visualization of data relating to different scientific and applied areas of human activity.

The Workshop attracted more than one hundred submitted papers from different countries, including both conceptual and experience papers. Both types of submissions were evaluated in the same way, in accordance with the standards of international forums. The careful reviewing procedure by the program committee resulted in 49 papers being chosen for presentation in the international track (Volume 2). The sections – System Engineering; Models, Methods and Algorithms of Data Processing and Storing; Import Substitution and Software and Hardware for Automation and Control; and Innovative Technologies and Methods of Economical and Social Data Processing – reflect the areas that are gaining attention. Some of the experience papers, especially in the industrial sessions, present results from the development and application of real systems and were considered by the program committee to be of interest to a wider community.

The Program Committee would like to extend its appreciation to all those who submitted preliminary or extended versions of their papers for international peer review.

Finally, our thanks go to all the attendees of the Workshop here in Ufa, Russia, who boosted research activity in the Technologies of Digital Signal Processing and Storing.

Mironov Valery
Lutov Alexei
Yusupova Nafisa
Khristodulo Olga

Symposium Organization

PROGRAM CO-CHAIRS
Lutov Alexei, USATU, Ufa, Russia
Yusupova Nafisa, USATU, Ufa, Russia
Khristodulo Olga, USATU, Ufa, Russia

INTERNATIONAL PROGRAM COMMITTEE

Woern Heinz, University of Karlsruhe, Germany
Kovacs George, Hungarian Acad. of Sciences, Budapest, Hungary
Corbera Jordi, Cartographic and Geologic Institute of Catalonia, Spain
Mcmillan Susan, British Geological Survey, Edinburgh, UK
Nikolaikin Nikolay, MSTUCA, Moscow, Russia
Milovzorov Georgy, USU, Izhevsk, Russia
Lomaev Gely, IzhGTU, Izhevsk, Russia
Verevkin Aleksander, USPTU, Ufa, Russia
Enikeev Farid, USPTU, Ufa, Russia
Smetanina Olga, USATU, Ufa, Russia
Vasilyev Vladimir, USATU, Ufa, Russia
Wolfengagen Viacheslav, JurInfoR-MSU Institute, Moscow, Russia
Gvozdev Vladimir, USATU, Ufa, Russia
Kulikov Gennady, USATU, Ufa, Russia
Massel Lyudmila, Energy Systems Institute, Irkutsk, Russia
Melnikov Andrey, ChSU, Chelyabinsk, Russia
Petunin Alexander, USTU-UPI, Yekaterinburg, Russia

GENERAL ORGANIZING COMMITTEE CHAIR
Mironov Valery, USATU, Ufa, Russia

Contents


Development of the Formal Model of the Production Process for Organization of Project and Production Management Using Iterative Model Artuhov Alexander V.1, Rechkalov Alexander V.2, Kulikov Gennady G.2, Antonov Vyacheslav V.2 1The General Director of Joint-Stock Company "United Engine Corporation" 2Ufa State Aviation Technical University, Ufa, Russian Federation ...... 10

Intelligent Modelling of Manufacturing Supervisory Systems Using Fuzzy Cognitive Maps P. P. Groumpos University of Patras, Department of Electrical and Computer Engineering, Patras Greece ...... 13

System Engineering

The Design of Neural Network Algorithms for Processing Sensor Signals of Aviation Gas Turbine Engine on FPGA S.V. Zhernakov, A.T. Gilmanshin Ufa State Aviation Technical University, Ufa, Russia ...... 20

BIM Technologies in Interdisciplinary Educational Program «APPLIED INFORMATICS IN ARCHITECTURE» G. Zakharova, A. Krivonogov Ural State Academy of Architecture and Arts, Ekaterinburg, Russia ...... 25

Models, Methods and Algorithms of Data Processing and Storing

Counting Cutter Routing Optimization Parameters for Cutting Plan Given as Flat Graph E. A. Savitskiy1, V. M. Kartak2 1South Ural State University (National Research University), Chelyabinsk, Russia 2Bashkir State Pedagogical University Named after M. Akmullah, Ufa, Russia ...... 31

Information Technology for Data Backup and Recovery Grischuk Konstantin Borisovich, Rizvanov Konstantin Anvarovich Ufa State Aviation Technical University, Ufa, Russia ...... 36


The Use of Probabilistic and Statistical Methods to Study Students' Opinions about the Quality of the Educational Process Rozanova Larisa F., Markevich Irina A., Turutina Anastasiya D. Ufa State Aviation Technical University, Ufa, Russian Federation ...... 39

Statistical Analysis of Individual Tasks on Probability Theory K. Kostenko, Y. Katsman Tomsk Polytechnic University, Tomsk, Russia ...... 46

Mathematical Model of Polyominotiling for Phased Array Design A. Fabarisova1, V. Kartak2 Bashkir State Pedagogical University Named after M. Akmullah, Ufa, Russia ...... 50

Formation of Cross-Platform Structural System Model of Business Processes on the Basis of Hierarchy of Grammars of Chomsky D. Shamidanov Ufa State Aviation Technical University, Ufa, Russia ...... 53

Hashing Approach to Erdős-Gyárfás Conjecture A. Ripatti Bashkir State Pedagogical University Named after M. Akmullah, Ufa, Russia ...... 58

Markov Modelling for System Safety Purposes of Power-Plant Abdulnagimov A.I., Arkov V.Yu. Ufa State Aviation Technical University, Ufa, Russia ...... 62

Development of the Intelligence System about the Timberland Which is not Covered with Forest Vegetation for Decision-Making on Reforestation Enikeev Rustem R., Mirzakhanova Albina B., Sitdikova Elina O., Vazigatov Dinar I. Ufa State Aviation Technical University, Ufa, Russia ...... 71

Import Substitution Software and Hardware for Automation and Control

Prospects of Cellular Automata Usage for Middle and Large Cities Traffic Modeling A. Shinkarev Computer Technologies, Control and Radio Electronics, South Ural State University (National Research University), Chelyabinsk, Russia ...... 74


On Software Implementation of Numerical Methods for Linear Optimization A. Latipova, Z. Ekaterina South Ural State University, Chelyabinsk, Russia ...... 78

The Architecture of the Information and Control Complex for Intelligent Transport Systems L.R. Sayapova, L.R. Shmeleva, A.A. Shmelev, R.M. Nafikova Ufa State Aviation Technical University, Ufa, Russia ...... 84

Automation Catering Activities Enikeev Rustem R., Suvorova Veronica A., Solyeva Anastasiya V., Vazigatov Dinar I., Sitdikova Elina O., Mirzaxanova Albina B. Ufa State Aviation Technical University, Ufa, Russia ...... 90

Innovative Technologies and Methods of Economical and Social Data Processing

Problems of Application of Formal Methods for Modeling Software Systems Design Process for Systems with Incomplete Information and High Complexity of Subject Domain Zagitov German R., Antonov Dimitri V., Antonov Vyacheslav V. Ufa State Aviation Technical University, Ufa, Russia ...... 92

Analysis of Effectiveness of Cross-Platform Software of the Information Environment (on the Example of the University and Enterprise) A. R. Fakhrullina Ufa State Aviation Technical University, Ufa, Russia ...... 101

Formal Domain Model Given the Vague Descriptions of the Object Model Antonov Vyacheslav V., Suvorova Veronica A., Solyeva Anastasiya V. Ufa State Aviation Technical University, Ufa, Russia ...... 105

Production Allocation System for Oil and Gas Companies S. Abramov Tieto Oil and Gas, Perth, Australia ...... 113

Applying of Portal Technologies in Development of Discipline Curriculum E. Biryukova, A. Malakhova Ufa State Aviation Technical University, Ufa, Russia ...... 118

Mobile Remote Monitoring Platform of the Cardiovascular System I.S. Runov, J.O. Urazbahtina Ufa State Aviation Technical University, Ufa, Russia ...... 124

Modern Aspects Of Information War N. Andreyev Ufa State Aviation Technical University, Ufa, Russia ...... 129

Planning Final Budget Companies Enikeev Rustem R., Suvorova Veronica A., Solyeva Anastasiya V., Vazigatov Dinar I., Sitdikova Elina O. Ufa State Aviation Technical University, Ufa, Russia ...... 133

On Social Programs Implemented by the Soc «Bashneft» Means Of Advertising and Public Relations Yagudina Aigul V. Oil and Gas Business Institute, USPTU, Ufa, Russia ...... 138

Development of the Formal Model of the Production Process for Organization of Project and Production Management Using Iterative Model

Artuhov Alexander V.1, Rechkalov Alexander V.2, Kulikov Gennady G.2, Antonov Vyacheslav V.2
1The General Director of Joint-Stock Company "United Engine Corporation"
2Automated and Management Control Systems, USATU, K. Marx St., 12, Ufa, Russian Federation
[email protected]

Keywords: Domain model; semantic model; production system; corporate information system (CIS); integrated information processing; category of sets; attribute translation.

Abstract: The article considers the problem of constructing a formal mathematical and semantic domain model (the organization of production activities). The evolution of information systems is considered from the perspective of their integration into the operating business rules of a production enterprise by formalizing the latter. Up to the present time, many theoretical problems associated with the modeling of business processes and their further information support (in conjunction with the semantic rules of regulation in corporate information systems, CIS) remain unexplored. A variant of the domain model is proposed using methods that take into account the vagueness of the descriptions of the model of the investigated object, in accordance with the provisions of systems engineering.

Currently, a process approach to the organization of project and production management, based on formal models of the system life cycle (LC), is widely used. The role of standards used at all stages of management is thus significant, primarily because standards ensure the interoperability of different components with each other. Of greatest interest is ISO/IEC 15288, which is a framework standard, i.e. it specifies only general requirements for the implementation of the processes associated with the development and life-cycle support of systems. Typically, it is used as a methodological basis for further concretization of business process management. The provisions of this standard are presented in verbal form, and it is of interest to recast them as a set-theoretic model. Consider the agreement processes (denoted $PS$) allocated in ISO/IEC 15288 [1], which consist of the acquisition process $PS_{pr}$ and the supply process $PS_{po}$. The standard details these processes further: the acquisition process consists of its purpose $PS_{pr}^{1}=\{pr_1^1\}$, its outcomes $PS_{pr}^{2}=\{pr_1^2,\ldots,pr_7^2\}$ and its activities $PS_{pr}^{3}=\{pr_1^3,\ldots,pr_8^3\}$; the supply process consists of its purpose $PS_{po}^{1}=\{po_1^1\}$, its outcomes $PS_{po}^{2}=\{po_1^2,\ldots,po_7^2\}$ and its activities $PS_{po}^{3}=\{po_1^3,\ldots,po_9^3\}$. The relationships between organizations can then be determined in the form of an ordered set

$PS = \langle PS_{pr}^{1}, PS_{pr}^{2}, PS_{pr}^{3}, PS_{po}^{1}, PS_{po}^{2}, PS_{po}^{3} \rangle$. (1)

Thus the agreement processes form a class of objects, for each pair of objects of which
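The structure behind the ordered set (1) can be sketched directly in code. The element names below (pr_i, po_i) and the set sizes are illustrative placeholders, not the standard's own identifiers.

```python
# Sketch of the agreement processes PS as an ordered tuple of sets.
# Element names (pr_i, po_i) are hypothetical placeholders.

# Acquisition process: purpose, outcomes, activities
PS_pr = (
    frozenset({"pr_1^1"}),                        # purpose
    frozenset(f"pr_{i}^2" for i in range(1, 8)),  # outcomes
    frozenset(f"pr_{i}^3" for i in range(1, 9)),  # activities
)

# Supply (delivery) process: purpose, outcomes, activities
PS_po = (
    frozenset({"po_1^1"}),
    frozenset(f"po_{i}^2" for i in range(1, 8)),
    frozenset(f"po_{i}^3" for i in range(1, 10)),
)

# Formula (1): the agreement processes as an ordered set
PS = PS_pr + PS_po
```

The tuple preserves the order of the components, while each component remains an unordered set, mirroring the mixed ordered/unordered structure of (1).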

$PS_1$ and $PS_2$ a set of morphisms $\mathrm{Hom}(PS_1, PS_2)$ is specified; for each pair of morphisms, for example $g \in \mathrm{Hom}(PS_1, PS_2)$ and $f \in \mathrm{Hom}(PS_2, PS_3)$, their composition $f \circ g \in \mathrm{Hom}(PS_1, PS_3)$ is defined. That is, the agreement processes form a category of sets. Similarly, consider the processes of the enterprise (denoted $PP$), which include the process of managing the enterprise environment $PP_{spr}$, consisting of its purpose


$PP_{spr}^{1}=\{spr_1^1\}$, the results of the process $PP_{spr}^{2}=\{spr_1^2, spr_2^2, spr_3^2\}$ and the activities in the process $PP_{spr}^{3}=\{spr_1^3,\ldots,spr_6^3\}$. Pursuing the same reasoning further, we arrive at the formula that defines the relationship between the processes of an organization (enterprise) in the form of ordered sets:

$PP = \langle PP_{spr}, PP_{inv}, PP_{gc}, PP_{sr}, PP_{sk}, PP_{ch} \rangle$. (2)

Thus, the processes of the enterprise also form a class of objects, for each pair of objects $PP_1$ and $PP_2$ of which a set of morphisms $\mathrm{Hom}(PP_1, PP_2)$ is specified; for each pair of morphisms, for example $g \in \mathrm{Hom}(PP_1, PP_2)$ and $f \in \mathrm{Hom}(PP_2, PP_3)$, their composition $f \circ g \in \mathrm{Hom}(PP_1, PP_3)$ is defined. That is, the processes of the enterprise form a category of sets. Similar reasoning can be carried out for all kinds of processes described in ISO/IEC 15288. During its life, the system passes through certain stages, which determine the structure of the life-cycle model of the object of production [2]. A stage is a period within the life cycle of a system that relates to the state of the system description or to the system itself. Stages are delimited by significant changes in the system, corresponding to the completion of major phases of its development. A life-cycle model may include one or more stage models and is assembled as a sequence of stages that may overlap or be repeated, depending on the scope, size and complexity of the system. The life-cycle stages form the structural basis for detailed modelling of system life cycles using the typical processes of the life cycle. Each stage reflects the progress and achievement of the planned milestones of system development throughout the life cycle and gives rise to important decisions regarding inputs and outputs. These decisions are used by organizations to account for the uncertainties and risks directly related to cost, schedule and functionality when creating or using the system. Thus, the stages provide the organization with a structure within which enterprise management has a high ability to review and control project and technical processes. Summarizing the above formalization, we can conclude that there is a mapping of categories of sets that preserves the internal structure of these categories.
That is, the relationship between life-cycle stages and the respective life-cycle processes, in light of the provisions of ISO/IEC 15288, is equivalent to functors that assign to an object of one category an object of another category. For example, $F: PPR \to PP$ assigns to each project process an organizing process $F(PPR)$ and, accordingly, to each morphism $f \in \mathrm{Hom}(PPR_1, PPR_2)$ a morphism $F(f): F(PPR_1) \to F(PPR_2)$, such that $F(f \circ g) = F(f) \circ F(g)$.
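The composition and functor laws used above can be checked mechanically. In the toy sketch below, morphisms are plain Python functions, composition is function composition, and a "functor" maps objects and morphisms while preserving composition; all concrete names are invented for illustration and are not part of ISO/IEC 15288.

```python
# A toy check of the categorical machinery: morphisms as functions,
# composition as function composition, and a functor F preserving it.

def compose(g, f):
    """Composition g . f: from f in Hom(X, Y) and g in Hom(Y, Z) to Hom(X, Z)."""
    return lambda x: g(f(x))

# Two stand-in morphisms between process states (lists of completed steps)
f = lambda s: s + ["purchase agreed"]     # f in Hom(PS1, PS2)
g = lambda s: s + ["delivery performed"]  # g in Hom(PS2, PS3)

# A toy functor: objects map to step counts, morphisms to additions
F_obj = len

def F_mor(h):
    k = len(h([]))          # how many steps the morphism h adds
    return lambda n: n + k

# Functor law F(g . f) = F(g) . F(f), checked on a sample object
lhs = F_mor(compose(g, f))(F_obj([]))
rhs = compose(F_mor(g), F_mor(f))(F_obj([]))
```

Both sides evaluate to the same value, which is exactly the structure-preservation property the text appeals to.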

The characteristics of the process can be represented as a set of pairs $\langle A_i, D_i \rangle$, $i = 1,\ldots,n$, where $A_i$ is a nonempty set of names of properties (attributes) and $D_i$ is the set of values of the corresponding attributes. The values are broken into classes of objects that interact with each other on the basis of rules; let $\Omega$ denote the set of these rules. On the set of attributes, relations $G = \langle G', G'' \rangle$ can be established, divided into quantitative ($G'$) and qualitative ($G''$); for the latter there is a set of assessment types, for example T = {"projects are moving towards achieving our goals", "projects are conducted in accordance with the relevant directive", "projects are implemented in accordance with the plans", "projects remain viable"}. Then any valuation rule can be represented by a tuple $\langle \Omega, G, T \rangle$. Thus, the set of informational characteristics of the process $\langle A_i, D_i \rangle$, $i = 1,\ldots,n$, the established relations $G$ and the valuation rules $\langle \Omega, G, T \rangle$ can be used for a formal definition of the process as the following tuple of components:


$Z = \langle \langle A_i, D_i \rangle, \{G', G''\}, T, \Omega \rangle$, $i \in N$. (3)

The conditions developed for each stage of the life cycle can be formalized and presented in the form of a matrix (an analog of the Zachman matrix), in which the columns indicate the stages of the life cycle of the system (product) and the rows the business processes providing those stages. The cells of the matrix are filled with the models of the relevant business processes from the set of models defined by the formulas above. The matrix thus obtained determines the architecture of a formal semantic model of the organization's activities in cooperation with the surrounding business environment. It can be noted that each row and each column of such a matrix, owing to their formal-logical properties, form categories and can be defined as tuples of sets: the object names of the row categories $X = \{x_i \mid i = 1,\ldots,n\}$ and of the column categories $Y = \{y_i \mid i = 1,\ldots,n\}$, and the names of the relations $R = \{r_i \mid i = 1,\ldots,n\}$ that can join these objects. It is easy to show that between two such categories, at the intersection of a column and a row, there is a mapping preserving the structure of the categories, i.e. a functor, and that the intersection of a row category and a column category forms a cell category. Representing the model in set-theoretic form allows the use of an iterative model, which involves iterations at all stages of the life cycle, enabling the registration and correction of the required indicators at the beginning of each iteration. Combining the set-theoretic view of the provisions of this standard with the Deming cycle as the principle of organization opens new perspectives for the effective use of the methodology of systems engineering.

References
1. GOST R ISO/IEC 15288-2005. Information technology. Systems engineering. System life cycle processes.
2. Antonov V. V., Kulikov G. G., Antonov D. V. Theoretical and Applied Aspects of Creation of Models of Information Systems. LAP LAMBERT Academic Publishing GmbH & Co. KG, Germany, 2011.


Intelligent Modelling of Manufacturing Supervisory Systems Using Fuzzy Cognitive Maps

P. P. Groumpos University of Patras, Department of Electrical and Computer Engineering, Rion 26500, Patras, Greece (Tel: +30-61-996449; e-mail: [email protected])

Keywords: Fuzzy Cognitive Maps, Modeling, Supervisory Control Systems, Intelligent Manufacturing System

Abstract: The challenging problem of modelling, analyzing and controlling complex systems is investigated using Fuzzy Cognitive Map’s theories. A mathematical description of FCM model is presented; new advanced construction methods and an algorithm are developed and extensively examined. The issue of modelling the supervisor of large complex manufacturing systems is addressed and is modelled using FCM theories. A manufacturing example from the pharmaceutical industries is used to prove the usefulness of the proposed method. Simulation results are given and discussed extensively.

1. INTRODUCTION Most of today's systems are characterized as complex systems with high dimension and a variety of variables and factors. It is widely recognized that conventional methods of modelling and controlling modern systems have contributed a great deal to research and to the solution of many challenging control problems. However, their contribution to the solution of the increasingly difficult problems associated with complex dynamical systems has proved to be limited. New methods have been proposed for complex systems that utilize existing knowledge and human experience and that have learning capabilities and advanced characteristics such as failure detection and identification qualities. This is especially the case with today's manufacturing systems and in particular the Factory of the Future (FOF). In this paper Fuzzy Cognitive Maps (FCM) are proposed for modeling and controlling complex systems. An FCM draws a causal picture to represent the model and the behavior of a system. The concepts of an FCM interact according to imprecise rules, and the operations of complex systems are thereby simulated. FCMs are a symbolic representation for the description and modeling of complex systems [1,2,6]. They consist of concepts that illustrate different aspects of the behavior of the system, and these concepts interact with each other, showing the dynamics of the system. Human experience and knowledge of the operation of the system is used to develop an FCM, as a result of the method by which it is constructed, i.e. using human experts who know the operation of the system and its behavior in different circumstances. These experts have accumulated experience over many years of work on many manufacturing systems, yet surprisingly their experience has never been mathematically expressed and used to solve the many difficult problems of manufacturing systems. FCMs seem to be the answer to this paradox, and this will be demonstrated in this paper.
The objective of this paper is to focus on the construction and use of FCMs in modeling and analyzing complex manufacturing systems. It will be shown that FCMs are useful for exploiting the knowledge and experience that humans have accumulated over years of operating a complex manufacturing system. Such methodologies are crude analogs of approaches that exist in human and animal systems and have their origins in behavioral phenomena related to these beings. An FCM thus represents knowledge in a symbolic manner and relates states, variables, events and inputs in a manner analogous to living beings. This methodology can contribute to engineers' intention to construct intelligent systems, since the more intelligent a system becomes, the more symbolic and fuzzy a representation it utilizes [3,4]. This


attribute of intelligence seems to be more and more apparent and needed in today's manufacturing systems [10].

2. BASIC THEORIES OF FUZZY COGNITIVE MAPS Figure 1 illustrates a simple FCM consisting of five (5) concepts and nine (9) weighted arcs. A Fuzzy Cognitive Map models a system as a one-layer network in which nodes can be assigned concept meanings and the interconnection weights represent causal relationships among concepts.

Fig 1. A simple FCM drawing

Relationships between concepts have three possible types: positive causality ($W_{ij} > 0$), negative causality ($W_{ij} < 0$) and no relationship ($W_{ij} = 0$). The value of $W_{ij}$ indicates how strongly concept $C_i$ influences concept $C_j$. A new formulation for calculating the values of the concepts of an FCM at each time step is proposed:

$A_i(t) = f\left( A_i(t-1) + \sum_{j=1,\, j \neq i}^{n} A_j(t-1)\, W_{ji} \right)$ (1)

The complete analysis of FCM and the methods used for constructing them are presented in [2]. The full chapter [2] can be obtained from the author by asking for it at [email protected]
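Update rule (1) can be implemented directly. The paper does not fix the squashing function f at this point, so the sketch below assumes the common sigmoid choice with steepness parameter lambda; the 3-concept example and its weights are invented for illustration.

```python
# A direct implementation of FCM update rule (1), assuming the sigmoid
# squashing function f(x) = 1 / (1 + e^(-lambda * x)).
import math

def fcm_step(A, W, lam=1.0):
    """One FCM iteration: A_i(t) = f(A_i(t-1) + sum_{j != i} A_j(t-1) * W[j][i])."""
    f = lambda x: 1.0 / (1.0 + math.exp(-lam * x))
    n = len(A)
    return [
        f(A[i] + sum(A[j] * W[j][i] for j in range(n) if j != i))
        for i in range(n)
    ]

# Tiny 3-concept example with assumed weights; W[j][i] is the influence of
# concept C_j on concept C_i.
W = [[0.0, 0.6, 0.0],
     [0.0, 0.0, -0.4],
     [0.3, 0.0, 0.0]]
A = fcm_step([0.5, 0.2, 0.8], W)
```

Because the sigmoid squashes every activation into (0, 1), repeated application of `fcm_step` keeps all concept values bounded, which is what allows the iterations in Section 3 to settle.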

2.1 Learning methods for Fuzzy Cognitive Maps The construction of an FCM is based on experts who determine the concepts and the weighted interconnections among them. This methodology may lead to a distorted model of the system, because the human factor is not always reliable. In order to refine the model of the system, learning rules are used to adjust the weights of the FCM interconnections. The differential Hebbian learning rule has been proposed for the training of a specific type of FCM [9]. The differential Hebbian learning law adjusts the weight of the interconnection between two concepts: it grows a positive edge between two concepts if they both increase or both decrease, and it grows a negative edge if the values of the concepts move in opposite directions. Adapting the idea of the differential Hebbian learning rule to the framework of Fuzzy Cognitive Maps, the following rule is proposed to calculate the derivative of the weight between two concepts:

$W'_{ji} = -W_{ji} + s(A_j^{new})\, s(A_i^{old}) + s'(A_j^{new})\, s'(A_i^{old})$ (2)

where $s(x) = \dfrac{1}{1 + e^{-\lambda x}}$. Appropriate learning rules for FCM need more investigation. These rules will give FCMs useful characteristics such as the ability to learn arbitrary non-linear mappings, the capability to generalize situations for adaptability and flexibility, and fault tolerance [4-5]. This is a challenging area for future research investigations.
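Rule (2) translates into a few lines of code. The printed formula is partly garbled in the source, so treat the exact pairing of the new and old activations across concepts j and i below as an assumption; s is taken to be the sigmoid with steepness lambda and s' its derivative.

```python
# A sketch of the differential Hebbian weight adjustment (2), under the
# assumed reading W'_ji = -W_ji + s(Aj_new) s(Ai_old) + s'(Aj_new) s'(Ai_old).
import math

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))

def dsigmoid(x, lam=1.0):
    # Derivative of the sigmoid: lam * s(x) * (1 - s(x))
    s = sigmoid(x, lam)
    return lam * s * (1.0 - s)

def weight_derivative(w_ji, aj_new, ai_old, lam=1.0):
    """Rate of change of the weight from concept C_j to concept C_i."""
    return (-w_ji
            + sigmoid(aj_new, lam) * sigmoid(ai_old, lam)
            + dsigmoid(aj_new, lam) * dsigmoid(ai_old, lam))
```

The leading -W_ji term makes the rule a decay toward the correlation of the squashed activations, so weights unsupported by the data fade over repeated updates.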

3. MODELING SUPERVISORS OF COMPLEX SYSTEMS WITH A FCM A defining characteristic of complex systems is their tendency to self-organize globally as a result of many local interactions. In other words, organization occurs without any central organizing structure or entity. Therefore, conventional techniques cannot easily handle this kind


of systems. The application of FCM to the modelling of the supervisor of complex systems seems to be a promising methodology. The hierarchical structure of fig. 2 is proposed to model large-scale complex manufacturing systems. This is referred to as a two-level hierarchical structural modelling approach [8]. At the lower level of the structure lies the plant, which is controlled through conventional controllers. These controllers perform the usual tasks and reflect the model of the plant during normal operating conditions, using conventional control techniques. The supervisor of the system is modelled as an FCM. The first attempt to model a supervisor of a large-scale complex system utilizing the concept of FCM was made early in 2000, under the leadership of the author of this paper, by a team of the Laboratory for Automation and Robotics [7-8]. There is an amount of information that must pass from the lower level to the supervisor FCM, so an interface is needed that will process, transform and communicate information from the lower local controllers to the FCM on the upper level. As the FCM iterates using equation (1), the concepts of the FCM take new values that must be transmitted to the conventional controllers; here the interface works in the opposite direction. In this way, changes in one or more concepts of the FCM can mean a change in the value of one or more elements of the system.
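The two-way interface described above can be sketched minimally: measurements flow up as normalized concept values in [0, 1], and concept values flow back down as setpoints. The signal names and ranges below are hypothetical, chosen only to make the mapping concrete.

```python
# A minimal sketch of the interface between the conventional controllers
# (lower level) and the supervisor FCM (upper level). Signal names and
# ranges are assumed placeholders.

RANGES = {"temperature": (20.0, 120.0), "pressure": (1.0, 8.0)}

def to_concept(signal, value):
    """Lower level -> FCM: normalize a plant measurement into [0, 1]."""
    lo, hi = RANGES[signal]
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def to_setpoint(signal, concept_value):
    """FCM -> lower level: map a concept value back to the signal range."""
    lo, hi = RANGES[signal]
    return lo + concept_value * (hi - lo)
```

Clamping in `to_concept` keeps out-of-range measurements from producing concept activations outside [0, 1], which the FCM update rule assumes.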

Fig 2. An FCM – supervisor of a complex system

The model of the FCM can be expanded to include advanced features such as fault diagnosis, effect analysis, or planning and decision-making characteristics. Some of the concepts of the FCM could stand for device failure modes, their effects and causes, a subsystem's normal or irregular operation, the functionality of the system, the failures, the system mission, and the ultimate function of the overall system. A very interesting quality of FCMs is their ability in prediction and in redesign of the system. This can help the designer evaluate what would happen if some parameters of the system were altered. Another useful characteristic of the FCM is its efficiency in prediction, especially in predicting what would be the result of a scenario or what the consequences for the whole process would be if a state changed suddenly. This feature is especially useful for designers of systems to observe the influence of each device separately. With FCM, the knowledge and experience of the human operator is exploited. The human coordinator of a system should know the operation of the critical aspects of the whole system and uses a mental model consisting of concepts to describe it. He relates the operation of one subsystem, or of two different subsystems, to a concept or to a concept that stands for a specific procedure. The FCM models the supervisor and consists of concepts that may represent the irregular operation of some elements of the system, failure mode variables, failure effect variables, failure cause variables, severity of the effect, or design variables. Moreover, this FCM will include concepts for the determination of a specific operation of the system, and it will be used for strategic planning and decision analysis. The supervisor FCM will represent vital


components of the plant and will reflect the operational state of the overall plant. The development of this FCM requires the integration of several experts' opinions in order to construct an FCM with diagnostic and predictive capabilities. We need to point out here that conventional control methods cannot be used to model, in an efficient and working way, the supervisor of a complex manufacturing system having all the characteristics outlined here. This approach is best illustrated with the following example. Five experts (N=5) working in a pharmaceutical plant, which produces a number of drugs from chemical substances, bacteria, yeast, insects, botanics, alcohols etc., were asked to develop Fuzzy Cognitive Maps. Then, using the methodology outlined in this paper, an FCM is developed as a supervisor of the whole plant, which describes the operation of a process, the final product of the process, and the different aspects that determine the quality of the product. The experts developed the Fuzzy Cognitive Map depicted in figure 3. They decided that the most important concept is the quality of the produced product (a different drug each time). They developed an FCM around the main concept C1, which represents the "product degradation" of the final product. Then, the experts determined other concepts of the real system that influence this concept C1. The experts recommended the following 10 concepts: C1 "product-drug degradation", C2 "internal process variation", C3 "poor quality of the input material", C4 "wear and tear of machine parts", C5 "technical malfunction", C6 "poor operator settings", C7 "reschedule the process", C8 "machine shut down", C9 "maintenance" and C10 "research and innovation".

Fig 3. Proposed Supervisor – Fuzzy Cognitive Map

Then, the interrelations among the concepts were determined by the following logical procedure, described to us by the experts of the given specific problem. The value of concept C1 ("degradation of product") increases the need to "reschedule the process", which is represented as concept C7. Concept C7 decreases the values of concept C6 "poor operator settings" and concept C2 "internal process variation". This process variation is very important for the degradation of the final product. Concept C4, which stands for "wear and tear of machine parts", has a positive influence on concept C5 "technical malfunction". Concept C5 "technical malfunction" increases the value of concept C9 "maintenance" and of concept C8 "machine shut down". Concept C9 "maintenance" decreases the values of the following concepts: concept C5 "technical malfunction", concept C8 "machine shut down" and concept C4 "wear and tear of machine parts". Concept C8 "machine shut down" increases the value of concept C7 "reschedule the process" and the value of concept C9 "maintenance". Finally, concept C10 "research and innovation", introduced into the manufacturing process, increases the values of concepts C2 "internal process variation", C6 "poor operator settings" and C7 "reschedule the process". Then, the five (N=5) experts were asked to assign values to the interconnections among the concepts. Five FCMs were constructed, with the same 10 concepts but with 5 different weights on each interconnection. Each expert worked independently and defined his/her own FCM. Then


an algorithm outlined in [2] was implemented and an augmented Fuzzy Cognitive Map was constructed, which is depicted in figure 4 and used as the supervisor of the plant. The experts decided on the values of the weighted arcs, Wij, which are given in figure 4.

Fig 4. Supervisor – Fuzzy Cognitive Map with weights

Table 1. The values of concepts of the supervisor-FCM for 8 simulation steps

Step  C1    C2    C3    C4    C5    C6    C7    C8    C9    C10
1     0.15  0.48  0.20  0.20  0.15  0.40  0.04  0.09  0.60  0.01
2     0.65  0.50  0.30  0.40  0.45  0.50  0.51  0.40  0.53  0.30
3     0.74  0.50  0.30  0.41  0.50  0.50  0.54  0.48  0.62  0.40
4     0.73  0.49  0.20  0.40  0.51  0.50  0.54  0.47  0.64  0.30
5     0.73  0.48  0.20  0.39  0.50  0.40  0.55  0.47  0.64  0.35
6     0.72  0.48  0.20  0.39  0.50  0.40  0.55  0.47  0.64  0.35
7     0.72  0.48  0.20  0.38  0.50  0.40  0.55  0.47  0.64  0.35
8     0.72  0.48  0.20  0.38  0.50  0.40  0.55  0.47  0.64  0.35
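The simulation behind Table 1 follows the standard FCM recursion A_i(t+1) = f(A_i(t) + Σ_j W_ji A_j(t)) with a sigmoid threshold function f, as in [1, 2]. A minimal Python sketch is given below; the initial concept values are taken from step 1 of Table 1, but the weight matrix is hypothetical (filled in only from the signs described in the text, since the figure 4 weights are not reproduced here), so it will not reproduce the table's numbers exactly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_fcm(A0, W, steps=8):
    """Iterate A(t+1) = f(A(t) + W^T A(t)); W[j, i] is the weight of
    the arc from concept C(j+1) to concept C(i+1)."""
    history = [np.asarray(A0, dtype=float)]
    for _ in range(steps):
        A = history[-1]
        history.append(sigmoid(A + W.T @ A))
    return history

A0 = [0.15, 0.48, 0.20, 0.20, 0.15, 0.40, 0.04, 0.09, 0.60, 0.01]
W = np.zeros((10, 10))                         # hypothetical magnitudes
W[0, 6] = 0.8                                  # C1 increases C7
W[6, 5], W[6, 1] = -0.5, -0.4                  # C7 decreases C6, C2
W[3, 4] = 0.7                                  # C4 increases C5
W[4, 8], W[4, 7] = 0.6, 0.6                    # C5 increases C9, C8
W[8, 4], W[8, 7], W[8, 3] = -0.6, -0.5, -0.4   # C9 decreases C5, C8, C4
W[7, 6], W[7, 8] = 0.5, 0.5                    # C8 increases C7, C9
W[9, 1], W[9, 5], W[9, 6] = 0.3, 0.3, 0.3      # C10 increases C2, C6, C7
history = run_fcm(A0, W, steps=8)
```

Because the sigmoid squashes every update into (0, 1), the concept values stay inside the fuzzy interval at every step, as in Table 1.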

4. FUTURE RESEARCH DIRECTIONS
FCMs have been applied to a variety of scientific areas with many interesting and very useful results [2-4], [6, 9], to mention just a few. The application of FCMs may contribute to the effort for more intelligent control methods and for the development of autonomous systems in Intelligent Manufacturing Systems (IMS) [10]. Applying the proposed algorithm to a number of challenging problems of modeling two-level hierarchical structures while seeking to utilize a minimum number of experts is an open research topic. Learning techniques for FCMs [9] are another challenging research area, as is the question of how Decision Analysis (DA) methods could be combined with FCM theory to create a new, more powerful Decision Support System (DSS). FCMs have not yet been used for Failure Detection, Isolation and Accommodation methods to a satisfactory degree; the FCM theory presented here, with the illustrative example, could point to some research directions. Finally, it is of interest to raise a fundamental question: what is the best way to approach the difficult issue of modeling, analyzing and controlling today's complex manufacturing systems: the old classical methods, or the new advanced techniques that combine a number of interrelated and exciting scientific fields, such as neural networks, fuzzy systems, evolutionary control and FCMs? This issue has been the subject of extensive investigation during the last 10-15 years and will continue to be a challenging one for the next 10-15 years.

5. CONCLUSIONS
In this paper the modeling and analysis of manufacturing systems have been investigated using the exciting and promising models of FCMs. Fundamental mathematical theories of FCMs were reviewed and extensively analyzed. A new algorithm was developed and used to demonstrate the usefulness of the FCM approach in modeling manufacturing systems. FCM theory, a new soft computing approach, utilizes existing experience in the operation of a complex system and combines fuzzy logic and neural networks. For such complex systems it is extremely difficult to describe the entire system with a precise mathematical model. Thus, it is simpler and more useful to divide the whole plant into virtual parts and to construct an FCM for each part. The experience of different specialists, who can easily judge the variables and states of a small process, has been utilized; the final system is then constructed by integrating the different FCMs into an augmented one. FCMs offer the opportunity to produce better knowledge-based systems applications, addressing the need to handle the uncertainties and inaccuracies associated with real manufacturing problems. The issue of modeling the supervisor of a manufacturing system was addressed and analyzed. Then, using the theory of FCMs, it was modeled in a hierarchical structure where the plant was controlled using conventional controllers. A simple example from the manufacturing field was given, clearly demonstrating the usefulness of the proposed approach. The supervisor was modeled with ten (10) concepts and twenty-two (22) weighted interconnecting arcs. The concepts of the supervisor FCM were very interesting features of a manufacturing plant, such as machine shut down, poor operator settings, poor quality input materials, technical malfunction, maintenance and others.
The inclusion of all these features as concepts in a supervisor FCM and the ability to run extensive simulations with real data can prove very useful to plant management. Another very interesting feature of the proposed approach is the ability to use several expert opinions, giving the opportunity to predict the degradation of a product. The simulation studies show that after only a few recursive steps the FCM achieves a diagnosis for the desired product. New methods for developing supervisors for manufacturing plants using the theory of FCMs are needed and could be a future research direction in this exciting field.

References
1. Kosko, B. "Fuzzy Cognitive Maps". Intern. Journal of Man-Machine Studies. 1986; 24:65-75.
2. Groumpos, P. P. "Fuzzy Cognitive Maps: Basic Theories and their Application to Complex Systems". In: Glykas M. (ed.) Fuzzy Cognitive Maps: Advances in Theory, Methodologies, Tools and Applications. Springer-Verlag, Berlin, Heidelberg. 2010, pp. 1-22.
3. Salmeron, J. L. "Augmented FCMs for Modeling LMC for Critical Success Factors". Knowledge-Based Systems. 2009; 22(4): 275-278.
4. Stylios, C. D., Georgopoulos, V. C., & Groumpos, P. P. "Introducing the Theory of Fuzzy Cognitive Maps in Distributed Systems". Proceedings of the 12th IEEE Intern. Symposium on Intelligent Control, Istanbul, Turkey, 1997, pp. 55-60.
5. Papageorgiou, E. I., Spyridonos, P., Ravazoula, P., Stylios, C. D., Groumpos, P. P., Nikiforidis, G. "Advanced Soft Computing Diagnosis Method for Tumor Grading". Artif Intell Med. 2006; 36:59-70.
6. Dickerson, J. A., Kosko, B. "Virtual Worlds as Fuzzy Cognitive Maps". Presence. 1994; 3:173-189.
7. Groumpos, P. P., & Pagalos, A. V. "About a structural two level controlling scheme for large scale systems". Computers in Industry. 1998; 36:147-154.
8. Groumpos, P. P., & Stylios, C. D. "Modeling Supervisory Control Systems using Fuzzy Cognitive Maps". Chaos, Solitons and Fractals. 2000; 11(1): 329-336.

9. Papageorgiou, E. I., Stylios, C. D., Groumpos, P. P. "Active Hebbian learning algorithm to train fuzzy cognitive maps". International Journal of Approximate Reasoning. 2004; 37:219-249.
10. Groumpos, P. P. "The Challenge of Intelligent Manufacturing Systems (IMS) - The European IMS Information Event". Journal of Intelligent Manufacturing. 1995; 6(1): 66-77.


1st International DSPTech Conference on Technologies of Digital Signal Processing and Storing

The Design of Neural Network Algorithms for Processing Sensor Signals of Aviation Gas Turbine Engine on FPGA

S. V. Zhernakov1, A. T. Gilmanshin2
1 Department of Aviation Instrument Engineering, Ufa State Aviation Technical University, Ufa, Russia, e-mail: [email protected]
2 Department of Aviation Instrument Engineering, Ufa State Aviation Technical University, Ufa, Russia, e-mail: [email protected]

Keywords: neural network, FPGA, mathematical model, MATLAB, Simulink Coder

Abstract: The paper deals with the implementation of intelligent neural network algorithms on programmable logic (FPGA) using the built-in utilities of the MATLAB environment and the development and debugging tools for programmable logic ICs.

1. INTRODUCTION
The scope of application of intelligent algorithms for solving problems of control and diagnosis of complex dynamic objects, including aircraft gas turbine engines, is expanding. Such algorithms have higher robustness to uncertainty factors, which makes them more efficient than classical algorithms in noisy environments or with unstable measurement channels [1], [2].

2. ONBOARD USE OF NEURAL NETWORKS
Modern digital control systems of aviation gas turbine engines realize engine management in all operational modes, ensure stable operation of the engine in transient conditions and prevent a variety of emergency situations. Functionally, such a system consists of three main modules: the monitoring unit for the measured parameters, the on-board monitoring and diagnostic system, and the automatic control system [2]. To solve the problem of control and diagnostics of an aviation gas turbine engine, the FDI (Fault Detection and Identification) method can be used, which is based on a neural network mathematical model of the engine and a neuro-fuzzy classifier [2]. This system can detect and classify non-standard operating modes of the gas turbine engine, measuring channels and actuators in onboard conditions. The structure of this system is shown in Figure 1. The digital hardware module of an aircraft gas turbine engine control system usually consists of the following elements: one or more microprocessors (16- or 32-bit), ROM for parameters and for the program, RAM, interface controllers, and an FPGA, which implements the auxiliary logic modules.
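The FDI scheme described above compares the measured engine output Y with the neural network model output Ym and feeds the residual ε to the neuro-fuzzy classifier. A minimal hedged sketch in Python, where a fixed threshold stands in for the classifier and the signal values are purely illustrative:

```python
import numpy as np

def fdi_step(y_engine, y_model, threshold=0.1):
    """Compute the residual eps = |Y - Ym| and flag samples where it
    exceeds a threshold (a crude stand-in for the neuro-fuzzy failure
    identifier F of Figure 1)."""
    eps = np.abs(np.asarray(y_engine) - np.asarray(y_model))
    return eps, eps > threshold

# Illustrative normalized sensor data: the model tracks the engine
# everywhere except the anomalous sample at index 5.
y_engine = np.array([1.00, 1.02, 1.01, 0.99, 1.00, 1.45, 1.02, 1.00])
y_model  = np.array([1.00, 1.01, 1.00, 1.00, 1.01, 1.00, 1.01, 1.00])
eps, fault = fdi_step(y_engine, y_model)
```

In the real system the classifier would not only detect but also identify the fault type (sensor channel, actuator, or engine mode); the threshold here only illustrates the detection half of FDI.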


System Engineering

Fig 1. Structure of the FDI system: control input U, gas turbine engine (output Y), neural network model (output Ym), residual ε, neural-fuzzy failure identifier (output F)

The implementation of these neural network algorithms requires significant computational resources, the amount of which depends on the required accuracy and speed of the calculation algorithms. Either the main control unit microprocessor or the FPGA can be used as the calculator for these algorithms. Using the FPGA is more favorable in this case, because individual artificial neurons can be implemented in the FPGA configuration as separate units that operate in parallel, which can significantly increase the execution speed of the algorithm compared with a microprocessor, where the outputs of all artificial neurons are calculated sequentially. Furthermore, this simplifies the allocation of the main microprocessor's computation time.

3. ALGORITHMS REALIZATION
As indicated in [2], the MATLAB environment is suitable for the development, debugging and verification of neural network algorithms. In addition, the MATLAB tools allow VHDL or Verilog code to be created from the description of the developed algorithms; this code is then included in the FPGA configuration created with the development environment. For the design and debugging of a neural network, the Neural Network Toolbox (NNTool) package, part of MATLAB, is used [5]. The first step is to optimize the structure of the neural network. The optimization criteria are the minimum mean square error and the minimum execution time. Initially the type of neural network is selected based on the problem to be solved, for example a recurrent NARX network for the mathematical model of a gas turbine engine. Using the NNTool package a neural network is created: the type of network is selected with the required number of inputs, outputs and feedbacks, different numbers of neurons in the inner layers, activation functions, and so on. Next, by training the networks on the experimental data and simulating their work, an optimal network architecture and an optimal network training algorithm are found.
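The structure-optimization loop described above (train candidate structures, keep the one with minimum mean square error) can be sketched outside MATLAB as well. The sketch below is a simplified stand-in, not the authors' NNTool/NARX workflow: a one-hidden-layer network with random hidden weights and a least-squares output layer replaces full training, and the target signal is illustrative.

```python
import numpy as np

def train_and_score(x, y, n_hidden, rng):
    """Fit a one-hidden-layer tanh network: random hidden weights,
    output layer solved by least squares; return the training MSE."""
    W = rng.normal(size=(1, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(x[:, None] @ W + b)        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.mean((H @ beta - y) ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 200)
y = np.sin(2.0 * x)                        # illustrative target signal
candidates = [2, 5, 10, 20]                # hidden-layer sizes to try
errors = {n: train_and_score(x, y, n, rng) for n in candidates}
best = min(errors, key=errors.get)         # minimum-MSE structure
```

A full implementation would also score execution time, as the paper's second criterion requires, and would use validation data rather than the training MSE.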



Fig 2. Creating a neural network

There are two options for the onboard implementation of neural network algorithms. In the first variant, the training of the neural networks is performed only in ground conditions with service software, and only the coefficients of the neural network are transferred to the processor module of the gas turbine engine control system. In the second variant, additional training of the network with onboard flight data is possible; in that case a learning algorithm must also be implemented in the processor module. If the FPGA located in the processor module is used for the implementation of the neural network algorithm, the algorithm should be implemented as code in the Verilog or VHDL language and then integrated into the overall configuration of the FPGA as a separate module. To generate the VHDL code it is convenient to use the built-in MATLAB tool HDL Coder [5]. To do this it is necessary to transform the neural network from NNTool into a Simulink model using the gensim command. In the Simulink environment the neural network is shown as a system of blocks, each of which performs its function; each block, in turn, is presented in the form of blocks of a lower level.
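When only the trained coefficients are transferred to the processor module (the first variant above), they are typically stored in fixed-point form for the FPGA. A hedged sketch of such Q-format quantization; the word length of 12 fractional bits and the coefficient values are illustrative, not taken from the paper:

```python
import numpy as np

def quantize(weights, frac_bits=12):
    """Convert float coefficients to signed fixed-point integers
    (Q-format with `frac_bits` fractional bits), as they might be
    stored in FPGA memory, and return the worst-case rounding error."""
    scale = 1 << frac_bits
    q = np.round(np.asarray(weights) * scale).astype(np.int32)
    max_err = float(np.max(np.abs(q / scale - np.asarray(weights))))
    return q, max_err

w = np.array([0.731, -0.258, 0.044, -0.903])   # illustrative coefficients
q, max_err = quantize(w, frac_bits=12)
# rounding error is bounded by half an LSB, i.e. 2**-13
```

Choosing the number of fractional bits trades FPGA memory and multiplier width against this rounding error, which accumulates through the network layers.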



Fig 3. The general structure of a neural network in Simulink

Fig 4. Neural network inner layer

Before generating the VHDL code, the properties of some blocks have to be changed. To do this, right-click on the block and select Library link -> Disable link in the menu. For example, the Product block does not support more than two inputs, and the property "Saturate on integer overflow" must be selected.

Fig 5. Neural network output layer



Fig 6. HDL Code generation

To create VHDL code from the Simulink model, the HDL Workflow Advisor utility has to be run. If all model blocks are supported, the operation is performed to the end and VHDL code is created in the form of files with the .vhd extension. Subsequently, the code may be embedded into the general configuration of the FPGA.

4. CONCLUSION
This work has shown the process of creating VHDL code for a neural network algorithm using the built-in tools of the MATLAB environment. The use of these tools accelerates the software development process for control systems of aviation gas turbine engines, reduces the number of operations for transferring data between the various development tools, and minimizes the probability of errors.

5. REFERENCES
1. Vasilyev V. I., Zhernakov S. V., Frid A. I. "Neyrocompyutery v aviatsii (samolyoty). Kniga 14: Uchebnoye posobiye dlya vuzov [Neurocomputers in aviation (airplanes). Book 14: Tutorial for higher education]" / edited by Vasilyev V. I., Ilyasov B. G., Kusimov S. T. Moscow: Radiotekhnika, 2003. 496 p. (in Russian).
2. Simon Haykin. "Neural Networks: A Comprehensive Foundation". Macmillan, 1994.
3. "Intellektualniye systemy upravleniya i kontrolya gazoturbinnyh dvigateley [Intellectual systems of gas turbine engine control and check]" / edited by S. T. Kusimov, B. G. Ilyasov, V. I. Vasilyev. Moscow: Mashinostroyeniye, 2008. 549 p. (in Russian).
4. Shtovba S. D. "Proektirovanie nechetkih system sredstvami MATLAB [Fuzzy system projecting using MATLAB facilities]". Moscow: Telecom, 2007. 288 p. (in Russian).
5. http://www.mathworks.com/



BIM Technologies in Interdisciplinary Educational Program «Applied Informatics in Architecture»

Galina Zakharova, Alexandr Krivonogov
Department of Applied Informatics, Ural State Academy of Architecture and Arts, Ekaterinburg, Russia
[email protected]

Keywords: BIM – Building Information Modeling, BIM-technologies, CAD-systems, Autodesk Revit, information technologies in architectural design, BIM in education

Abstract: The article shows that digital modeling of buildings is a modern, efficient technology in architectural design. Russia lags behind other countries, but in 2014 the introduction of BIM technology into design practice was fixed at the legislative level. For more extensive use of BIM technology it is necessary to develop educational standards for teaching it in universities. The Department of Applied Informatics of the Ural State Academy of Architecture and Arts prepares multi-disciplinary IT experts in architecture and has been teaching students BIM technology since 2009. The results and examples of the application of BIM technology in graduation projects related to real architectural and construction work on industrial and public buildings are shown.

1. INTRODUCTION
The process of informatization currently affects almost all spheres of human activity. The creative character of architectural design should not exclude, but on the contrary should make full use of, the new modern tools of information support in the design process. Design is moving to a new stage: it is considered from a systems position as part of the entire life cycle of the architectural and construction project, including design, construction, further operation and demolition. Design automation at its different stages is now the norm and involves the use of CAD systems to increase the efficiency of the process. Many organizations have long and successfully produced design documentation using automated systems, but often separate software packages were applied to solve individual tasks. Today the problem is the integration of these processes into a single information environment, because this gives a significant increase in the efficiency of the design and construction process in general. One integrated building information model provides a comprehensive description of the object and coordinated work of all groups of participants in the development and implementation of the project. Such an information environment is realized by means of modern software within BIM technology (Building Information Modeling), based on a single digital information model of the building. The digital model provides not only a three-dimensional image but also all the information on the building: architectural and design elements, engineering equipment, information for drawing up cost estimates for construction, and the model for the future exploitation of the object. BIM technology is implemented through software systems such as Autodesk Revit, Graphisoft ArchiCAD, Nemetschek Allplan, Bentley Architecture, Tekla Structures, etc. The most widespread platform in universities is Autodesk Revit; the company Autodesk provides a free license for students.
BIM technology in recent years has become dominant in world design and construction practice, replacing all previously used design methods. In our country it has only started to be introduced; for example, the gap with the US, UK and Australia in the construction and use of BIM models is more than 10 years. But the situation is changing, and BIM technology is becoming more widespread. In March 2013 the group BIM IPD (Integrated Project Delivery) [1] began working in Moscow, formed to

promote the ideas of information modeling and integrated implementation of projects in the Russian construction industry. The group has set a global task: for all projects financed from the state budget, to make their performance in BIM technology a mandatory requirement. This will ensure the transparency and validity of the cost of the work; after the delivery of the object, the state customer will receive not only the object itself but also its model for use in operation. According to preliminary estimates, this approach would lead to significant budgetary savings. On 29 December 2014 order No. 926 of the Ministry of Construction of the Russian Federation "On approval of the plan of phased introduction of BIM in the field of industrial and civil construction" was issued [2]. The plan provides for the selection, examination and analysis of "pilot" projects carried out with the use of BIM. It further assumes changes in the regulatory, legislative and technical instruments, as well as in educational standards, by December 2016, and a year later, in 2017, the beginning of teaching the use of information modeling in the field of industrial and civil construction and the preparation of experts. This strategy is being discussed at various conferences, for example [3]. In the US the decision on mass promotion of BIM was taken in 2002-2003. Almost immediately, from about 2004, American universities began to teach students the program Revit (often instead of AutoCAD). This is an example of the understanding that education must precede widespread practical use of a technology. In Russian architectural universities systematic teaching of BIM technology is absent: it is not present in the educational standards, and only some enthusiasts who understand the role of innovative technologies introduce it into the learning process.
The Department of Applied Informatics of the Ural State Academy of Architecture and Arts began to teach Autodesk Revit in the discipline "CAD" in 2009 for students in the specialty 080801 and direction 03.09.03 "Applied Informatics in Architecture". The teaching is certified by the company Autodesk. Many graduates of the department currently successfully use BIM in their practice and contribute to the wider dissemination of BIM technology in Russia. Below, with some examples of graduation design works, we show how BIM technology is being introduced into the educational process at the Department of Applied Informatics.

2. RESULTS OF DESIGN USING BIM TECHNOLOGY
In this part we describe some examples of the effective application of BIM technology in foreign and Russian practice. The main attention is devoted to projects and the corresponding methods of digital design performed by graduates of the Department of Applied Informatics.

2.1. The effectiveness of BIM technology and examples of successful implementation
Integrated BIM technology leads to high-quality projects through compliance with standards and to effective work on the basis of specialized tools and support of all business processes. Thanks to access to all the necessary visualization tools, the process of project approval becomes more convenient and takes less time. Project visualization is used to demonstrate ideas, and presentation tools are important for better evaluation of design alternatives. A significant advantage of BIM is the coordination of all the sections (engineering systems, architecture, structural elements) and linking them together. Information modeling is the only technology that can effectively solve such problems. In order to work effectively with it, the company should design library families for engineering systems, architecture and constructions, and should introduce certain standards and regulations describing the BIM processes in all aspects of design: defining the principles for file names, the places for storage, and the fixed area of responsibility of each participant in the process on the basis of the expanded list of tasks for the project.


An advantage of BIM is the opportunity to take more informed design decisions by calculating the energy efficiency of buildings, assessing manufacturability by means of 3D models, and resolving conflicts before construction. An example of the successful implementation of new technologies for the design of 21st-century cities, thanks to the successful application of BIM at all stages of design and construction, is the Shanghai Tower (architectural project of M. Arthur Gensler Jr. & Associates, Inc., San Francisco) [4]. Without BIM this project cannot be imagined. The parameters of the Shanghai Tower: 121 floors, 632 meters, 570 000 square meters of premises, construction from 1998 to 2014, cost $2.2 billion. In Russian practice, information modeling was introduced in the design of the complex of buildings called "Lakhta Center" in St. Petersburg on the Gulf of Finland [5]. "Lakhta Center" is a multifunctional complex comprising an 86-storey skyscraper. It will include office spaces, movie theaters, swimming pools, a planetarium and more. It will be a world of science and technology, comprising a teaching and research center, specialized laboratories and interactive exhibits.

2.2. Application of BIM in the educational program "Applied Informatics in Architecture"
The Ural State Academy of Architecture and Arts implements a unique educational program, "Applied Informatics in Architecture", which involves the active introduction of computer technology into the educational process. This is an interdisciplinary direction in which architectural design is combined with deep automation of the design process. The discipline "CAD" is fundamental and includes the consistent introduction of students to a range of software products. Among information technologies, the popular Autodesk packages 3ds Max and AutoCAD are primarily used. Further, the model can be imported into the Unity 3D software environment, which allows interactive management of the project. There is the ability to connect equipment (the Oculus Rift helmet) for the generation of virtual reality. This technology is used as an innovative means to promote architectural projects. Another application is the virtual reconstruction of historical buildings and the restoration of lost cultural heritage. To work with the terrain and to design infrastructure, the AutoCAD Civil 3D software is used. The development of project concepts is performed on the InfraWorks 360 platform in order to increase the efficiency of data sharing and collaboration. Systems such as Autodesk Vault and TDMS of the company CSoft are utilized for cataloging and integration of objects of architecture and construction and for collaborative project development. A special place belongs to the introduction of an integrated information technology based on the software package Revit, implementing the principle of building information modeling. The package is designed for architects (Revit Architecture), designers of bearing structures (Revit Structure) and development of engineering systems (Revit MEP).
It provides three-dimensional modeling of the elements of the building and flat drawing of design elements, and the organization of joint work on the project from the concept to the production of working drawings and specifications. The Revit database may contain information about the project at various stages of the life cycle of a building, from conceptual design to construction, operation and disposal. The Navisworks program introduces students to the principles of 4D and 5D modeling.

2.3. Examples of students' projects using BIM
This section includes graduation projects using BIM. These are works related to real objects: industrial, public and residential buildings. Fig. 1 shows the phased design of an innovation techno park for the manufacture of modular products in the Chelyabinsk region. An effective digital model was developed with

parameterized model elements; the parameters determine the behavior of each element of the model and its relationships with other elements. Revit retains the internal dependences of these elements: if an item is changed, the program will detect which associated elements require modification and in what way it should be done. The result of this technology is a significant increase in design quality and a shorter production time for the project.

Fig 1. Stages of the design of the innovative techno park

The work presented in Fig. 2 is devoted to researching the possibilities of software for managing the collaborative development of architectural projects at all stages of the life cycle. As the object, the building of the experimental workshops of the Central Research Institute for Materials in Yekaterinburg was chosen. The main specialization of the company is research activity in the field of metallurgy and materials engineering, and design work. At the beginning of the work on the renovation of the object a number of problems were identified: the integrity of the total territory (up to 1990) had been disrupted because of its division into separate sections; the integrity of the operation of the power systems had been lost, as well as the technical and ecological safety of the territory; some sections of documentation had been lost, and the documentation that was available in the archive did not meet modern requirements. The initial data for the work were the terms of reference for structural modernization, scanned drawings with the plan of arrangement of technical equipment taken from the archives, the results of laser scanning, and the digitized plan of the land. Since the project is sufficiently complex, teamwork was organized through a single database in a single building information model. The participants of the project reported to the BIM manager, who focused on coordinating activities and forming the ultimate standard for working with BIM. The standard specifies who must use which tools in Revit and how, which files from the library to use, how to open files for viewing, how to print them, and so on. The result of this work was an effective project for the reconstruction of the object.



The results of using the standards: all designers worked by the same rules in a single repository; unification of drawings; effective control over the project; improvement of the overall quality of the project; shortening of the implementation time.

Fig 2. Initial state and the project of reconstruction of the experimental plant workshops

Fig 3. Layered digital model of the building for the Ministry of Emergency Situations

Fig. 3 shows the layered digital model of a trade center in Ekaterinburg, developed for the Ministry of Emergency Situations to train members of the organization for rescue operations in case of emergencies. The model makes it possible to disassemble the building into layers and to show all communications.

4. CONCLUSION
Thus, the article describes the effective modern technology of digital modeling of buildings (BIM technology) and the results of its application in real projects.
- The introduction of BIM technology into the practice of architectural design is fixed in Russia at the legislative level, and this is a stimulus for its development.
- For more extensive use of BIM technology it is necessary to develop educational standards for teaching it in universities.


- The Department of Applied Informatics of the Ural State Academy of Architecture and Arts prepares multi-disciplinary IT experts in architecture and has been teaching students BIM technology since 2009.
- The results and examples of the application of BIM technology in graduation projects related to real architectural and construction work on industrial and public buildings are shown.

References
1. Korol M. The working group BIM IPD in Russia // URL: http://isicad.ru/ru/articles.php?article_num=16381
2. The order of the Ministry of Construction // URL: http://www.minstroyrf.ru/upload/iblock/383/prikaz-926pr.pdf
3. The international conference // URL: http://ancb.ru/publication/read/1913
4. Levin D. Is it possible to construct the 121 floors of the Shanghai Tower without BIM? // URL: http://isicad.ru/ru/articles.php?article_num=16402
5. The designer of the grandiose "Lakhta Center" will tell you how to survive in a crisis using BIM // URL: http://isicad.ru/ru/articles.php?article_num=18059


Models, Methods and Algorithms of Data Processing and Storing

Counting Cutter Routing Optimization Parameters for a Cutting Plan Given as a Flat Graph

Egor A. Savitskiy1, Vadim M. Kartak2
1 South Ural State University (national research university), Lenin prospekt, Chelyabinsk, Russia, [email protected]
2 Bashkir State Pedagogical University named after M. Akmullah, Oktyabrskoy Revolutsii st., Ufa, Russia; Ural Federal University named after the first President of Russia B.N. Yeltsin, Mira st., Ekaterinburg, Russia, [email protected]

Keywords: Cutter routing optimisation, cutting plan encoding, graph theory

Abstract: This paper presents a method for evaluating a cutting plan with respect to the cutter-path optimization problem. An efficient algorithm for counting the minimal number of incut points for a cutting plan is shown. The paper gives examples of encoding a cutting plan as a flat graph for several two-dimensional cutting problems.

1. INTRODUCTION
Currently, cutting-and-packing problems are widely used in practice. The main optimization criterion for these problems is usually minimization of the space occupied by the workpieces (material saving). However, additional optimality criteria must be applied at the subsequent steps of the technological preparation of cutting. For example, at the step of constructing the cutting-tool route it is necessary to consider such criteria as the idle and cutting path lengths and the minimization of the number of incut points. This article describes a method for evaluating the possibility of cutting-tool route optimization for a given cutting map. The method is based on the following idea: represent the cutting plan as a planar graph and assess the properties of its OE-covering [1]. We introduce some definitions and notation [3].

2. CUTTING PLAN GRAPH REPRESENTATION
Let plane S be a model of a cutting sheet, and let the model of the cutting plan be a plane graph G with outer face f0 on S. For the components of graph G that are non-homeomorphic to a circle, let the points of contiguity of three or more cutting plan fragments be the vertices, and the corresponding fragments be the edges incident to these vertices. Let a component homeomorphic to a circle be a loop. Define the following functions for each edge e ∈ E(G):

1) v1(e), v2(e) are the vertices incident to edge e;

2) fk(e) is the face placed to the right when one moves along the edge from vertex vk(e) to vertex v{3-k}(e), k = 1, 2;

3) lk(e) is the edge incident to face f{3-k}(e) and vertex vk(e), k = 1, 2;

4) rk(e) is the edge incident to face fk(e) and vertex vk(e), k = 1, 2.
For any part J ⊆ G of the graph, let the set-theoretical union of its inner faces be designated Int(J). If we consider that the cutter has passed along all fragments of J, then Int(J) may be interpreted as the part cut off the sheet. Let any route in the plane graph be considered as a part of the graph with all vertices and edges belonging to this route. This allows formalizing the requirement on the cutter movement path as the


condition that the inner faces of any initial part of a path in the given plane graph G do not intersect the edges of its remaining part.
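The edge functions defined above can be carried directly in a data structure. A minimal sketch follows; the tuple encoding, the function names and the toy triangle graph are illustrative assumptions, not taken from the paper:

```python
from collections import defaultdict

# One edge of the plane cutting-plan graph carries the data from the
# definitions above: its two incident vertices v1(e), v2(e) and the two
# faces f1(e), f2(e) on either side (the functions l_k, r_k can be stored
# the same way, as indices of neighboring edges).
# Toy graph: a triangle on vertices 0, 1, 2; face 0 is the outer face f0,
# face 1 is the single inner face.
edges = [
    (0, 1, 0, 1),   # (v1, v2, f1, f2)
    (1, 2, 0, 1),
    (2, 0, 0, 1),
]

def vertex_degrees(edges):
    """Degree of every vertex of the plane graph given as edge tuples."""
    deg = defaultdict(int)
    for v1, v2, f1, f2 in edges:
        deg[v1] += 1
        deg[v2] += 1
    return dict(deg)

print(vertex_degrees(edges))  # {0: 2, 1: 2, 2: 2}
```

The degree count is exactly what later steps of the paper need, since odd-degree vertices drive the number of incut points.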

Definition 1. A trail C = v1 e1 v2 e2 … vk of a plane graph has ordered enclosing (is an OE-trail) if for any of its initial parts Cl = v1 e1 v2 e2 … el, l ≤ |E|, the condition Int(Cl) ∩ E = ∅ holds.

Definition 2. Let an ordered sequence C^0 = v^0_1 e^0_1 v^0_2 e^0_2 … e^0_{k0} v^0_{k0}, C^1 = v^1_1 e^1_1 v^1_2 e^1_2 … e^1_{k1} v^1_{k1}, …, C^{n-1} = v^{n-1}_1 e^{n-1}_1 … e^{n-1}_{k(n-1)} v^{n-1}_{k(n-1)} of edge-disjoint OE-trails covering graph G and such that for all m, m ≤ n,

Int( ∪_{l=0..m-1} C^l ) ∩ ( ∪_{l=m..n-1} C^l ) = ∅

be called a cover with ordered enclosing (OE-cover).

Definition 3. Let a minimal-cardinality ordered sequence of edge-disjoint OE-trails for a plane graph be called an Eulerian OE-cover.

2.1. Graph encoding for the rectangle cutting problem
A packing P is complete if the left (bottom) side of each rectangle touches either the side (bottom) of the strip or the right (top) side of another rectangle (Fig. 1).

Fig. 1. Sample of (a) a complete and (b) an incomplete packing

In [2] it is shown that the optimal material-saving packing P0 is complete, and a method of encoding it in matrix form is given. Suppose that for some problem (W, m, R) a packing P is known. Mentally perform vertical "cuts" passing through the right side of each rectangle of R; thus the whole packing is divided into vertical bands. Let kx be the number of vertical bands. Similarly, mentally execute horizontal cuts through the top side of the rectangles; ky is the number of horizontal bands. For the sample in Fig. 2 we have kx = 6 and ky = 4.

Fig 2. Vertical and Horizontal split
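The band counts kx and ky can be computed directly from the rectangle placements: kx is the number of distinct right-side x-coordinates, ky the number of distinct top-side y-coordinates. A sketch, assuming rectangles are given as (x, y, w, h) tuples (a made-up encoding, not from the paper):

```python
def band_counts(rects):
    """k_x = number of vertical bands (distinct right sides),
    k_y = number of horizontal bands (distinct top sides)."""
    kx = len({x + w for (x, y, w, h) in rects})
    ky = len({y + h for (x, y, w, h) in rects})
    return kx, ky

# three rectangles as a toy example
rects = [(0, 0, 2, 1), (0, 1, 1, 2), (1, 1, 1, 1)]
print(band_counts(rects))  # (2, 3)
```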

Let us associate with the vertical split the pair: 1) a matrix A^x of size m × kx with entries

a^x_ij = 1 if the i-th rectangle intersects the j-th vertical band, and a^x_ij = 0 otherwise,

and 2) a vector z^x = (z^x_1, z^x_2, …, z^x_kx) of vertical band lengths. Similarly, for the horizontal split we have the pair of a matrix A^y of size m × ky and a vector z^y = (z^y_1, z^y_2, …, z^y_ky). For the packing P in Fig. 2 this gives the matrix A^x of size m × 6 and the matrix A^y of size m × 4, recording which vertical and horizontal bands each rectangle crosses. The matrices A^x and A^y so obtained will be called packing matrices, and the vectors z^x and z^y packing vectors. Some properties of the cutting route can already be determined at the stage of packing matrix construction; for example, we can count the number of odd vertices of the cutting plan, which affects the number of incut points. In [1] a method of converting a rectangle cutting plan into a flat graph is also given. An example of such a graph is shown in Fig. 3.

Fig 3. Example of graph for rectangle packing
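The construction of the packing matrices and packing vectors can be sketched as follows, assuming rectangles are given as (x, y, w, h) tuples (a made-up encoding; an entry of A^x is 1 when the rectangle overlaps the interior of the band):

```python
def packing_matrices(rects):
    """Build packing matrices A^x, A^y and band-length vectors z^x, z^y
    for a complete packing given as (x, y, w, h) tuples."""
    xs = sorted({x + w for (x, y, w, h) in rects})   # right borders of vertical bands
    ys = sorted({y + h for (x, y, w, h) in rects})   # top borders of horizontal bands
    x_left = [0] + xs[:-1]
    y_bot = [0] + ys[:-1]
    zx = [b - a for a, b in zip(x_left, xs)]         # vertical band widths
    zy = [b - a for a, b in zip(y_bot, ys)]          # horizontal band heights
    # a rectangle crosses a band when its interior overlaps the band interior
    Ax = [[1 if x < b and x + w > a else 0 for a, b in zip(x_left, xs)]
          for (x, y, w, h) in rects]
    Ay = [[1 if y < b and y + h > a else 0 for a, b in zip(y_bot, ys)]
          for (x, y, w, h) in rects]
    return Ax, Ay, zx, zy

rects = [(0, 0, 2, 1), (0, 1, 1, 2), (1, 1, 1, 1)]
Ax, Ay, zx, zy = packing_matrices(rects)
print(Ax)  # [[1, 1], [1, 0], [0, 1]]
print(Ay)  # [[1, 0, 0], [0, 1, 1], [0, 1, 0]]
```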

2.2. Graph encoding for figure carving
Note the following rules for converting a figure carving cutting plan into graph format:
– a detail outline described by a complex geometric line can be represented as a single edge of the planar graph;
– united detail edges that could be cut off using the combined cutting technique should be represented by one edge of the graph.
In this way, the cutting plan shown in Fig. 4a can be represented as the planar graph shown in Fig. 4b. It can be seen that the edge (v2, v3) corresponds to a line of combined cutting and the edge (v1, v2) corresponds to the complex shape line.



Fig. 4. Graph encoding for a figure carving cutting plan: a) cutting plan; b) planar graph

3. CUTTER ROUTING OPTIMISATION PARAMETERS
It is well known that the cost of the cutting process depends mainly on three factors: the length of the idle path, the length of the cutting path, and the number of idle passes (incut points) [1]. These values can be estimated using the apparatus of graph theory if the nesting is given in the form of a planar graph with the following correspondences:
– the lengths of the cutting lines correspond to the weights of the graph edges;
– the graph vertices correspond to the possible incut points;
– the matrix of distances between incut points (vertices of the graph) is available.
In [5] it is proved that for any set M representing a matching on the set of vertices of odd degree it is possible to construct an OE-cover of a connected graph using the elements of M for transitions between trails. So the number of idle transitions when "cutting" a connected graph cannot exceed half the number of odd-degree vertices, |Vodd|/2.

The minimal number of incut points for such graphs is |Vodd|/2, or |Vodd|/2 + 1 if no odd-degree vertex of the graph is incident to the outer face. For example, the graph shown in Fig. 5 has two vertices of odd degree, but to comply with the ordered enclosing condition one needs to start "cutting" at a vertex incident to the outer face. The minimal number of incut points for this graph is therefore 2.

Fig 5. Graph without odd degree vertices incident to outer face

Using the arguments above, one can construct an algorithm for determining the minimum number of incut points for a cutting plan given in plane graph format.


Algorithm IncutCount

Input: flat graph G(V, E) as a list of edges with the functions vk(e), fk(e), lk(e), k = 1, 2.
Output: minimal number of incut points.
Step 1. Select the connected components of G using a wavefront algorithm. The complexity of this step does not exceed O(|V|^2 log |V|).
Step 2. Determine the degrees of the vertices of each connected component and their incidence to the outer face using the functions vk(e), lk(e), fk(e). The complexity of this step does not exceed O(|E|).

Step 3. The minimal number of incut points is the sum of |Vodd|/2 and the number of connected components having no odd-degree vertex incident to the outer face. The complexity of this step does not exceed O(|V|).
End of IncutCount
Using the algorithm IncutCount we can estimate, in polynomial time, the minimum number of incut points that must be made when cutting a plan given as graph G. The actual number and positions of the incut points will also depend on the cutting technology. The complexity of the algorithm does not exceed O(|V|^2 log |V| + |E|). Thus, if several variants of placing items on the plan are offered, and for every variant the corresponding graph is given, then we can estimate the minimum required number of incut points, the length of the cut and the idle path using the apparatus of graph theory. At the next step of technological preparation of cutting we can then choose the most convenient cutting plan.
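A minimal sketch of IncutCount in Python. The edge encoding as (v1, v2, f1, f2) tuples and all names are assumptions; for brevity the component search is a plain breadth-first "wavefront":

```python
from collections import defaultdict, deque

def incut_count(edges, outer_face=0):
    """Minimal number of incut points for a plane graph given as a list of
    (v1, v2, f1, f2) tuples (the two vertices and the two faces of each edge).
    Implements the estimate |V_odd|/2 plus one extra incut for every connected
    component whose outer boundary carries no odd-degree vertex."""
    deg = defaultdict(int)
    adj = defaultdict(set)
    on_outer = set()                      # vertices incident to the outer face
    for v1, v2, f1, f2 in edges:
        deg[v1] += 1; deg[v2] += 1
        adj[v1].add(v2); adj[v2].add(v1)
        if outer_face in (f1, f2):
            on_outer.update((v1, v2))
    odd = {v for v, d in deg.items() if d % 2}
    seen, extra = set(), 0
    for start in adj:                     # wavefront (BFS) component search
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(adj[v] - comp)
        seen |= comp
        if not (comp & odd & on_outer):
            extra += 1                    # no odd vertex on this outer boundary
    return len(odd) // 2 + extra

# a single closed contour (triangle, inner face 1) needs one incut
print(incut_count([(0, 1, 0, 1), (1, 2, 0, 1), (2, 0, 0, 1)]))  # 1
```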

4. CONCLUSION
The article describes an option of using graph theory in the evaluation of detail placement on the cutting plan in order to optimize the cutting tool path:
– rules for encoding a cutting plan as a planar graph are proposed;
– a list of cutter routing optimization parameters is given;
– an algorithm is described for determining the minimum number of incut points for a cutting plan encoded as a flat disconnected graph.

Acknowledgments
This work was supported by grant no. 15-37-51010 from the Russian Foundation for Basic Research.

References
1. T.A. Panyukova. Resources optimization in the cutting process technological preparation. Applied Informatics, no. 3(39), pp. 20-32, 2012. (in Russian)
2. V.M. Kartak. The matrix algorithm for finding optimal solutions of two-dimensional strip packing problem. Information Technology, no. 2, pp. 24-30, 2008. (in Russian)
3. T.A. Panyukova. Cover with Ordered Enclosing for Flat Graphs. Electronic Notes in Discrete Mathematics, no. 28, pp. 17-24, 2007.
4. T.A. Panyukova, E.A. Savitskiy. Possible Euler coverings with Ordered Enclosing for multicomponent graph. Proceedings of the conference "Statistics. Modeling. Optimisation", Chelyabinsk (28 Nov - 2 Dec 2011), pp. 154-159. (in Russian)
5. T.A. Panyukova. Optimal Euler covers with Ordered Enclosing for Flat Graphs. Discrete Analysis and Operations Research, vol. 18, no. 2, pp. 64-74, March-April 2011.
6. T.A. Panyukova, E.A. Savitskiy. The software for algorithms of ordered enclosing covering constructing for plane graphs. Vestnik UGATU, vol. 17, no. 6 (59), pp. 65-69, Ufa, Russia: Ufa State Aviation Technical University, 2013, ISSN 1992-6502.



Information Technology for Data Backup and Recovery

Grischuk Konstantin Borisovich, Rizvanov Konstantin Anvarovich

[email protected]

Keywords: Backup, Storage of information, Copy, Restore data

Abstract: At the present stage of development of society and information systems, an increasingly important role is played by the software part, which may be lost for various reasons. In different conditions and environments, backup systems differ in their architectural features, but they are a necessary element of any security policy.

1. INTRODUCTION
At the present stage of development of society and information systems, an increasingly important role is played not so much by the hardware as by the software part, which may be lost for various reasons such as the spread of computer viruses, inept user actions, or damage to the storage device. To insure against the loss of important information and the related difficulties, a backup system needs to be implemented. In different conditions and environments, backup systems differ in architecture and capabilities. Currently, software for organizing backup systems is available from firms and manufacturers in different countries. Consider software from domestic producers: for example, Handy Backup, Acronis True Image, Paragon and others.
1) Handy Backup [1] is a versatile backup tool. The program supports a large number of methods and ways to preserve important information and allows storing data using an online service.

Fig. 1. Decomposition of the data backup ("Add file") and restore ("Restore file") processes (resources: RAM, HDD, CPU, time; steps: reading settings, getting the file list for processing, integrity checking, backup copy, log)

Handy Backup successfully implements a backup viewing tool. As you work with documents, their content changes and new folders and files appear. Depending on their status, their color changes: identical files are displayed in black, differing ones in purple, excluded ones in gray, and so on. Handy Backup can run as a service; depending on the status of the program, its icon in the system tray changes.

2) Comodo Backup [2] is a program for file backup and restoring backed-up data. Just as in Handy Backup, backups can be made to a local or network drive, an FTP server or optical media. It supports recording to CD/DVD, e-mail notifications, extended log files, real-time backup, file synchronization, archiving, password protection, advanced filtering, checking the integrity of a backup before restoring it, and many other features. Unlike Handy Backup, Comodo Backup cannot be launched as a service, but it can work with cloud services.
3) Acronis True Image [3] is a powerful tool for creating exact images of hard drives and individual partitions, including absolutely all data, applications and operating systems, which can be restored at any time. The main features distinguishing Acronis True Image from its competitors are ease of use, flexibility of settings and an extensive selection of instruments. Acronis True Image supports all storage devices and all file systems of Windows and Linux and of systems based on these operating systems. After installing Acronis True Image you can immediately start creating backups. Unlike the previous backup systems, Acronis Recovery Manager can be loaded by pressing F11, after which it starts working and provides a fully functional recovery environment. The Try&Decide feature allows you to first test a setup in order to understand whether to save the changes to the system or to abandon them completely. Thanks to the Acronis Mount Image technology, backup data can be accessed without restoring; Acronis is fully integrated into Windows Explorer.
4) Analysis of FBackup [4] has shown that it is a quick and easy-to-use utility for backing up and restoring data. Data can be backed up or copied while maintaining the original look and structure. Among the advantages of FBackup are the possibility of a complex search with a given nesting depth and selection of only the necessary files; however, this product has minimal functionality and consequently cannot be used to create exact copies of hard disks or partitions.
5) Paragon Backup & Recovery 14 Compact [5] is designed to safely back up and restore PCs and laptops running Windows. It quickly recovers files and folders, the operating system, or even an external hard drive to any chosen medium. The full version provides protection of the entire operating system and critical data and is able to recover them after an accident. Various backup schemes are implemented, incremental and differential, which reduce the volume of data to be copied. Granular data recovery and a powerful set of filters allow configuring automatic data recovery. It works with both real and virtual devices. Centralized backup management is possible after installing the additional application Paragon Remote Management.
Analysis of backup and data restoration systems has shown that they are a necessary element of any security policy. The level of development of this class of solutions makes the backup process fast and convenient for users. Attention is paid not only to the speed of backup, but also to the speed of recovery. Existing backup and recovery systems are continuously evolving and include new technologies for working with data, such as cloud storage, the use of virtual machines and the ability to work with new types of protocols and storage devices. We apply the method of expert estimations for the analysis of the software discussed above.



Table 1. Comparison of program functions

Function | 1 | 2 | 3 | 4 | 5 | Weight | Sum
Backup to online developer services | 0 | 1 | 1 | 0 | 0 | 0,01 | 0,05
Creating/restoring images of logical drives | 1 | 1 | 1 | 0 | 1 | 0,2 | 0,08
Protecting archives | 1 | 1 | 1 | 1 | 1 | 0,2 | 1
Compressed archives | 1 | 1 | 1 | 0 | 1 | 0,1 | 0,4
Recording backup to CD/DVD | 0 | 0 | 0 | 0 | 0 | 0,01 | 0,03
Backing up over the LAN | 1 | 1 | 1 | 1 | 1 | 0,1 | 0,5
Incremental backup | 1 | 1 | 1 | 1 | 1 | 0,1 | 0,5
Differential backup | 1 | 1 | 1 | 1 | 0 | 0,1 | 0,4
Tracking versions of a file | 1 | 1 | 1 | 0 | 1 | 0,1 | 0,4
Running as a service | 1 | 0 | 0 | 1 | 0 | 0,1 | 0,2
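The expert-estimation comparison can be reproduced as a weighted sum of the feature flags from Table 1. This is a generic sketch of the method: the per-product scoring rule (sum of weights over supported features) is an assumption, since the paper does not state the formula behind its Sum column.

```python
features = {
    # feature: ([Handy Backup, Comodo, Acronis, FBackup, Paragon], weight)
    "online backup service":      ([0, 1, 1, 0, 0], 0.01),
    "drive image create/restore": ([1, 1, 1, 0, 1], 0.2),
    "archive protection":         ([1, 1, 1, 1, 1], 0.2),
    "compressed archives":        ([1, 1, 1, 0, 1], 0.1),
    "CD/DVD recording":           ([0, 0, 0, 0, 0], 0.01),
    "LAN backup":                 ([1, 1, 1, 1, 1], 0.1),
    "incremental backup":         ([1, 1, 1, 1, 1], 0.1),
    "differential backup":        ([1, 1, 1, 1, 0], 0.1),
    "file version tracking":      ([1, 1, 1, 0, 1], 0.1),
    "runs as a service":          ([1, 0, 0, 1, 0], 0.1),
}

def product_scores(features):
    """Weighted score per product: sum of weights over supported features."""
    n = len(next(iter(features.values()))[0])
    scores = [0.0] * n
    for flags, w in features.values():
        for i, flag in enumerate(flags):
            scores[i] += w * flag
    return [round(s, 2) for s in scores]

print(product_scores(features))  # [1.0, 0.91, 0.91, 0.6, 0.8]
```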

1. Handy Backup; 2. Comodo Backup; 3. Acronis True Image; 4. FBackup; 5. Paragon Backup & Recovery

References
1. Handy Backup [official website]. URL: http://www.handybackup.ru (date: 22.11.2014).
2. Comodo Backup [official website]. URL: https://backup.comodo.com (date: 22.11.2014).
3. Acronis True Image [official website]. URL: http://www.acronis.com (date: 22.11.2014).
4. FBackup [official website]. URL: http://www.fbackupp.com/ru/ (date: 22.11.2014).
5. Paragon [official website]. URL: http://www.paragon-software.com/home/ (date: 22.11.2014).


The Use of Probabilistic and Statistical Methods to Study Students' Opinions about the Quality of the Educational Process

Rozanova Larisa F., Markevich Irina A., Turutina Anastasiya D.
Department of Computational Mathematics and Cybernetics, USATU, K. Marx St., 12, Ufa, Russian Federation
[email protected]

Keywords: quality of education, multiple choice models, selection of model, logit-model

Abstract: In the last decade, the development of quality assessment approaches, including in the sphere of education, has been a subject of close attention of various interested parties. An important point of a quality assurance system is understanding the requirements and expectations of consumers, assessing the compliance of current practice with these requirements, and scoring the satisfaction of all groups of customers. Under these circumstances, studying the quality of education becomes more and more topical. Of special interest here is the opinion of students on the quality of education, as they belong to the group of stakeholders interested in the educational process being combined with the process of forming an expert competent in professional activity. Therefore, the article considers an approach to creating a probabilistic and statistical model of satisfaction with the quality of the educational process on the basis of research into students' opinions.

1. INTRODUCTION
Considerable changes in educational techniques and improvements in the system of professional education create a situation in which the educational institutions of the country need to pay more attention to questions of the efficiency and quality of their activity. Objectives at the state level are stated in the following documents:
• "The concept of development of education of the Russian Federation till 2020" [1],
• "Priorities of development of education in the Russian Federation" [2],
• "The national doctrine of education in the Russian Federation" [3].
The authors of regulations on professional education believe that quality education has to satisfy the demands of the students, as they are the main participants in training. Therefore, it is necessary to change the principles of quality management of education in educational institutions by means of assessing students' satisfaction with the quality of training. This was emphasized by the Russian President V. V. Putin in the assignment of May 22, 2014 [4]. In the future this assessment of the work done will be included in the system of performance indicators of educational organizations of higher education. This message of the president is guided by the law "About Education in the Russian Federation" [5], according to which:
• students become full participants in the management of the educational organization and have the right to participate in forming the content of their professional education;
• "quality of education" is understood as the characteristic of an education system reflecting the degree of compliance of the actually achieved educational results with standard requirements and social and personal expectations.

2. ANALYSIS OF THE OBJECT OF RESEARCH
To assess satisfaction with the quality of the educational process, a survey of senior-year students was conducted. Respondents were selected by the method of nested (cluster) sampling. Students trained in integrated groups of engineering, economic and information orientation participated in the research. The sample size is specified in Table 1.


Table 1. Number of surveyed students

Education level, direction | Number of respondents (persons)
BACHELOR DEGREE | 673
– Engineering | 249
– Economic | 175
– Information | 249
SPECIALIST PROGRAMME | 414
– Engineering | 179
– Economic | 178
– Information | 57
TOTAL | 1087

At the first stage, the motivation for receiving higher education was studied. The results of the poll of final-year students, given in Fig. 1, showed that they generally chose the higher education institution consciously, in accordance with their inclinations, and are motivated for vocational training.

Fig. 1. Analysis of motivations for receiving higher education: higher education gives a chance of career development and prestige: 45.13%; acquisition of necessary knowledge and competences: 34.88%; the diploma is necessary, since otherwise no work can be found: 17.37%; parents (relatives) insisted: 2.63%

At the second stage, the correspondence between satisfaction with the quality of the educational process and motivation for training was estimated.



Fig. 2. Correspondence analysis of satisfaction with the quality of education and motivations for education (satisfaction grades: unsatisfactory, satisfactory, good, excellent; motivations: parents insisted, acquisition of competences, chance to find work, career and prestige)

The results of the analysis given in Fig. 2 show that the main share of students who estimated the quality of education as "good" or "excellent" are those graduates who are motivated for training, that is, who want to acquire the necessary knowledge and competences. The results above mean that there is coherence between the interests of the higher education institution, expressed in the curriculum, and the perception of its students. However, the overall picture of productivity, both for domestic higher schools in general and for UGATU, is characterized by a depressing percentage of Russian graduates who do not work in their specialty: according to Rosstat and various experts, this share reaches 60%. The results of the poll of final-year UGATU students given in Fig. 3 revealed that no more than 50% of graduates will work in their profession; the other students either are not going to work in their profession or have not yet made a final choice. And this occurs under conditions of qualitative and quantitative shortage of engineering personnel and personnel in the field of information and communication activity. In these conditions, an important activity is determining students' satisfaction with various aspects of the educational process in the higher education institution, which allows revealing weaknesses and purposefully carrying out measures for their improvement. For this purpose the university monitors the quality of specialist training [6, 7]. Based on the monitoring results, a probabilistic and statistical model for assessing the quality of the educational process is constructed. On the basis of the model, problem points and directions of development, improvement and optimization of the educational process are revealed.



Fig. 3. Comparative analysis of answers. Bachelor degree: will work in the specialty: 45%; will work as it turns out: 41%; won't work in the specialty: 7%; find it difficult to answer: 7%. Specialist programme: will work in the specialty: 41%; will work as it turns out: 48%; won't work in the specialty: 4%; find it difficult to answer: 7%

3. DESCRIPTION OF BASIC DATA
To clarify students' opinions on various aspects of the educational process, a questionnaire including 10 criteria was developed:
• structure of the educational program (all disciplines whose study, in your opinion, is necessary for future professional activity are present; there is no duplication of disciplines; there is no violation of the logic of teaching disciplines, etc.),
• content of the taught disciplines,
• possibility of choosing disciplines,
• textbooks, methodical aids, lectures,
• computer support of the educational process,
• quality of auditoriums and the premises of chairs,
• quality of the funds and reading room of the library,
• quality of educational laboratories and equipment,
• availability of classes with teachers in an interactive form, master classes,
• availability of the necessary information concerning the educational process.
The dependent variable of the research, "satisfaction with the quality of the educational process", has the following alternatives:
• unsatisfactory;
• satisfactory;
• good;
• excellent.


Since the dependent variable is ordinal, with ranked alternatives, models of multiple choice with ordered alternatives are constructed for assessing satisfaction with the quality of education; these include the probit, logit and extreme-value (gompit) models [8].
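In the ordered-alternative models mentioned here, the category probabilities come from a latent variable and threshold values. A minimal sketch of the ordered logit case follows; the coefficients and thresholds below are made-up numbers for illustration, not estimates from the survey:

```python
import math

def ordered_logit_probs(x, beta, cutpoints):
    """P(y = j) for an ordered logit model:
    P(y <= j) = F(mu_j - x . beta), where F is the logistic CDF and the
    cutpoints mu_j are increasing; categories are 0..len(cutpoints)."""
    F = lambda t: 1.0 / (1.0 + math.exp(-t))
    xb = sum(xi * bi for xi, bi in zip(x, beta))
    cum = [F(mu - xb) for mu in cutpoints] + [1.0]   # cumulative probabilities
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# illustrative: three questionnaire factors, four satisfaction grades
beta = [0.8, 0.5, 0.3]          # made-up factor weights
cutpoints = [-1.0, 0.5, 2.0]    # thresholds between the four grades
p = ordered_logit_probs([1, 1, 0], beta, cutpoints)
print([round(v, 3) for v in p])
```

Swapping the logistic CDF for the standard normal CDF gives the probit variant, and for the Gompertz-type CDF the gompit variant, which is why the three models are selected jointly by information criteria.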

4. CREATION OF MODEL

The research is conducted in several steps (Fig. 4):

Input data; check of the significance of each factor (Prob < 0,05; |Zrasch| > Ztabl); check of the significance of the latent variable (Prob < 0,05; |Zrasch| > Ztabl); selection of the model on the basis of the Akaike, Schwarz and Hannan-Quinn information criteria; check of the hypothesis of significance of the constructed model by the likelihood-ratio test; check of the adequacy of the constructed model by Wald's test; interpretation of the results.

Fig. 4. Scheme of carrying out the research

1. Check of the significant influence of each factor on satisfaction with the quality of the educational process. The following two inequalities must hold: Prob < 0,05 and |Zrasch| > Ztabl, where Zrasch is the calculated value of the standard normal statistic.

Ztabl is the tabular value of the standard normal distribution. Two hypotheses are stated:
• H0: beta_i = 0, i = 1, …, 10;
• H1: beta_i ≠ 0, i = 1, …, 10.
If |Zrasch| > 1,96 (Ztabl = 1,96 at the significance level 0,05), the null hypothesis is rejected and the alternative is accepted, which testifies to the statistical significance of the factor. Insignificant factors are those with |Zrasch| < 1,96; they have no impact on satisfaction with the quality of the educational process and should therefore be removed from the model.
2. Check of the statistical significance of the limit values of the latent variable connected with the dependent variable. Two hypotheses are stated:
• H0: µ = 0;
• H1: µ ≠ 0.
If the probability for a limit value of the latent variable is < 0,05, the null hypothesis is rejected and the alternative is accepted, which testifies to the statistical significance of the limit value of the latent variable.
3. Selection among the logit, probit and gompit models is carried out proceeding from the minimum of the Akaike, Schwarz and Hannan-Quinn information criteria.
4. The hypothesis of significance of the constructed model is checked on the basis of the likelihood-ratio test. Two hypotheses are stated:
• H0: the model is not significant as a whole;
• H1: the model is significant as a whole.
Since Prob(LR statistic) = 0 < 0,05, the null hypothesis is rejected and the alternative is accepted, which testifies that the model is significant.
5. The adequacy of the constructed model is checked on the basis of the statistical significance of each factor according to Wald's test. Two hypotheses are stated:
• H0: the coefficient is not significant and equals 0;
• H1: the coefficient is significant and does not equal 0.

5. RESULTS OF RESEARCH FOR STUDENTS OF THE INFORMATION PROFILE
Students of the information profile noted that the important factors are the structure of the educational program, the content of the taught disciplines and the computer support of the educational process:
• structure of the educational program (all disciplines necessary, in the students' opinion, for future professional activity are present; there is no duplication of disciplines; there is no violation of the logic of teaching disciplines, etc.);
• content of the taught disciplines (the fullness of the studied disciplines corresponds to the professional duties of a person working in the specialty);
• computer support of the educational process.

6. CONCLUSION
Carrying out research directed at identifying consumers' opinions on the quality of services allows creating an objective information base for improving and updating administrative decisions. An important role here is played by the probabilistic and statistical approach, which allows revealing structural relations between the studied phenomena and the factors influencing them. The analysis of students' opinions on the quality of the educational process provided in the article is a basis for improving the quality of specialist training and leads to satisfaction with the conditions and results of training.

References
1. The concept of development of education of the Russian Federation till 2020. 2014. URL: http://government.ru/media/files/mlorxfXbbCk.pdf

2. Priorities of development of education in the Russian Federation. 2014. URL: http://government.ru/media/files/mlorxfXbbCk.pdf
3. The national doctrine of education in the Russian Federation. 2000. URL: http://www.rg.ru/2000/10/11/doktrina-dok.html
4. Order of the President no. 1148, item 2, of May 22, 2014. URL: http://kremlin.ru/acts/assignments/orders/21112
5. The act of the Russian Federation FZ-273 "About education in the Russian Federation".
6. Pavlyuchenko A. V. Providing quality assurance of education in a higher education institution with the involvement of students. Young Scientist, 2015, no. 11, pp. 1443-1446.
7. Ezhov S., Halliste O. Training in a higher educational institution through the eyes of the modern student: on the example of an institute of technology. Telescope, 2012, no. 96, pp. 22-27.
8. Lackman I.A., Maksimenko Z.V., Grigorchuk T.I., Rezvanova E.R. Optimization of works on collecting problem debt for management of the activity of a collection division/agency. Eurasian Legal Magazine, 2015, no. 4 (83), pp. 139-142.


Statistical analysis of individual tasks on probability theory

Karina Kostenko, Yuliy Katsman Department of Computer Engineering, Institute of Cybernetics, Tomsk Polytechnic University, Tomsk, Russia [email protected]

Keywords: Statistical analysis, Probability theory, Testing, Scatter plot, Sample characteristic, Rank, Median, Cluster.

Abstract: Over the years, improving the quality of basic educational programs (BEP) at Tomsk Polytechnic University has received increased attention. One of the main objectives in improving the educational process and the BEP is optimizing the procedures for monitoring BEP quality for their continuous improvement. This work contains the results of a statistical analysis of the quality of the tests used to monitor students' knowledge of probability theory. The analysis showed a significant difference (non-parallelism) between the variants of the individual tasks, and on its basis a method for providing parallel tests is proposed.

1. INTRODUCTION Today assessment of the quality of test materials that are used to test the knowledge and skills of students is quite time-consuming and difficult task, which is relevant for new disciplines, and for those training in production for several years [1]. Test materials are usually presented in the several variants, so that there is a problem of parallelism that can complicate the assessment of students' knowledge and its objectivity. Usually the analysis of quality control materials much attention is paid to the parallel variants of the task [2, 3]. However, if the use of the modern theory test – Item Response Theory (IRT) [4] for estimate the latent factors required to provide for one test a minimum sample size of 200 to 1000 observations, the classical statistical theory allows us to obtain the estimates of the parameters, limited to a much smaller number of experiments. 2. THE MAIN AIM OF THE STUDY The main aim of this work was a statistical analysis of parallel variants of the individual tasks on probability theory for assesses the quality of knowledge learned by students. 3. THE FORMULATION OF THE TASK On the results of the control work (testing) the minimum score (3 points) was given for the attempt to solve at least one task, the maximum score (15 points) - the right solution for three tasks. All the results of the control work on probability theory were processed in licensed program Statistica. Furthermore, it was required to determine the equivalence of the variants through using Kruskal-Wallis ANOVA test, Median test and Sheffe test. 4. ANALYSIS Before analyzing the parallel of the variants was used the module of the descriptive statistics and was excluded unrepresentative variants (4 and under observation). Further, it was assumed that these options are parallel (equivalent), and then the evaluation of the students should be adequate to their knowledge, rather than the complexity of tickets. 
Therefore, point and interval estimates were calculated for each variant; if only random factors were at work, approximately equal average scores and variances would be expected for each variant. The actual estimates for each variant are shown in Fig. 1 as a scatter plot.
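The point and interval estimates mentioned above can be sketched in a few lines of code. The scores below are hypothetical, and the normal-approximation 95% confidence interval is our own simplification of what Statistica computes, not the paper's exact procedure.

```python
import statistics

# Hypothetical scores (3..15 points) for three test variants.
scores_by_variant = {
    "variant 1": [6, 7, 9, 5, 8, 7, 6],
    "variant 2": [9, 10, 8, 11, 9, 10],
    "variant 3": [12, 11, 13, 10, 12, 14],
}

def point_and_interval(xs, z=1.96):
    """Mean and a normal-approximation 95% confidence interval."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)        # sample standard deviation
    half = z * s / len(xs) ** 0.5   # half-width of the interval
    return m, (m - half, m + half)

for name, xs in scores_by_variant.items():
    mean, (lo, hi) = point_and_interval(xs)
    print(f"{name}: mean={mean:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

Plotting these means and intervals per variant gives a picture of the kind shown in Fig. 1.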


Models, Methods and Algorithms of Data Processing and Storing

Fig. 1. Scatter plots for the different variants of the test

The present results clearly showed uneven difficulty (non-parallelism) of the different test variants. Partitioning of the variants into complexity groups was carried out by cluster analysis (k-means) on a single variable, the score; to ensure an approximately equal number of cases in each group and the homogeneity of observations within a group, all variants were sorted by average score:
- variants with complex tasks, where the average score was less than 7.8;
- variants with tasks of medium complexity, where the average score ranged between 7.8 and 9.3;
- variants with simple tasks, where the average score exceeded 9.3.
The descriptive statistics module was instrumental in obtaining point estimates for all observations and for each cluster separately. This module yielded the following results:
- the maximum difference in the estimates between the first and third clusters is less than three points;
- almost all point characteristics for the second cluster and for the entire set of observations coincide;
- 50% of the results in the second cluster exceeded 8.6 points, while in the first cluster 50% of the results did not exceed 6.6 points, and in the third cluster 50% exceeded 10.5 points;
- the variances for all observations and for the clusters can be considered almost equal (the ratio of variances is less than 2);
- analysis of the skewness and kurtosis coefficients showed that the distribution of scores in each group is asymmetric and differs substantially from the Gaussian distribution.
To test the hypothesis of a significant influence of the factor, univariate analysis was conducted in the Statistica package at the significance level α = 0.05. Analysis of the rank sums in the groups (clusters) obtained from the Kruskal-Wallis test confirmed that the maximum scores were observed in the cluster whose variants had easy tasks, and the minimum in the cluster whose variants had difficult tasks.
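The Kruskal-Wallis statistic used here can be computed by hand from rank sums. The sketch below (pure Python, tie correction omitted for brevity, cluster data hypothetical) illustrates the test: H is compared with the chi-square critical value for two degrees of freedom at α = 0.05.

```python
def kruskal_wallis_H(groups):
    """Kruskal-Wallis H statistic (tie correction omitted for brevity)."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n_total:                      # assign average ranks to ties
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2          # average of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12 / (n_total * (n_total + 1)) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)) - 3 * (n_total + 1)

# Hypothetical score clusters: hard, medium and easy variants.
hard   = [5, 6, 6, 7, 7]
medium = [8, 8, 9, 9, 7]
easy   = [10, 11, 12, 10, 11]
H = kruskal_wallis_H([hard, medium, easy])
# Compare H with the chi-square critical value for df = 2, alpha = 0.05.
print(f"H = {H:.2f}, significant at 0.05: {H > 5.991}")
```

For these illustrative clusters H far exceeds the critical value, mirroring the paper's conclusion that the complexity factor is significant.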
Analysis of the results of another rank test, the median test, presented in a table, shows that:
- the top half of the table contains its maximum value for the cluster that corresponded to tasks with a high level of complexity, which produced the minimum estimates;
- the bottom half of the table contains its maximum value for the cluster that corresponded to tasks with a low level of difficulty, which produced the maximum estimates.
The hypothesis of the influence of the factor, tested by the median test as well as by the Kruskal-Wallis test, showed that the effect of the factor is significant.
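The median test table described above reduces to a contingency table of counts above and not above the grand median per group. A minimal sketch, with hypothetical groups:

```python
def median_test_table(groups):
    """Contingency table for the median test: per-group counts of scores
    above and not above the grand median."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    median = (pooled[(n - 1) // 2] + pooled[n // 2]) / 2
    above = [sum(x > median for x in g) for g in groups]
    not_above = [len(g) - a for g, a in zip(groups, above)]
    return median, above, not_above

# Hypothetical hard vs easy clusters: the "above median" counts concentrate
# in the easy group, as in the paper's table.
m, above, not_above = median_test_table([[5, 6, 7, 7], [10, 11, 12, 10]])
print(m, above, not_above)
```

A chi-square statistic computed from this table then tests the same factor-influence hypothesis as the Kruskal-Wallis test.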

1st International DSPTech Conference on Technologies of Digital Signal Processing and Storing

Further, the Mann-Whitney test was used to test the hypothesis of homogeneity of two samples (clusters). The estimated effects are shown graphically in Fig. 2.

Fig. 2. Scatter plots for all clusters

The results in Fig. 2 show a significant difference in the point and interval characteristics of the different groups. Analyzing them, one can try to answer the question: which pairs of groups of task variants can be considered significantly different? To answer this question we compared the averages using the Scheffé test for different pairs of factor levels; the analysis showed a significant difference between the average scores for the various pairs of clusters, which supports the alternative hypothesis of a significant factor effect.

5. CONCLUSION The quality of monitoring in a taught subject is to a large extent determined by the quality of the control materials (tests) for individual tasks. One of the major characteristics of the variants is their parallelism. Statistical analysis of the monitoring of individual tasks on probability theory showed that the variants of the test tasks are not parallel. On the basis of the research, definitive conclusions about the quality of the proposed test items were drawn: 4 of the 39 available variants of individual tasks were excluded due to non-representative sampling; 9 variants contain tasks of complex level with the minimum estimates received for them, 9 variants contain tasks of medium level, and 12 variants contain tasks of easy level. The conducted research showed that the first and second clusters can be expanded by adding new variants: variants with two or three easy tasks can be excluded from the third cluster, taking the average score into account, and replaced with more complex tasks from the first and second clusters, again taking the average score into account. This paper shows that even for tasks that have been used for a number of years, the task of ensuring parallelism remains relevant. The statistical methods proposed in this work allow this problem to be solved successfully, as demonstrated by the example of the control tasks on probability theory.

References
1. Katsman Y.Y. "Statistical analysis of individual tasks on probability theory". In: Bulletin of the Tomsk Polytechnic University (Bulletin TPU), Tomsk, Russia, 2014, pp. 84-90.
2. Suen H.K., Lei P.W. "Classical versus Generalizability theory of measurement". Available at: http://suen.educ.psu.edu/~hsuen/pubs/Gtheory.pdf (accessed on 10.11.2015).


3. Ilyukhin B.V., Permyakov O.E. "Quality assurance issues and directions in perfecting the system of competitive selection of applicants in the Russian Federation". Tomsk Polytechnic University, Tomsk, Russia, 2007.
4. Rasch G. "Probabilistic models for some intelligence and attainment tests". Danish Institute for Educational Research, Copenhagen, Denmark, 1960.


Mathematical Model of Polyomino Tiling for Phased Array Design

Aigul Fabarisova1, Vadim Kartak2
1Department of Applied Computer Science, Bashkir State Pedagogical University named after M. Akmullah, Oktyabrskoy Revolutsii st. 3a, Ufa, Russia, [email protected]
2Department of Applied Computer Science, Bashkir State Pedagogical University named after M. Akmullah, Oktyabrskoy Revolutsii st. 3a, Ufa, Russia, [email protected]

Keywords: optimization model, polyomino, integer programming, phased array design

Abstract: The problem of polyomino tiling can be considered as an integer programming model. A mathematical model of tiling with polyominoes is described. The approach can be applied to phased array design, where polyomino-shaped subarrays are used to reduce the cost of the array and to avoid periodicity of the antenna layout. Two cases are discussed: tiling with L-shaped trominoes and tiling with L-shaped tetrominoes. Simulation results are presented in order to evaluate the performance of the approach.

1. INTRODUCTION This paper describes the problem of polyomino tiling motivated by a military application, namely phased array design. The antenna is constructed from subarrays. Regularity of the antenna structure can cause undesirable radiation, and for this reason polyomino-shaped subarrays are used. Moore and Robson showed that the problem of tiling a finite region with an unrestricted number of copies of a polyomino is NP-complete [1], which means that no exact method is known for computing solutions to this problem in a reasonable amount of time. So there is an optimization problem of tiling a planar structure with irregular polyomino shapes.

2. PROBLEM OVERVIEW The problem of using polyomino-shaped subarrays in phased array design was described by Mailloux et al. [2]. Since then, several researchers have presented results on this problem. One approach to the tiling problem is to use heuristic methods such as the genetic algorithm, as shown in the studies of Gwee and Lim. Their algorithm was tested in phased array design by Chirikov et al. [3]. Chirikov et al. also presented their own approach, called the Snowball algorithm, which showed better results. In the case of tiling with L-trominoes using the Gwee and Lim approach, the peak sidelobe level (SLL) is -27.11 dB for r = 1.3, while the Snowball algorithm achieved SLL = -29.1 dB for r = 1.3. Still, their research lacks a criterion for estimating the irregularity of a structure. Another kind of approach uses mathematical programming. Galiev and Karpova considered the minimal covering problem as a linear integer program [4]. Bosch and Olivieri used integer programming to design sets of tiles for Conway's Game of Life [5]. Karademir used integer programming in the phased array design application [6]. He formulated the irregular polyomino tiling problem as "a nonlinear exact set covering model, where irregularity of a tiling is incorporated into the objective function using the information-theoretic entropy concept". The phased array antenna simulations presented in that work include only the case of tiling with the L-octomino shape. Our research is aimed at testing integer programming methods on the polyomino tiling problem and at improving the current results in the phased array design application.

3. MATHEMATICAL MODEL

Let an N×N element structure be given in the Cartesian coordinate system, where the Ox-axis lies along the bottom side of the structure and the Oy-axis along its left side. The problem is to tile the structure with one type of polyomino. For the given polyomino, the center of the figure, called C, and k orientations are fixed (Fig. 1).

Fig. 1. Four orientations of the L-tromino.

Let us match every placement of the polyomino in orientation k to a binary variable z^k_ij ∈ {0,1}: z^k_ij = 1 if the coordinate (i, j) contains the center C of the polyomino in orientation k, and z^k_ij = 0 otherwise. The objective is to maximize the sum of all variables z^k_ij:

max ∑_{i=1}^{N} ∑_{j=1}^{N} ∑_{t=1}^{k} z^t_{ij}   (1)

subject to:

z^1_{i,j} + z^1_{i+1,j} + z^1_{i,j+1} ≤ 1   (2)

z^2_{i,j} + z^2_{i+1,j+1} + z^2_{i,j+1} ≤ 1   (3)

z^3_{i+1,j+1} + z^3_{i+1,j} + z^3_{i,j+1} ≤ 1   (4)

z^4_{i,j} + z^4_{i+1,j} + z^4_{i+1,j+1} ≤ 1   (5)

In order to eliminate polyomino overlapping, a set of such constraints must be added to the described model. For the phased array design problem it is also necessary to eliminate periodicity of the layout. We propose placing figures randomly on the structure before optimization with the solver; for this purpose additional constraints are added to the IP model.
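As an illustration of the four L-tromino orientations underlying constraints (2)-(5), the following sketch enumerates the cells each orientation covers relative to an anchor cell and searches for an exact non-overlapping tiling by backtracking. This is a toy substitute for the IP solver, and the board sizes are illustrative only.

```python
# Cells covered by an L-tromino in each of the four orientations,
# relative to an anchor cell, following the cell patterns of (2)-(5).
ORIENTATIONS = [
    [(0, 0), (1, 0), (0, 1)],
    [(0, 0), (1, 1), (0, 1)],
    [(1, 1), (1, 0), (0, 1)],
    [(0, 0), (1, 0), (1, 1)],
]

def tile(rows, cols):
    """Exact L-tromino tiling of a rows x cols board by backtracking.
    Returns a dict (cell -> piece id), or None if no tiling exists."""
    if (rows * cols) % 3:
        return None                          # area must be divisible by 3
    board = {}

    def solve(piece_id):
        free = next(((i, j) for i in range(rows) for j in range(cols)
                     if (i, j) not in board), None)
        if free is None:
            return True                      # every cell covered
        for shape in ORIENTATIONS:
            for di, dj in shape:             # place so the piece covers `free`
                ai, aj = free[0] - di, free[1] - dj
                cells = [(ai + x, aj + y) for x, y in shape]
                if all(0 <= x < rows and 0 <= y < cols
                       and (x, y) not in board for x, y in cells):
                    for c in cells:          # place piece (no overlap)
                        board[c] = piece_id
                    if solve(piece_id + 1):
                        return True
                    for c in cells:          # undo and try next placement
                        del board[c]
        return False

    return dict(board) if solve(0) else None
```

For example, a 2×3 board is tiled by two trominoes, while the 3×3 board, despite having area divisible by 3, admits no L-tromino tiling, so `tile(3, 3)` returns None.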

4. SIMULATION RESULTS Mathematical models for two cases have been developed: tiling with L-trominoes and with L-tetrominoes. An algorithm for tiling a large-scale structure was also implemented. These mathematical models, along with the tiling algorithm, were implemented in C++ using the CBC solver. The obtained tilings were used to simulate antenna performance. Simulation results show that for the case of tiling with L-trominoes the peak SLL is -28.05 dB for r = 1.3, which lies between the Gwee-Lim algorithm and Snowball algorithm results presented in the research of Chirikov et al. [3]. The antenna parameter simulation is shown in Fig. 2. In both cases the fill ratio of the structure is close to the maximum (99.3% for the L-tromino case and 100% for the L-tetromino case). This is a good result that could be improved in the future.



Fig. 2. Phased array antenna parameter simulation.

5. CONCLUSION A mathematical model for polyomino tiling based on integer programming has been presented. This model can be applied to phased array design. Antenna simulation results are presented. In the case of tiling with L-trominoes the simulation of phased array parameters shows good suppression of sidelobes, and in both cases the structure fill ratio is close to the maximum.

References
1. Moore C., Robson J. "Hard tiling problems with simple tiles". Discrete & Computational Geometry. 2001; 26: 573-590.
2. Mailloux R., Santarelli S., Roberts T., Luu D. "Irregular polyomino-shaped subarrays for space-based active arrays". International Journal of Antennas and Propagation. 2009; Vol. 2009, 9 p.
3. Chirikov R., Rocca P., Bagmanov V. et al. "Algorithm for phased antenna array design for satellite communications". Vestnik USATU. 2013; Vol. 17, No. 4(57), pp. 159-166.
4. Galiev Sh.I., Karpova M.A. "Optimization of a multiple covering of a bounded set with circles". Zh. Vychisl. Mat. Mat. Fiz. 2010; 50:4, pp. 757-769.
5. Bosch R., Olivieri J. "Designing Game of Life mosaics with integer programming". Journal of Mathematics and the Arts. 2014; 8:3-4, pp. 120-132.
6. Karademir S. "Essays on integer programming in military and power management applications". Doctoral Dissertation. University of Pittsburgh, 2013. Available at: http://d-scholarship.pitt.edu/19341/ (accessed 12 October 2015).


Formation of a Cross-Platform Structural System Model of Business Processes on the Basis of the Chomsky Hierarchy of Grammars

Dmitry Shamidanov1
1Automated and Management Control Systems Department, USATU, K. Marx St., 12, Ufa, Russia, [email protected]

Keywords: Structuring content, information space, business process model

Abstract: This work considers the detection of general rules for structuring content by applying domain-specific metalanguages at the levels of the Chomsky hierarchy. The analytical processing of large volumes of unstructured data obtained from various enterprise information systems is considered, using Business Intelligence systems as an example.

1. INTRODUCTION The data accumulated during the operation of an automated information system can become valuable information only when they are readily available and interpreted in the form of knowledge. Intelligent information retrieval systems and content analytics tools provide quick access to information resources distributed over various nodes of a computer network and presented in various formats. Many enterprise content management systems have built-in information search mechanisms, but they provide identification of resources and traceability of their links only within one system. In this regard, the detection of general rules for analytical data processing and content structuring, the delimitation of the structuring area, and the identification of necessary precedents on the basis of models of the executed business processes are relevant tasks. To define the goals of analytical processing of data from several systems at once, one can use subject-oriented natural languages with a certain degree of formalization: a syntax fixed down to the dictionary and natural rules for constructing sentences (carriers of the corresponding syllogisms), and a semantics determined by models of business processes. Such a system has to be external with respect to the other systems and contain a search subsystem and a business analytics subsystem with the corresponding subject-oriented formal languages. BI technologies, which allow analyzing large volumes of information, modeling the outcomes of various courses of action and tracing the results of decisions, can serve as an example of such systems. In this work the question of analytical processing of large volumes of unstructured data obtained from various enterprise information systems is considered, using Business Intelligence systems as an example.
The purpose of this work is the detection of general rules for structuring content according to the applied domain-specific metalanguages at the levels of the Chomsky hierarchy. Achieving this goal requires the following tasks: representing a model of business processes according to the levels of the Chomsky hierarchy, and creating a set-theoretic model of the content of the information space corresponding to the Chomsky hierarchy.

2. THE DESCRIPTION OF BUSINESS PROCESSES ACCORDING TO THE LEVELS OF THE CHOMSKY HIERARCHY The Chomsky hierarchy is a classification of formal languages and formal grammars into four types according to their complexity. It was proposed by the linguist Noam Chomsky, professor at the Massachusetts Institute of Technology [1].


A formal grammar G, according to Chomsky, can be presented as an ordered quadruple: G = <VT, VN, P, S>, (1) where VT is the alphabet (set) of terminal symbols (terminals); VN is the alphabet (set) of non-terminal symbols (non-terminals);

V VT VN – dictionary G, and VT VN   , P – a final set of production (rules) of grammar, P V  V * , S – an initial symbol (source). Here V * – a set of all lines over the alphabet of V, and V  – a set of nonempty lines over the alphabet of V. According to Chomsky, formal grammars are divided into four types: 1. Type 0 – unlimited. Unlimited grammars – grammars with phrase structure that is one and all formal grammars belong to type 0 on Chomsky's classification. Such grammars have no practical application owing to the complexity. 2. Type 1 – context-dependent. Context-dependent grammars and not shortening grammars belong to this type. These classes of grammars can be used in the analysis of texts in natural languages, however at creation of compilers are practically not used owing to the complexity. 3. Type 2 – context-free. Context-free grammars belong to this type. Context-free grammars are widely applied to the description of syntax of computer languages. 4. Type 3 – regular. Regular grammars (automatic) – the simplest of formal grammars belong to the third type. They are context-free, but with limited opportunities. Regular grammars are applied to the description of the elementary designs: identifiers, lines, constants, and also languages of the assembler, command processors, etc. Let's consider the description of business process on the example of the chart in the notation of structural modeling of IDEF0. IDEF0 represents the methodology of functional modeling and the graphic notation intended for formalization and the description of business processes. Thus, the graphic notation defines the alphabet, and rules of creation of model in this notation grammar. From the point of view of Chomsky's grammar, syntax of IDEF can be referred to context-free grammars. The chart in the notation of IDEF0 represents two structures: the first – the focused graph representing formal structure of model and the glossary defining its semantic description. 
IDEF = (Gr, Gl), (2) where Gr is the directed graph and Gl is the glossary of the business process model. Gr = (V, A), (3) where V is a nonempty set of vertices, the inputs and outputs of business processes, and A is a set of edges, the functions transforming input resources into output ones. We investigate the properties of the "input-output" relation between chart elements. In this case the resources (inputs and outputs of functional blocks) act as vertices of the graph, and the arrows as functions transforming inputs into outputs. Let us posit that the vertices of the graph are objects of a category and the arrows are morphisms. As functional modeling is based on the principle of decomposition, the representation possesses the following properties:
1. Associativity: (A11∘A12)∘A13 ≡ A11∘(A12∘A13), i.e. variability is possible in decomposition, but the result does not depend on which functional blocks are decomposed;


2. Non-commutativity: A11∘A12 ≢ A12∘A11, i.e. the result of a process depends on the sequence of its child processes;
3. Resources can be both inputs and outputs of functional blocks.
A vertex of the graph of a model in the IDEF0 notation has a unique hierarchical identifier; therefore, the elements of the chart are identified and their links can be traced. The search problem reduces to selecting the necessary chart element along a known path. Let us present the process of extracting data from the information space for intelligent analysis according to the levels of the Chomsky hierarchy (Fig. 1).

Fig. 1. Mapping the process of composing content-structuring rules onto the levels of the Chomsky hierarchy

To form criteria for structuring content and extracting business process data from models, it is necessary to select business rules, describe them in attributive form and present them as conditions. The description of a business process in a natural language, representing the process model, corresponds to level zero of the Chomsky hierarchy. G = <VT, VN, P, S>, (4) where VT is the finite alphabet of terminal symbols (the description of the business process in a natural language); VN is the finite alphabet of non-terminal symbols (sentences); P is a finite set of generation rules (rules for constructing sentences); S is the initial symbol (axiom). To the first level, context-sensitive languages, one can assign business rules, i.e. the description of the business process in the language of a concrete subject domain, for example in the form of drawings or a sequence of design-technology operations: G1 = <VT1, VN1, P1, S1>, (5) where VT1 is the set of terms of instructions, for example design-technology operations and drawings of the concrete subject domain; VN1 is the finite alphabet of non-terminal symbols of the concrete business process;


P1 is a finite set of generation rules; S1 is the initial symbol (axiom). The transition from level zero to the first level is carried out by describing the process in terms of the concrete subject domain, with the use of its specific jargon. This allows reducing the alphabet and simplifying the syntax while making the semantics of the language more specific. The attributive model of the process, defining the description of the modeled process in the form of business process models, corresponds to the level of context-free languages: G2 = <VT2, VN2, P2, S2>, (6) where VT2 is the set of modeling language elements; VN2 is the finite alphabet of non-terminal symbols of the modeling language; P2 is a finite set of model construction rules; S2 is the initial symbol (axiom). The transition from context-sensitive to context-free languages is carried out by modeling the business process with the tools of the structural or object-oriented approach. A business process presented as a structural or object model uses the alphabet and syntax of a concrete modeling language to describe the process independently of the subject domain. Rules for structuring content can be correlated with the third level of the hierarchy, the level of instructions. In the transition to a lower level of the hierarchy, the syntax of the language is reduced (made more specific): the description of the process becomes more concrete, and the sense of the rules narrows to concrete instructions. For the third level of the hierarchy, to which the instructions of regular languages or programming languages correspond, we define: G3 = <VT3, VN3, P3, S3>, (7) where VT3 is the set of programming language elements; VN3 is the finite alphabet of non-terminal symbols of the programming language; P3 is a finite set of interaction rules; S3 is the initial symbol (axiom). The transition from context-free languages to regular languages is carried out during programming and implementation of the business process with the use of automated systems and high-level programming languages.
Each of these sets of languages is a subset of the previous one (G3 ⊂ G2 ⊂ G1), which provides transfer of semantics between the hierarchy levels. However, due to the reduction of syntax, some information can be lost and inaccuracies introduced during implementation.
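The grammar quadruple G = <VT, VN, P, S> can be illustrated directly. The sketch below encodes a toy context-free grammar (our own example of balanced parentheses, not taken from the paper) and checks derivability by breadth-first search over sentential forms:

```python
from collections import deque

# A formal grammar G = <VT, VN, P, S>, here a toy context-free (type 2)
# grammar for balanced parentheses.
VT = {"(", ")"}
VN = {"S"}
P = [("S", ""), ("S", "(S)"), ("S", "SS")]   # productions A -> w
S = "S"

def derives(target, max_len=8):
    """Breadth-first search over sentential forms: does S =>* target?"""
    seen = {S}
    queue = deque([S])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        for lhs, rhs in P:
            i = form.find(lhs)
            while i != -1:                   # rewrite every occurrence of lhs
                new = form[:i] + rhs + form[i + 1:]
                # Prune forms whose terminal part already exceeds the target.
                if (len(new.replace("S", "")) <= len(target)
                        and len(new) <= max_len and new not in seen):
                    seen.add(new)
                    queue.append(new)
                i = form.find(lhs, i + 1)
    return False

print(derives("(())"), derives("(()"))
```

A regular (type 3) grammar would restrict every right-hand side in P to the forms aB or a, which is what makes level-3 instructions recognizable by finite automata.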

3. EXAMPLE OF STRUCTURING SUBJECT DOMAIN DATA WITH CONCRETE SOFTWARE APPLICATIONS Let us review an example of structuring data on the basis of the IBM Cognos BI system. IBM Cognos Business Intelligence is a complete solution for building an organization-scale information and analytical system. To carry out analysis with IBM Cognos BI it is necessary to formulate a requirement in a natural language. The requirement can be formulated by a manager or an analyst. In order for the information requirement to be realized with IBM Cognos BI, it must be formulated in a context-free language, i.e. issued in the form of a set of metadata and data sources for the analysis. Metadata in BI systems is the mechanism by means of which the user shows the BI system how the data storage system is organized and how to work with these data. Metadata is not always just information about the data source; in some cases the internal storage mechanisms of the BI system (for example, OLAP cubes) act as metadata. For example, metadata in Cognos BI is a formalized description of the structure of the data storage system.


To represent an information requirement in a context-free language in IBM Cognos BI terms, several main stages must be executed: 1. Create a connection to the data source; 2. Create a description of the data source, i.e. create the metadata; 3. Create and publish a metadata package on the IBM Cognos BI server; 4. Create the report [4]. Data sources are described using the Data Source Connections tool, and metadata are described using the built-in graphic-analytical modeling language similar to IDEF1X. To create metadata in IBM Cognos BI, the stand-alone program IBM Cognos Framework Manager is used. The next stage is the description of the form of presentation of the results in the report. Using the terms of the subject domain, the report developer forms the dimensions. After the metadata are created, it is necessary to create a metadata package and publish it on the IBM Cognos BI server. The IBM Cognos BI server, in this case, transforms the information requirement, formulated in a natural language by the analyst and translated into the context-free language of metadata by the report developer, into the concrete regular language of the data storage. By means of this language, requests for data extraction and reports for data presentation are formed. As a result of using the IBM Cognos BI system to satisfy an information requirement formulated in a natural language, the system produces a report formulated in a context-sensitive language. The analyst can then interpret the report to provide the results in natural language form as well.

4. CONCLUSION In the course of the operation of the many automated information systems at an enterprise, a large amount of data accumulates that can become valuable information only when interpreted in the form of knowledge, in the form of reports or in some other form. BI systems allow producing such reports for decision makers by transforming an information requirement formulated in a natural language into a report created with a context-sensitive language of the subject domain. The transition between hierarchy levels, in this concrete case, is carried out by describing the subject domain in the form of metadata and the data storage set. In this work, this approach is illustrated with the IBM Cognos BI system. As a result of describing the information requirement with the built-in graphic-analytical language, the analyst can receive the answer in the form of a parameterized report.

References
1. Hopcroft J., Motwani R., Ullman J. Introduction to Automata Theory, Languages, and Computation (Russian translation). M.: Williams, 2002.
2. Chomsky N. Aspects of the Theory of Syntax. Russian translation edited and with a preface by V.A. Zvegintsev. M.: Moscow University Press, 1972.
3. Kulikov G.G., Shilina M.A., Startcev G.V., Barmin A.A. "Structuring of subject area of technical university using process approach and semantic identification" (in Russian). Vestnik UGATU. 2014. No. 4 (65). pp. 115-124.
4. Kovalenko K. Modern Business Intelligence (BI) systems on the example of IBM Cognos BI. Available at: http://habrahabr.ru/post/248829



Hashing Approach to Erdős-Gyárfás Conjecture

Artem Ripatti1
1Bashkir State Pedagogical University named after M. Akmullah, Oktyabrskoy Revolutsii st. 3a, 450000, Ufa, Russia, [email protected]

Keywords: cubic graphs, cycles, hashing

Abstract: We present an algorithm for exhaustive search of k-regular graphs with a required property. The algorithm is based on reducing a graph to a pseudocanonical form using hashing and storing this form in a hash table, so as to avoid branching from graphs that are isomorphic to already considered ones. A modification of the hash table for efficient storage on an SSD is also described. We apply our algorithm to the Erdős-Gyárfás conjecture and verify that there is no cubic graph on n ≤ 54 vertices without a cycle of length 2^m.

1. INTRODUCTION The Erdős-Gyárfás conjecture asserts that every graph with minimum degree at least 3 contains a cycle whose length is a power of 2. The conjecture was made by Paul Erdős and András Gyárfás in 1995 and is still far from resolution. It is also open for cubic graphs, i.e. for regular graphs of degree 3. The conjecture has been proved only for some classes of graphs, for example for planar cubic 3-connected graphs [1]. Using computer search, Gordon Royle and Klas Markström showed that any counterexample must have at least 17 vertices, and a cubic counterexample must have at least 30 vertices [2]. They also found 4 minimal cubic graphs of order 24 without cycles of length 4 or 8 (but having a cycle of length 16). Geoffrey Exoo constructed a cubic graph of order 78 without cycles of length 4, 8 or 16 [3]. He also mentioned an unpublished result of Markström that every cubic counterexample must have at least 54 vertices. In this paper we present an algorithm for exhaustive search of graphs with required properties. Using this method we show that any cubic counterexample to the Erdős-Gyárfás conjecture must have at least 56 vertices. The paper has the following structure. In Section 2 we describe the algorithm. The data structure we used is presented in Section 3. Computational results are presented in Section 4.

2. ALGORITHM We are going to generate all possible k-regular graphs with some property P. The property must satisfy the following constraint: if a graph G does not possess the property P, then any graph G* having G as a subgraph does not possess the property P either. For example, P can be "graph G is planar", "graph G contains no cycles of length 3 or 7" or "graph G has no more than 20 vertices". Applying our algorithm to the Erdős-Gyárfás conjecture, we generate 3-regular graphs using the property P "graph G has no more than 54 vertices and no cycles of length 4, 8 or 16".
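The property P check for the Erdős-Gyárfás case reduces to testing a graph for simple cycles of length 4, 8 and 16. A straightforward DFS sketch, adequate for small graphs and not the authors' optimized code:

```python
def has_cycle_of_length(adj, L):
    """True if the undirected graph (dict: vertex -> set of neighbours)
    contains a simple cycle of length exactly L."""
    def dfs(start, v, depth, visited):
        if depth == L:
            return start in adj[v]          # close the cycle back to start
        for w in adj[v]:
            # Extend only with vertices larger than start, so each cycle
            # is searched once, from its minimal vertex.
            if w > start and w not in visited:
                visited.add(w)
                if dfs(start, w, depth + 1, visited):
                    return True
                visited.remove(w)
        return False
    return any(dfs(s, s, 1, {s}) for s in adj)

def property_P(adj, n_max=54):
    """Property P from the text: at most n_max vertices and no cycles of
    length 4, 8 or 16."""
    return len(adj) <= n_max and not any(
        has_cycle_of_length(adj, L) for L in (4, 8, 16))

# K4 (cubic, contains a 4-cycle) fails P; the 6-cycle C6 satisfies it.
K4 = {v: {w for w in range(4) if w != v} for v in range(4)}
C6 = {v: {(v - 1) % 6, (v + 1) % 6} for v in range(6)}
print(property_P(K4), property_P(C6))   # → False True
```

Note that P is hereditary in the required sense: adding edges or vertices can only create new cycles, never destroy existing ones.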

2.1. Backbone of the algorithm The algorithm starts with a single-vertex labeled graph. At every step it considers the vertex with the minimal label that has degree less than k and adds an edge connecting this vertex to either some other vertex with a larger label or a new vertex. The algorithm tries all possibilities for inserting the edge. If the current graph has no vertex of degree less than k, we have found an example; the algorithm writes the found example to disk and backtracks. After every edge


insertion it checks the current graph for property P. If the graph does not possess the property P, the algorithm backtracks; otherwise it recursively processes the current graph in the same manner. In the end, all possible k-regular labeled graphs with property P will have been constructed. But we are interested in unlabeled graphs, and for every unlabeled graph this algorithm generates all labeled graphs from its isomorphism class. To avoid this we use an additional hashing check, which we describe below. This check is inserted into the algorithm immediately after the check of the graph for property P: if the current graph fails this check, the algorithm also backtracks.

2.2. Hash table check We are going to rearrange the vertices of the labeled graph being checked in a specific order. Using a procedure described below, we calculate a hash value for every vertex of the graph and then relabel all vertices in order of increasing hash values. We call the relabeled graph a pseudocanonical form and use it to check graphs for isomorphism. Obviously, two labeled graphs G1 and G2 are isomorphic if they have the same pseudocanonical form. However, a labeled graph G may have several pseudocanonical forms when some vertices have identical hash values and can be sorted in different orders, i.e. two isomorphic labeled graphs G1 and G2 may produce different pseudocanonical forms. We suppose that this case is very rare. While the algorithm runs, we maintain a hash table where we store the pseudocanonical forms of all considered graphs. After every edge insertion and check of the current graph G for property P, we build its pseudocanonical form G' and look it up in the hash table. If the hash table already contains G', it means that we have already processed a graph isomorphic to the current graph G, so we backtrack; otherwise we store G' in the hash table and recursively process G. Remark 1: even if we process only one graph from each isomorphism class, the algorithm is still correct, i.e. all possible k-regular labeled graphs with property P will be constructed. Indeed, the algorithm described in the previous subsection produces all partial states of performing BFS (breadth-first search) over all graphs we are going to construct. But BFS fills the whole graph regardless of the start position. So, every final graph that can be constructed from one partial BFS state will also be constructed from any other partial BFS state isomorphic to the current one. Remark 2: after this new cut some isomorphism classes become unreachable, i.e. the cut reduces the search space and the size of the hash table. For the case when two isomorphic graphs produce different pseudocanonical forms, we store both of them in the hash table and process both of them, although they are isomorphic. Again, we suppose this happens very rarely, so the overhead is insignificant. Remark 3: our check never prunes a graph unless a graph isomorphic to it was really considered before. This makes the search exhaustive.
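The pseudocanonical-form check of section 2.2 can be sketched as follows. We assume `hashes` contains one value per vertex (e.g. from the procedure of section 2.3); names are ours, and an in-memory set stands in for the on-SSD hash table:

```python
# Build the pseudocanonical form: relabel vertices in increasing order of
# their hash values and return the relabeled edge set as a hashable tuple.
def pseudocanonical(adj, hashes):
    order = sorted(range(len(adj)), key=lambda v: (hashes[v], v))
    relabel = {old: new for new, old in enumerate(order)}
    return tuple(sorted(tuple(sorted((relabel[a], relabel[b])))
                        for a in range(len(adj)) for b in adj[a] if a < b))

seen = set()   # stands in for the hash table of pseudocanonical forms

def already_processed(adj, hashes):
    form = pseudocanonical(adj, hashes)
    if form in seen:
        return True    # an isomorphic graph was already processed: backtrack
    seen.add(form)
    return False
```

Note that ties between equal hash values are broken by the original label, so isomorphic graphs with many hash collisions may still yield different forms; as the remarks above state, this only costs duplicate work and never prunes incorrectly.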

2.3. Calculating hash values This procedure is similar to the well-known 1-dimensional Weisfeiler-Lehman method [4], where some value is iteratively calculated for every vertex. Finally, this value represents the surrounding structure of the graph for every vertex, and relabeling all vertices in order of increasing values produces, with very high probability, the same graph for every graph in an isomorphism class. Instead of the classical approach, we use polynomial hashing to find the hash values of vertices, because it easily allows the calculations to be parallelized. Polynomial hashing is well known for hashing sequences of integers and strings (see the Rabin-Karp algorithm [5] and the Rabin fingerprinting scheme [6]). We believe that exactly this hashing approach could have been invented earlier, but we did not find any papers about it.


We iteratively calculate the hash values. Let h_i(v) be the hash value of vertex v on iteration i. For the initial iteration we set h_0(v) = deg v. On the following iterations, for every vertex we calculate the hash value as a polynomial hash of the hash values of the neighbor vertices from the previous iteration, sorted in increasing order. For example, if a vertex v has z neighbors (u_1, u_2, …, u_z, sorted in increasing order of their hash values), for iteration i we have:

h_i(v) = (h_{i-1}(u_1)·p^{z-1} + h_{i-1}(u_2)·p^{z-2} + … + h_{i-1}(u_{z-1})·p + h_{i-1}(u_z)) mod q

A good value for p is a large prime. For q we use 2^32 or 2^64; then we may allow overflow of 32-bit or 64-bit integers (on the x86 architecture) to get the right remainders without division. We do the same number of iterations for all graphs, for example, equal to the maximal number of vertices in the considered graphs. Remark: after relabeling the graph and getting its pseudocanonical form, we completely forget the hash values inside every vertex and store into the hash table only the pseudocanonical form itself (a labeled graph together with all its edges).
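The iteration above can be sketched as follows (the prime p and the choice q = 2^64 are per the text; the particular prime and names are our illustrative assumptions):

```python
# Iterative polynomial vertex hashing in the spirit of 1-dimensional
# Weisfeiler-Lehman, as described in section 2.3.
P = 1_000_000_007          # a large prime multiplier p (illustrative choice)
MASK = (1 << 64) - 1       # q = 2^64: masking replaces the mod operation

def vertex_hashes(adj, iterations):
    """adj: list of neighbor lists; returns the final hash of every vertex."""
    h = [len(neighbors) & MASK for neighbors in adj]   # h_0(v) = deg(v)
    for _ in range(iterations):
        nxt = []
        for v, neighbors in enumerate(adj):
            acc = 0
            # neighbors' previous-iteration hashes, sorted in increasing order,
            # combined by Horner's rule: acc = acc * p + h_{i-1}(u) (mod q)
            for hu in sorted(h[u] for u in neighbors):
                acc = (acc * P + hu) & MASK
            nxt.append(acc)
        h = nxt
    return h
```

Relabeling vertices in increasing order of the returned values yields the pseudocanonical form; structurally equivalent vertices (e.g. the two endpoints of a path) receive equal hashes.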

3. DATA STRUCTURE While the algorithm runs we need to maintain the hash table, which may grow very large. RAM is very costly for that, and HDD is very slow, so we decided to use an SSD for our purposes. In this section we describe our implementation of the hash table, tailored to storing the data on an SSD. It is well known that while an SSD has no restrictions on data reading, it has limits on data rewriting (a performance degradation phenomenon called write amplification). Moreover, an SSD stores data in 4 KB blocks, so if we write a small portion of data to disk, we actually rewrite the whole 4 KB block. In our implementation we write large portions of data to disk (blocks of 64 MB), and no part of the hash table already on the SSD is ever rewritten. Our data structure is based on the classical hash table with separate chaining. The hash table is split into 3 parts: the Buckets table, the Buffer and Data on Disk (see Fig. 1).

Fig 1. Structure of the hash table

All entries are grouped into chunks of 1 KB (the best size would be 4 KB, but it requires more RAM, in which we were limited). Every chunk contains several entries, their count, and a link to the next chunk in the chain. For our problem the entries are labeled graphs of degree at most 3 and order at most 54; every entry has size 162 bytes, so we can store 6 entries per chunk together with 22 bytes for the number of entries in the chunk and the number of the next chunk in the chain. All chunks in the Buffer and Data on Disk are fully filled by entries; chunks in the Buckets table may be not full. Some links are null, meaning that the current chunk is the last in its chain. The Buckets table contains exactly S chunks, the Buffer contains up to T chunks, and Data on Disk is limited only by the SSD volume. A hash function H maps the set of all k-regular labeled graphs into integers from the set of hashes {0, …, S-1}. In our implementation we used the polynomial hashing described above as the hash function. All entries inside any chunk, and in all chunks that can be reached by following the links, have the same hash. The hash table works as follows. For every find query we calculate the hash of the key and consider the chunk in the Buckets table corresponding to the hash value. We iterate over all entries of the current chunk and compare them with the key. Having checked all entries in the current chunk, we follow the link to the next one, and so on, until we either find the required key or reach the end of the chain of chunks. For every insert query we calculate the hash of the key and consider the corresponding chunk in the Buckets table. If this chunk is full, we try to move it into the Buffer. If the Buffer is full too (it currently contains exactly T chunks), we drop all these T chunks onto the disk, appending them to the end of Data on Disk. Now we copy the chunk from the Buckets table into the Buffer and clear the chunk in the Buckets table. Finally, we insert the key into the corresponding chunk in the Buckets table and finish performing the query. To reduce the number of read queries to the SSD we also added a Bloom filter [7] to the hash table: every find query is first checked against the Bloom filter, and only if it answers that the required entry may have been considered before do we ask the hash table itself.
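The insert path described above can be sketched in a few lines. This is a simplified in-memory model, not the authors' implementation: a Python list stands in for the append-only SSD region, and entry serialization, chaining links and the Bloom filter are omitted:

```python
# Minimal model of the insert path of the chunked hash table: S bucket
# chunks, a RAM buffer of at most T full chunks, and an append-only disk.
ENTRIES_PER_CHUNK = 6   # per the text: 6 entries of 162 bytes in a 1 KB chunk

class ChunkedHashTable:
    def __init__(self, num_buckets, buffer_limit):
        self.buckets = [[] for _ in range(num_buckets)]  # possibly-partial chunks
        self.buffer = []                 # full chunks waiting in RAM (up to T)
        self.buffer_limit = buffer_limit
        self.disk = []                   # append-only region (models the SSD)

    def insert(self, key, h):
        chunk = self.buckets[h]
        if len(chunk) == ENTRIES_PER_CHUNK:        # bucket chunk is full
            if len(self.buffer) == self.buffer_limit:
                self.disk.extend(self.buffer)      # drop all T chunks to disk
                self.buffer = []
            self.buffer.append(chunk)              # move full chunk to Buffer
            self.buckets[h] = chunk = []           # clear the bucket chunk
        chunk.append(key)                          # finally store the entry
```

Because chunks only ever move from the Buffer to the end of the disk region in 64 MB batches, nothing already on the SSD is ever rewritten.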

4. RESULTS The algorithm presented above was implemented in C++. We set the values S = 2^22 and T = 2^16, i.e. the size of the Buckets table was 4 GB and the size of the Buffer was 64 MB. The Bloom filter had 2^32 bits, or 512 MB of RAM. Data on Disk was stored on a 120 GB SSD. The program was run on an Intel Core i7 3.4 GHz machine with 12 GB RAM under the 64-bit Windows 7 operating system. We tested the correctness of our implementation by searching for minimal cubic graphs without cycles of lengths from the following sets: {4,8}, {4,6}, {4,8,10}, {4,6,10}, {4,6,8}, {4,8,12}. Answers for these tests can be found at http://ginger.indstate.edu/ge/Graphs/CYCLES/index.html. We also searched all cubic graphs up to 18 vertices for testing purposes; after eliminating isomorphic graphs from the result, we obtained the correct number of graphs. For small numbers of vertices we calculated the number of unique (non-isomorphic) graphs inside the hash table and found that less than 1% of them are "repeats". We then ran the program to search for cubic graphs without cycles of length 4, 8 or 16 and verified that there are no cubic graphs without cycles of length 2^m up to 54 vertices. The run used about 110 GB of SSD space and took about 18 days.

References
1. Heckman C.C., Krakovski R. "Erdős-Gyárfás Conjecture for Cubic Planar Graphs". Electron. J. Combin. 2013; 20(2), P7.
2. Markström K. "Extremal graphs for some problems on cycles in graphs". Cong. Numer. 2004; 171:2179-2192.
3. Exoo G. "Three graphs and The Erdős-Gyárfás Conjecture". arXiv. 2013; http://arxiv.org/abs/1403.5636.
4. Douglas B.L. "The Weisfeiler-Lehman Method and Graph Isomorphism Testing". arXiv. 2011; http://arxiv.org/abs/1101.5211.
5. Karp R.M., Rabin M.O. "Efficient randomized pattern-matching algorithms". IBM Journal of Research and Development. 1987; 31(2):249-260.
6. Rabin M.O. "Fingerprinting by Random Polynomials". Tech. Report TR-CSE-03-01, 1981.
7. Bloom B.H. "Space/Time Trade-offs in Hash Coding with Allowable Errors". Communications of the ACM. 1970; 13(7):422-426.


Markov Modelling for System Safety Purposes of Power-Plant

Abdulnagimov A.I., Arkov V.Yu. Automated Control and Management Systems, Ufa State Aviation Technical University, Russia, Ufa (e-mail: [email protected]) Automated Control and Management Systems, Ufa State Aviation Technical University, Russia, Ufa (e-mail: [email protected])

Keywords: system safety, gas turbines, FADEC, Markov modeling, fuzzy logic, fault development processes, hierarchy analysis method.

Abstract: Hierarchical fuzzy Markov modeling of fault development processes is proposed for the analysis of aircraft power-plant system safety. Markov modeling is applied to analyze the dynamics of the power-plant condition on the basis of fuzzy logic. Examples of fuzzy rules based on expert knowledge are given. A new indicator, the "degradation degree", is introduced; it allows analysis of the "distance" to a critical situation and the reserve of time for decision-making in flight.

1. INTRODUCTION Reliability, safety and durability are important properties of a modern aircraft, necessary for its effective in-service use. Statistically, in recent years the majority of aircraft incidents have been connected with the human factor and late fault detection in aircraft systems. In this regard, flight safety requirements rise every year, demanding the development of new methods and algorithms of control and condition monitoring/diagnostics for complex objects. The analysis of modern gas turbine engines has shown that most faults appear in the engine itself and in its FADEC (40-75% for the FADEC). The percentage of faults attributed to the FADEC depends on the achieved no-failure operation indicators of the engine and the FADEC. During the development of a FADEC, it is necessary to adhere to principles and methods guaranteeing the safety and reliability of the aircraft in use, so as to guarantee proper responses over the whole range of negative influences. Full information on its operation is necessary for complete control of the engine condition: 1. Reliable detection of a fault cause, providing decision-making on the technical condition of the gas turbine; 2. Reliable diagnosis and localization of faults and negative influences, necessary to determine the technical condition of the gas turbine for the purpose of reconfiguration and functioning of its subsystems (Kulikov G.G., 1988; Arkov VY, et al., 2002). In many cases, the hardware for condition monitoring of measurement channels can detect only catastrophic faults (breakage or short circuit), i.e. faults whose stochastic properties, observed over time in one object and over a set of objects, are indistinguishable. The criteria for warning messages on fault appearance are based mainly on deterministic logic operations and distinguish only two conditions: "operational" (fully operational) or "fault".
In this paper, hierarchical fuzzy Markov models for the quantitative estimation of gas turbine system safety, taking into account the monitoring of cause-effect relations, are considered. For this purpose, a transition from two-valued to fuzzy logic is considered for the estimation of degradation indexes and the analysis of fault development in the gas turbine and its FADEC.

2. HIERARCHICAL MODEL OF FAULT DEVELOPMENT PROCESSES IN GAS TURBINES Complex diagnostics of the power-plant is proposed to be carried out over its elements and units, using the hierarchy analysis method (Saaty TL, et al., 2001). First, decomposition into independent subsystems of various hierarchy levels is carried out according to structural features. In this way the power-plant and its systems are represented in the form of a hierarchy of elements and blocks. This approach enables cause-effect relationships to be identified on the hierarchical structure of the system. In Figure 1, the hierarchical structure of states of the power-plant is shown. The power-plant is represented as a hierarchical structure: a complex system consisting of subsystems and elements (units) with built-in test/monitoring functions, according to the distributed architecture. For this purpose, the power-plant may be decomposed into independent subsystems of various hierarchy levels according to structural and functional features, in the following way:
- Control and monitoring system (FADEC);
- Hydro-mechanical system (actuators);
- Fuel system;
- Start-up system;
- Lubricant oil system;
- Drainage system, etc.
The hierarchy analysis allows the use of a state model based on fault development, which enables the system state to be estimated at each level of the hierarchy. The mathematical model of states is represented as (S, G, F, L, R), where:
- S is the state vector,
- G is the hierarchy of system faults,
- F is the quantitative estimate of faults,
- L is the set of fault influence indexes,
- R is the system of mutual influence of faults.
The depth of the hierarchy G is referred to as h, and h = 0 for the root element of G. For G the following conditions are satisfied:
1. There is a splitting of G into level subsets h_k, k = 1, …, n.
2. If x belongs to h_k, then the elements subordinate to x belong to h_{k+1}, k = 1, …, n-1.
3. If x belongs to h_k, then the elements superior to x belong to h_{k-1}, k = 2, …, n.
For every x in G there is a weight function ω_x : x → [0,1] such that the values ω_x(y) over the elements y subordinate to x sum to 1.

The sets h_i are the hierarchy levels, and the function ω_x is a function of the priority of faults of one level with respect to the state of the power-plant element x. Notice that if x belongs to level h_k, then ω_x can be defined on the whole of h_{k+1} by setting it to zero for all faults in h_{k+1} which do not belong to x (Kulikov G.G., et al., 2012). The hierarchical FADEC model integrates:
- the functional structure (block diagram);
- the physical structure;
- the tree of states (state structure) of elements and units;
- the tree of failure influence indexes.
On the hierarchical model, the system of fault interference R, with the logical operations of disjunction and conjunction, is applied. Such a system of fault interference allows the state of the whole power-plant to be analyzed both from the bottom up to the top and from the top down to the bottom, and a deeper analysis to be carried out at various levels of decomposition of the control system using an intermediate state: degradation.

[Figure: hierarchy of states. Top: power-plant state; below, states of gas turbine engines 1…m with influence indexes ω1…ωm; below, states of the start-up system, FADEC, fuel system, lubricant oil system and other systems (indexes ω11…ω15, ωm1…ωmk); at the bottom, states of monitored parameters, of circuits of sensors and actuators, of control-and-condition-monitoring functions, and of monitored parameters such as presence of cuttings, oil overheat in turbine pedestals and compressors, oil-filter obstruction, etc.]

Fig.1. Hierarchical states structure of power-plant with failure influence indexes ωm

The state of an element or a system is proposed to be represented by three parameters {operational, degradation, fault}, see Tab. 1. In the operational state S = 0, while during a fault S = 1. The degradation degree ranges from 0 to 1. The extreme values 0 and 1 are defined according to the deterministic logic realized in the conventional FADEC (according to the design specifications for the system). The introduction of this intermediate state of "degradation" extends the informativeness of the conventional condition monitoring algorithms.

Table 1. Fuzzy representation of state
Operational: S = 0; Degradation: 0 < S < 1; Fault: S = 1

Based on the fault analysis and the hierarchy of states of the system, the degradation degree of each item or sub-unit is determined at each level (Figure 2). Fault states are classified via the degradation degree as "Negligible", "Marginal", "Critical" and "Catastrophic" (SAE ARP 4761). The degradation degree is estimated by the membership function S, which takes values in the range [0, 1]. The closer the degradation degree is to 1, the closer the system is to a critical situation. If the analysis of a system gives the state vector {0.1, 0.6, 0.3}, it is possible to assert that there is still a "distance" before complete fault (a critical situation). If the system state worsens with the appearance of new faults and gives the state vector {0, 0.3, 0.7}, then there remains a distance of 0.3 to a system crash. Thus the most informative indicator is the tendency (trend) of fault appearance, not the existence of degradation itself. Visual trend analysis provides an estimate of the time before a critical situation develops and thus allows early planning of the crew actions (Zadeh LA., 1962).

[Figure: fault states classified by degradation degree: Negligible (0), Marginal, Critical (0.5), Catastrophic (1)]

Fig.2. Estimation of the "degradation" state

Consider an example of the correspondence between the degradation degree and the operational state. At a degradation degree of 0.25, the system is capable of carrying out 75% of the demanded functions (50% at 0.5 degradation, 25% at 0.75, and 0% at 1, which is the unavailable state). Such a scale allows a "threshold" state to be defined, below which further operation is not allowed for safety reasons. Using the degradation degree, it is also possible to estimate the distance to a critical situation and the speed of approach to it (Figure 3).

[Figure: state S_h(x) versus time t, trending upward through the Negligible, Marginal, Critical and Catastrophic levels]

Fig.3. Trend of state dynamics during flight

Thus, the hierarchical model of fault development allows the power-plant to be decomposed into hierarchy levels for obtaining quantitative estimates of the degradation state and gradual faults. The hierarchy analysis allows the state model to be used to estimate the system state at each level of the hierarchy. The state is represented in the form of a vector with parameters {operational, degradation, fault}. Depending on the degradation degree, it is possible to estimate the operability of the object and the system safety.

3. FUZZY HIERARCHICAL MARKOV STATE MODELS The built-in monitoring system (BMS) is a subsystem for monitoring, diagnosis and classification of faults of the gas turbine and its systems. The existence of a fault corresponds to the logic state "1", its absence to the logic state "0". Such a state classification does not allow a "pre-fault" state to be established, fault development to be traced, or the degradation of the system and its elements to be determined. For a more detailed analysis, estimation of the intermediate state of degradation is proposed; for this purpose, the use of fuzzy logic is considered. Signals from sensors, as well as logic state parameters from the BMS, are transformed into linguistic variables during fuzzification when a crisp value arrives at the input of the fuzzifier. Let x be the state parameter of an element (for example, a sensor). It is necessary to define the fuzzy spaces of the input and output variables, as well as the terms for the FADEC sensors. All signals from sensors and actuators are transformed into linguistic variables by fuzzification.


This rule base is represented by a table, which is filled in with fuzzy rules of the following form (Zadeh LA., 1962):

R(1): IF (x_n = A_1) AND (x_n = B_1) THEN (y = T_1).   (1)

For example, if the sensor is in the operational condition, we have the following membership function values:

[μ_operational(n_1) = 1; μ_degradation(n_1) = 0; μ_fault(n_1) = 0].

Given a greater number of possible conditions (for example, a large number of elements and units of gas turbines), one can develop a discrete, ordered scale of state parameters (Figure 4).

[Figure: membership functions μ_T(x) of the terms «Operational» (T1), «Degradation» (T2) and «Fault» (T3) of the degradation degree, plotted against the number of element faults]

Fig.4. Discrete scale of degradation
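A discrete-ordered scale of this kind can be sketched with simple triangular membership functions. The breakpoints and shapes below are illustrative assumptions, not the paper's actual terms:

```python
# Triangular membership functions for the three terms of the degradation
# scale: T1 «Operational», T2 «Degradation», T3 «Fault» (shapes assumed).
def memberships(x):
    """x in [0, 1] is the degradation degree; returns (mu_T1, mu_T2, mu_T3)."""
    mu_op = max(0.0, 1.0 - 2.0 * x)       # T1 peaks at x = 0, vanishes at 0.5
    mu_fault = max(0.0, 2.0 * x - 1.0)    # T3 vanishes up to 0.5, peaks at 1
    mu_degr = 1.0 - mu_op - mu_fault      # T2 peaks at x = 0.5
    return mu_op, mu_degr, mu_fault
```

With this choice the three values always sum to 1, so a crisp degradation degree fuzzifies directly into a state vector {operational, degradation, fault}.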

For further analysis of the system, fault influence indexes are introduced at each level of the hierarchy, using the method of pairwise comparison as in the hierarchy analysis method. Quantitative judgements on the importance of faults are made for each pair of faults (Fi, Fj) and are represented by a matrix A = (a_ij) of size n×n, where a_ij is the relative importance of fault Fi with respect to Fj. The elements a_ij are defined by the following rules: 1. If a_ij = α, then a_ji = 1/α, α ≠ 0. 2. If fault Fi has the same relative importance as Fj, then a_ij = 1 and a_ji = 1; in particular, a_ii = 1 for all i. Thus, a reciprocal (back-symmetric) matrix A is obtained:

        | 1       a_12    …  a_1n |
A  =    | 1/a_12  1       …  a_2n |
        | …       …       …  …    |
        | 1/a_1n  1/a_2n  …  1    |

After the quantitative judgements about the fault pairs (Fi, Fj) are represented numerically by the numbers a_ij, the problem is reduced to assigning the n possible faults F1, F2, …, Fn a corresponding set of numerical weights ω1, ω2, …, ωn which reflect the recorded judgements about the condition of the gas turbine subsystem. If the expert judgements are perfectly consistent across all comparisons, i.e. a_ik = a_ij·a_jk for all i, j, k, then the matrix A is called consistent. If the diagonal of matrix A consists of units (a_ii = 1) and A is a consistent matrix, then under small changes in a_ij the greatest eigenvalue λ_max stays close to n, and the other eigenvalues are close to zero. Based on the matrix A of pairwise comparison values of faults, the vector of priorities for fault classification is obtained as the vector ω satisfying the criterion:

A max ,


where ω is the eigenvector of matrix A and λ_max is the maximum eigenvalue, which is close to the matrix order n. This provides uniqueness of ω, and also that Σ_{i=1}^{n} ω_i = 1.

Note that small changes in a_ij cause only a small change in λ_max, so the deviation of λ_max from n is a measure of consistency. It allows the proximity of the obtained scale to the basic scale of relations to be estimated. Hence, the consistency index (λ_max − n)/(n − 1) is considered an indicator of "proximity to consistency". Generally, if this number is not greater than 0.1, then one can be satisfied with the judgements about the fault importance. At each level h_i of the hierarchy, for the n elements of the gas turbine and its subsystems, the state vector {operational, degradation, fault} is determined taking into account the fault influence coefficients: S(x_n) = ω(x_n)·μ(x_n), where μ(x_n) is the membership function value of the element x_n at level h_i (its degradation degree). To determine the state of an element/unit at a higher level of the hierarchy from the input states of the lower level, one stage of defuzzification is performed. The output value S(x_n) is presented in the form of a determined state vector with parameters {operational, degradation, fault}. The state estimation begins at the bottom level of the hierarchy. The description of the state set is obtained by means of fuzzification and defuzzification with the use of the logical operations of disjunction (summation) and conjunction (multiplication). The use of the hierarchical representation allows a small number of "short" fuzzy rules to adequately describe multidimensional dependencies between inputs and outputs. Within the fuzzy hierarchical model, fault development processes are considered with the use of Markov chains. Such dynamic models allow the change of element states in time to be investigated.
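The priority weights ω and the consistency index above can be sketched by power iteration on the comparison matrix. This is a plain-Python stand-in for an eigensolver; the 3×3 matrix in the usage below is an illustrative, perfectly consistent example:

```python
# Derive fault priority weights w (the principal eigenvector of a reciprocal
# pairwise comparison matrix A, normalized to sum 1) and the consistency
# index (lambda_max - n)/(n - 1) by power iteration.
def priority_vector(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]            # converges to the eigenvector of A
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n   # estimate of lambda_max
    ci = (lam - n) / (n - 1)              # consistency index
    return w, ci
```

For a consistent matrix such as A = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]], the weights come out proportional to (4, 2, 1) and the consistency index is essentially zero, matching the criterion discussed above.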
Fault development can include not only single faults and their combinations, but also sequences (chains) of so-called "consecutive" faults (Kulikov G, et al., 2010; Breikin TV, et al., 1997). During FADEC analysis, the classification, formalization and representation of condition monitoring and fault diagnosis processes for the main subsystems of gas turbines (control, monitoring, fuel supply, etc.) is carried out. These processes are represented in the form of Markov chains, which allow the state dynamics of the power-plant to be analyzed (Breikin, et al., 1997). The transition probability matrix of a Markov chain for modeling faults and their consequences has a universal structure for all levels of system decomposition: the system as a whole (power-plant), construction units, and elements. The hierarchical Markov model is built in the generalized state space, where physical parameters and binary fault flags are used for the estimation of the state vector of an element, a unit and the power-plant. The state vector includes three parameters {operational, degradation, fault}, which allow the fault development and degradation process of the system to be tracked. During FADEC diagnosis, mostly the area of single faults is considered; the proposed Markov model makes it possible to represent the system with multiple faults and their sequences. The top state level of a system reflects, in aggregated form, the information on faults at the lower state levels. The elements' states at the levels of the hierarchy depend on the previous values of the state parameters of the elements, the values of the membership functions, and the fault influence indexes. For the estimation of the transition probabilities between the states of a Markov chain, it is required to calculate the relative frequencies of events S_i → S_j over a given interval of time. In particular, at the top level, the number of such events during one flight (Figure 5) can be of interest.


PSSijijProb{} 1 2 3

1. Operational P13 state P11 P12

2. Degradation P21 P22 P23

3. Fault leading to P31 P32 P33 engine stop

Engine restart in flight

Fig.5. Transition probability matrix of power-plant during one flight
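The relative-frequency estimation of the transition probabilities described above can be sketched as follows; the state sequence in the usage example is illustrative data, not flight statistics:

```python
# Estimate the transition probability matrix of the three-state chain
# (0 operational, 1 degradation, 2 fault/engine stop) from an observed
# state sequence by relative frequencies of events S_i -> S_j.
def transition_matrix(states, n_states=3):
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):   # every consecutive pair of states
        counts[a][b] += 1
    P = []
    for row in counts:
        total = sum(row)
        # normalize each row; rows with no observations stay all zero
        P.append([c / total if total else 0.0 for c in row])
    return P
```

For example, the sequence operational, operational, degradation, degradation, fault, operational (the last step being a restart in flight) yields P11 = P12 = 0.5 and P31 = 1.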

The most important events during flight are an emergency engine shutdown and the possibility of an engine restart. For the probability estimation of such events, statistics on the whole fleet of engines of the same type are required. To realize such estimation methods, flight information must be available for each aircraft and power-plant; such information should be gathered and stored in a uniform format and be available for processing. Modern information technologies open possibilities for such research. To analyze the fault development processes of a single FADEC, it is possible to use the results of automated tests at a hardware-in-the-loop test bed with modeling of various faults and their combinations. In any case, to obtain reliable statistical estimates one needs a representative sample of a rather large amount of data. Given statistics on the whole fleet of engines over several years, it is possible to build empirical estimates of the probabilities of errors of the first and second kind. Consider an example. In Figure 6, the FADEC state estimation with faults is presented on the basis of the degradation degree of the elements. At the 10th level the BMS detected a measurement fault in the form of a break of the first coil of parameter n11 (shaft speed sensor). On the basis of the fuzzy rule R(2), the measurement state parameters of n1 in channel A are characterized by the three values:

R(2): IF (n_11 = A_1) AND (n_12 = A_2) THEN [μ_T1(n11) = 0.2; μ_T2(n11) = 0.7; μ_T3(n11) = 0.1].

The measurement state in channel B is defined as μ_T1(n12) = 1, μ_T2(n12) = 0, μ_T3(n12) = 0, because no faults were detected.
At the 9th level, the state of sensor n is obtained by multiplying the vector of state parameters by the fault influence indexes (both equal to 0.5 here, see Fig. 6):

S_o(n11, n12) = 0.5·μ_T1(n11) + 0.5·μ_T1(n12) = 0.5·0.2 + 0.5·1 = 0.60;
S_d(n11, n12) = 0.5·μ_T2(n11) + 0.5·μ_T2(n12) = 0.5·0.7 + 0.5·0 = 0.35;
S_f(n11, n12) = 0.5·μ_T3(n11) + 0.5·μ_T3(n12) = 0.5·0.1 + 0.5·0 = 0.05.

The state of an element at a higher level is calculated by multiplying the current state by the fault influence indexes of the faulty elements. At the 8th level, the state of the two sensors n and T after similar calculations becomes {0.78; 0.19; 0.03}, which indicates system degradation in the part controlling fuel consumption.
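The level-wise aggregation used in this example can be sketched in one function; the helper name is ours, and the weights 0.5 are the influence indexes from Fig. 6:

```python
# Parent state vector {operational, degradation, fault} as the influence-
# weighted sum of the child membership vectors, as in the example above.
def aggregate(children, weights):
    """children: list of [op, degr, fault] vectors; weights: influence indexes."""
    return [sum(w * c[k] for c, w in zip(children, weights)) for k in range(3)]
```

Applying it to the faulty channel-A measurement {0.2, 0.7, 0.1} and the healthy channel-B measurement {1, 0, 0} with weights 0.5 reproduces the sensor state {0.6, 0.35, 0.05} computed above.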


For the estimation of the state of the fuel consumption control function, the "OR" operation is also used. The state of the actuator of the fuel consumption control circuit is characterized by the parameters {0.66; 0.34; 0}. The state of the FADEC is characterized by the fault of the fuel consumption control function or the state of the guide vanes control function. Using the "OR" operation, the state of the FADEC is determined as {0.66; 0.34; 0}. In this example, the whole system is considered operational, although partial degradation is observed, which does not influence the system operability. At present, a necessary condition for the realization of intellectual algorithms is the complete development of distributed intellectual control models focused on control optimization, forecasting and system safety.

[Figure: four-level hierarchy of state vectors {operational, degradation, fault}. Level 10: measurement states of α, n11, n12 and T in channels A and B, with influence indexes D1…D6 = 0.5; the faulty n11 measurement has state {0.2 0.7 0.1}, the others {1 0 0}. Level 9: states of sensor α {1 0 0}, sensor n {0.6 0.35 0.05} and sensor T {1 0 0}, with indexes C2 = 0.55, C3 = 0.45. Level 8: state of sensors n and T {0.78 0.19 0.03}; state of the actuator of the fuel consumption control circuit {0.66 0.34 0}. Level 1: state of the fuel consumption control function A(B) {0.66 0.34 0}, state of the guide vanes control function A(B) {1 0 0}; state of FADEC {0.66 0.34 0}]

Fig.6. Example of hierarchical estimation of state parameters

6. CONCLUSIONS

A technique for determining the state parameters of FADEC and its systems on the basis of fuzzy logic and Markov chains is proposed. This technique can be used during flight or in maintenance on the ground. The hierarchical fuzzy Markov model makes it possible to decompose the power plant for quantitative estimation of degradation states and gradual faults. The analysis of hierarchies makes it possible to use a state model based on fault development processes which estimates the power plant state at each level of the hierarchy. The proposed "degradation degree" indicator provides an objective quantitative estimate of the current state, which can be used as the "distance" to a critical situation and as the reserve of time for decision making in flight. This indicator is defined on the basis of a discrete ordered scale and fault influence indexes, which makes it possible to detect about 30 % of gradual faults in a gas turbine and its systems at the stage of fault development.

References
1. Arkov V.Y., Kulikov G.G., Breikin T.V. (2002) Life cycle support for dynamic modelling of gas turbines. Prepr. 15th Triennial IFAC World Congress, Barcelona, Spain, pp. 2135-2140.
2. Breikin T.V., Arkov V.Y., Kulikov G.G. (1997) On stochastic system identification: Markov models approach. Proc. 2nd Asian Control Conf. ASCC'97, pp. 775-778.
3. Breikin T.V., Arkov V.Y., Kulikov G.G. (2006) Application of Markov chains to identification of gas turbine engine dynamic models. International Journal of Systems Science, 37(3), pp. 197-205.
4. Kulikov G.G. (1988) Principles of design of digital control systems for aero engines. In: Cherkasov B.A. (ed.), Control and automatics of jet engines. Mashinostroyeniye, Moscow, pp. 253-274.

1st International DSPTech Conference on Technologies of Digital Signal Processing and Storing

5. Kulikov G., Arkov V., Abdulnagimov A. (2010) Markov modelling for energy efficient control of gas turbine power plant. Proc. IFAC Conf. on Control Methodologies and Technology for Energy Efficiency, Faro, Portugal, pp. 63-67.
6. Kulikov G.G., Arkov V.Yu., Abdulnagimov A.I. (2012) Hierarchical Fuzzy Markov Modelling for System Safety of Power-plant. Proc. of 4th International Symposium on Jet Propulsion and Power Engineering, Xi'an, China: Northwestern Polytechnical University Press, pp. 589-594.
7. Saaty T.L., Vargas L.G. (2001) Models, Methods, Concepts & Applications of the Analytic Hierarchy Process. Kluwer Academic Publishers.
8. SAE ARP 4761 (1996) Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment. Aerospace recommended practice.
9. Zadeh L.A. (1968) Fuzzy algorithms. Information and Control, 12(2), pp. 94-102.



Development of the Intelligence System about the Timberland Which is not Covered with Forest Vegetation for Decision-Making on Reforestation

Enikeev Rustem R., Mirzakhanova Albina B., Sitdikova Elina O., Vazigatov Dinar I.
Automated and Management Control Systems Department, USATU, K. Marx St., 12, Ufa, Russian Federation

Keywords: reproduction of forests, reforestation, afforestation, deforestation, data analysis, unified database.

Abstract: This work addresses the issue of forest destruction. The interrelation of data on timberland in decision-making on reforestation is described. A conclusion is drawn about the need for a unified database including information from the forest register, records of forest exploitation, and data on the timberland.

Forests are often called the lungs of the planet. Recalling anatomy, the primary function of lungs is respiration; but a forest is not simply a collection of trees that purifies the air and absorbs carbon dioxide, it is a large and complex ecosystem. It is home to microorganisms, animals and vegetation, and it is also an essential resource for human activity. It is therefore necessary to protect forests from destruction. An important feature of forest resources is their renewability, but uncontrolled use leads to deforestation. Deforestation is the process of transforming land occupied by forest into land without forest cover. Logging without sufficient subsequent planting of new trees is a frequent cause of deforestation. Forests can also be destroyed by natural causes such as fire, windthrow, flooding and acid rain. Harmful insects and fungal diseases cause no less damage: their activity is less conspicuous, but they too turn large quantities of commercial timber into diseased wood unsuitable for use by humans and other living organisms. Tracking the process of forest destruction is therefore a pressing problem today. Deforestation leads to reduced biodiversity, reduced timber reserves and a stronger greenhouse effect due to the decreased volume of photosynthesis. This problem can be addressed by means of reforestation. Reforestation is the process and the set of actions aimed at restoring forest vegetation with a dominance of forest-forming tree species, carried out over a particular period. It is required whatever the cause of deforestation, and it is a compulsory procedure after clear and selective felling on timberland allocated for timber harvesting, as well as during forest development.
There are three main ways of reforestation: artificial, natural and combined. For forest reproduction to be carried out in the shortest time and in the ways most efficient in silvicultural, ecological and economic terms, complete and up-to-date information about the forest plot is necessary. By analyzing all aspects of the condition of a forest plot, it becomes possible to adopt an optimal decision on providing the required species composition, to schedule measures for forest protection, and to increase timber productivity and quality. The main source regulating relations in the sphere of forest exploitation, the Forest Code of the Russian Federation, devotes a whole chapter to forest reproduction and afforestation. It contains eight articles covering monitoring of forest reproduction, reforestation, afforestation, forest tending, forest seed farming, and also reporting. Reporting is put first here because, when making a decision on reforestation, it is necessary to possess current and complete information on changes in the acreage occupied by forest plantations and on the identification of lands not occupied by forest plantations that require reforestation. We leave aside the question of the legality of felling, since for the main task of restoring forest cover this aspect fades into the background, although it should not be forgotten: data on illegal felling also indicate that a forest plot needs to be restored. Data analysis is thus necessary for the approval of an optimal reforestation project in accordance with the Rules of Reforestation. The information has to reflect both the current situation and the history of past periods. Today executive authorities maintain various databases about timberland oriented towards different records of forest exploitation: timber purchase and sale, rent of timberland under different types of use, data on allowable cuts (an allowable cut is the maximum annual volume of timber harvesting permitted, in accordance with the established procedure, within a particular territory and economic section), lawful use of forests, illegal felling, etc. Analysis of data on a separate sector is obligatory, but maintaining a common information space is also necessary, since emergence is inherent in such a system: if data are assessed strictly by subsystem, the resulting decision will not always be correct. For example, having assessed data on a forest plot by territory, we may find that the plot is fully covered with forest, i.e. the territory is planted with a particular species. But if we then assess the data by species, it may turn out that the species is infected and diseased, and such forest is subject to immediate cutting, since it infects the neighboring timberland and young growth. Accordingly, when only two aspects of a forest plot are considered (the forested area and the species composition), the decision on reforestation is already uncritical; it is necessary to study all the information in total.

It is proposed to implement a unified database of forest exploitation records and timberland on the 1C:Enterprise 8 platform. Applied solutions developed on this basis make it possible to automate different activities of divisions that have no communication links, using a single technological platform. A separate module, "Reproduction of the Woods and Afforestation", using optimization methods, can analyze in total the strategic and operational requirements arising in the course of forest exploitation in all directions. Screen forms of the module are presented in Fig. 1 and Fig. 2. The module will be able to operate on all data accumulated in the database, from information supplied by forest areas and local forest districts to the forest exploitation engineers of the Ministry of Forestry. Tracking the movement of the forest fund and its actual use will make it possible to form the necessary reports. In conclusion, a unified information space uniting all or most information on timberland will allow analyzing both the history and the current state, and making competent, prompt and optimal decisions on forest reproduction and afforestation.

Fig. 1. Screen form "Management of Forest Fund"



Fig. 2. Screen form "Movement of Lands of Forest Fund"



Prospects of Cellular Automata Usage for Middle and Large Cities Traffic Modeling

A. Shinkarev
Computer Technologies, Control and Radio Electronics, South Ural State University (National Research University), Lenin Ave., Chelyabinsk, Russia
[email protected]

Keywords: Cellular Automata, Analysis, Traffic, Modeling

Abstract: The task of improving road traffic quality is considered in this paper, together with the appropriateness of transport network infrastructure changes and the necessity of mathematical modeling of traffic flows. The traffic lights control task is defined as the first-priority one, and cellular automata traffic flow models are considered as applied to this task. The article also briefly reviews the main characteristics of the cellular automata family of traffic flow models, their place in the hierarchy of traffic models, and the main directions of their practical application.

1. INTRODUCTION

Nowadays, according to several studies (2001; Banister, 2008; Stanley et al., 2011), a lot of middle and large cities suffer from an oversupply of vehicles, which in turn leads to traffic jams on the roads. In any event, cities start to deal with the problem of insufficient road capacity when the level of motorization reaches 50-100 vehicles per 1000 citizens. Besides solving local tasks, e.g. increasing road capacity, it is also required to satisfy the requirements of society; in particular, it is important to increase transportation volume and quality and make it as safe and reliable as possible. Standard criteria of traffic quality are the level of pollution, the level of noise, fuel consumption, and prevention of the formation and spreading of traffic jams. These criteria are applicable to Russian and European roads and highways. Satisfying the criteria above frequently requires extensive investment in the development of the transport infrastructure. But this is not always possible, and it also carries high risks when the understanding of the functioning and development rules of the transport network in its current state is insufficient. Ignoring the necessity of investigating these rules frequently leads to unsuccessful design decisions which are extremely hard to fix. Wrong design decisions almost always lead to frequent traffic jams, overloading or underloading of some road parts, a rising accident rate, and a worsening environmental impact. Making any changes to the transport infrastructure, as well as controlling the current state of the transport network, may be absolutely inadequate if the wide array of traffic flow characteristics, and the rules by which inner and outer factors affect the dynamic characteristics of the hybrid traffic flow, are not taken into account [1].

The traffic flow is polymorphous and unstable, its control criteria are contradictory, and road conditions can be unpredictable because of the weather and the highway area. All these factors seriously complicate theoretical and practical research in the realm of traffic flow mathematical modeling. First of all, modeling is required because of the following characteristics of transport systems:

Import Substitution and Software and Hardware for Automation and Control

 An increase of the maximum flow achieved by transport network development is compensated by an increase and redistribution of demand under the new conditions.
 Each driver's behavior is unpredictable: selection of a path, driving maneuvers, etc.
 Random factors and fluctuations (road accidents, the weather, etc.) exert an influence which depends on the season, weekends, holidays, etc.

The traffic flow adapts to control actions all the time. That is why not only rough expert conclusions but also detailed modeling results are required for decision making about transport infrastructure development. The alternative to transport infrastructure development is control of its current state. The goal of this approach is to increase mobility as far as possible without losing safety and traffic flow consistency. Achieving these goals may lead to a decrease in vehicle utilization, or at least to avoiding the conditions that lead to traffic jams. City highways and their crossings can handle at most a certain number of vehicles during a given period of time. When that upper limit is exceeded on some road segment, the travel time increases and traffic congestion occurs (Yang and Yagar, 1995; Carey and Ge, 2003). Traffic congestion and a decreased flow in the transport network are highly critical and inconvenient for the motoring public, because they lead to wasted time, decreased work productivity, missed opportunities, delivery delays and increased delivery costs, etc. Poor capacity of the transport network and frequently occurring traffic jams mean low quality of life, unstable mobility and environmental pollution problems.

Thus development of the transport network alone cannot solve the tasks of increasing the quality of the city transport network, increasing the maximum flow and preventing traffic congestion, but, strangely enough, can make things worse. Of course such tasks cannot be solved locally. There must be a complex solution which aggregates signal control improvements, development of public transportation, navigation and route selection systems, etc. The main aspect that must be taken into account first is signal control, which aims at decreasing delays and the level of road overloading. A side effect of its work is a decrease in the environmental pollution level at junctions. Traffic lights control the traffic at road crossings to avoid potential conflicts between vehicles and to ensure road safety. However, signal control leads to delays, vehicle stoppage, and acceleration and deceleration near the junctions according to the signaling light. Delays at junctions create queues which grow very fast when there are many incoming vehicles. Accordingly, the queue length before the traffic lights is an indicator of its effectiveness and also a rough measure of the environmental pollution level. The longer the queue, the greater the number of deceleration-stoppage-acceleration occurrences, which in turn leads to more intensive exhaust gas emissions compared to a continuous traffic flow (Pandian et al., 2009). There are a number of studies on the connection between exhaust gas emissions and traffic light parameters, mainly in terms of delay control (Hunt et al., 1982; Hallmark et al., 2000; Unal et al., 2003; Li et al., 2004). At present there is research in various directions whose purpose is the optimization of traffic light cycles for junctions and other types of road crossings. In particular, several families of traffic flow mathematical models are used for this task, including models based on the cellular automata theory.
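A rough illustration of the queue dynamics discussed above: a fixed-cycle traffic light can be modeled by a simple deterministic queue balance. The numbers are assumed for illustration and are not taken from the cited studies:

```python
# Illustrative deterministic queue model for a fixed-cycle traffic light.
# arrivals_per_cycle: vehicles arriving during one full cycle;
# departures_per_green: vehicles that can clear during the green phase.
def queue_after(cycles, arrivals_per_cycle, departures_per_green, q0=0):
    q = q0
    for _ in range(cycles):
        # Queue balance per cycle; the queue cannot become negative.
        q = max(0, q + arrivals_per_cycle - departures_per_green)
    return q

# If arrivals exceed green-phase capacity, the queue grows every cycle:
print(queue_after(10, arrivals_per_cycle=12, departures_per_green=10))  # 20
# If capacity exceeds arrivals, the queue stays empty:
print(queue_after(10, arrivals_per_cycle=8, departures_per_green=10))   # 0
```

The queue grows linearly once the green phase can no longer serve the per-cycle demand, which is why queue length is a natural effectiveness indicator for signal timing.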

2. CELLULAR AUTOMATA TRAFFIC MODELS

It is well known that the concept of cellular automata was proposed by John von Neumann and was later quite successfully applied to traffic flow modeling. Models based on the theory of cellular automata are a part of the microscopic models family, though they are very different from the other model groups of that family.


The considered models differ from the car-following models in the space representation: in cellular automata models the space is discrete (it consists of cells of the same length). In general the cell length is taken as the average length of a car, e.g. 7.5 meters. Apart from the space, time is also discrete, with a step that is often equal to the average driver decision-making time, approximately 1 second. On each step a cell can be empty or occupied by a vehicle. Models can be single-cell, when all vehicles have a length of 1 cell, or multi-cell, when vehicles can occupy one or more cells. One of the most valuable advantages of this family of models is the possibility to perform the transition to a new state in parallel: all cells of the automaton transit to the new state in parallel, and depend only on the previous state of the model, mainly on the states of the nearest cells. In the work [2] a detailed analysis and comparison of the results provided by different, at present fundamental, models of this family for a unidirectional one-lane road is presented.
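A minimal sketch of a single-lane stochastic cellular automaton of this kind, in the spirit of the Nagel-Schreckenberg rules. The parameters `V_MAX` and `P_SLOW` and the helper `step` are illustrative assumptions, not code from the cited works:

```python
import random

# Single-lane traffic CA on a ring road: each cell is ~7.5 m (one car),
# one time step is ~1 s; all cells update in parallel from the old state.
V_MAX = 5      # maximum speed, cells per step (assumed)
P_SLOW = 0.3   # probability of random deceleration (assumed)

def step(road, speeds, length):
    """Advance all vehicles by one time step (parallel update)."""
    cars = sorted(road)
    new_road, new_speeds = [], {}
    n = len(cars)
    for i, x in enumerate(cars):
        gap = (cars[(i + 1) % n] - x - 1) % length  # empty cells ahead
        v = min(speeds[x] + 1, V_MAX)   # 1. accelerate
        v = min(v, gap)                 # 2. brake to avoid collision
        if v > 0 and random.random() < P_SLOW:
            v -= 1                      # 3. random slowdown
        nx = (x + v) % length           # 4. move
        new_road.append(nx)
        new_speeds[nx] = v
    return new_road, new_speeds

# Usage: 100 cells, 20 cars, 50 steps on a ring road.
random.seed(1)
length = 100
road = random.sample(range(length), 20)
speeds = {x: 0 for x in road}
for _ in range(50):
    road, speeds = step(road, speeds, length)
print(len(road))  # 20: the number of cars is conserved
```

Because braking uses only the previous positions and the leading car never moves backwards, the parallel update is collision-free, which is the property that makes this family attractive for parallel simulation.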

2.1. Place of Cellular Automata Traffic Flow Models in the Traffic Models Tree

In the paper [3] the traffic models tree was introduced, in which the cellular automata based models take an important place. Even though this family is one of the most recent ones, it already has quite a lot of fundamental theoretical research and practical applications. Just as new research directions have emerged at the edges of the fundamental sciences, we can now trace the same tendency of mixing approaches from the different schools of transport modeling. The requirement to consolidate the best ideas from the different research directions is more necessary than ever, and for this need the cellular automata models can be used most effectively. This family of models is an instrument that allows us to model a traffic flow on the vehicular level quite precisely and in detail, and the results can be used in models of higher order with a lesser detailing level. An example of such a symbiosis can be found in the work [4].

2.2. Application of Cellular Automata Traffic Flow Models

Whatever goal we bear in mind for traffic flow mathematical modeling, we need to decide what the priority is: the clearness and precision of the prognosis, or the speed of the simulation process. The first of the two groups is the application of models for long-term planning of transport network development. For such tasks the speed of the modeling process is not the main aspect; the main goals are prognosis precision, the availability of different modeling scenarios, and even the stochastic nature of the models. Probably the stochastic car-following models, or other models with stochastic fundamental diagrams such as the cellular automata models, are the best choice for the requirements listed above. The second group is proactive traffic control systems, which use modeling for estimation, prognosis and control of the traffic flow state; for them the speed of the modeling process is the main priority. For such applications it would probably be optimal to use models with a simple mathematical formulation and uncomplicated fundamental diagrams [3]. To simplify the representation of the cellular automata based models, the author proposed a refactoring approach in the work [5]. Even though new computers will leave the current ones behind in performance, the fast simulation requirement will not disappear; e.g. it will remain relevant for the comparison of different configuration variants for large transport network topologies. Wilson (2008) suggests that we might now expect the final resolution of the conflict between the different transport modeling schools. Nevertheless, for different applications the weights of the criteria may vary; that is why in the article [3] you may see how many branches of the model tree were developed over the last decades. Moreover, as mentioned above, this leads to the development of new hybrid models. Even inside the same application it may be necessary to have detailed modeling results (e.g. for the city's main roads which must be controlled) and

not so detailed modeling results as well (e.g. for the rest of the roads that do not require any kind of control).

3. CONCLUSION

Nowadays many theories make it possible to control the transport networks of different cities. At present the family of cellular automata traffic flow models is used in many practical applications, including hybrid models. The ideas developed by the author in works [5, 6, 7] may take their own place in the cellular automata models family, and could also be helpful in practical applications for middle and large cities where the transport network and the motorization level have not yet reached saturation.

References
1. Semenov V.V. (2004) Mathematical modeling of metropolis traffic flows, Vol. 34. Keldysh Institute of Applied Mathematics, Moscow.
2. Knospe W., Santen L., Schadschneider A., Schreckenberg M. (2004) Empirical test for cellular automaton models of traffic flow, Vol. 70. Phys. Rev.
3. van Wageningen-Kessels F., van Lint H., Vuik K., Hoogendoorn S. (2015) Genealogy of traffic flow models. Springer Berlin Heidelberg; 4:445-473.
4. Dolgushin D.Yu., Zadoroznij V.N., Kokorin S.V. (2011) Two-level modeling of traffic flow based on cellular automata and systems with queues. In: Proc. of 5th All-Russian conference, Vol. 1, St. Petersburg, Russia, pp. 139-144.
5. Shinkarev A.A. (2015) Analysis and refactoring of representation of traffic flow models based on cellular automata. In the World of Scientific Discoveries; 64:585-595.
6. Shinkarev A.A. (2015) Three-stepped unified representation of traffic flow models based on cellular automata. In: Proc. of International Research Journal XXXVII conference, Vol. 1, Ekaterinburg, Russia, pp. 126-128.
7. Shinkarev A.A. (2015) Traffic lane changing motivations for traffic flow mathematical models based on cellular automata theory. Control Systems and Information Technologies; 61:48-51.



On Software Implementation of Numerical Methods for Linear Optimization

Alina Latipova, Ekaterina Zagirova
Economical-Mathematical Methods and Statistics Department, South Ural State University, Lenina Ave., Chelyabinsk, Russian Federation
[email protected], [email protected]

Keywords: numerical methods, linear programming, interval analysis, optimization, software

Abstract: Different numerical methods for solving linear programming (LP) problems are considered which can be implemented in software for traditional sequential and high-performance computing. This article concerns the computational complexity of the given methods and their application to solving LP problems with interval uncertainty (ILP problems).

1. INTRODUCTION

Achievements in mathematical theory, hardware and software gave rise to the development of methods for solving linear programming (LP) problems in the second half of the 20th century [1-4]. Various approaches to solving LP problems have been invented, and they can be classified as follows [82].

The first class of LP algorithms is based on the approach of finding the optimal solution by searching the boundary of the constraint polytope, using "pivot rules" to determine the next direction of travel once a basic point is reached. If there is no direction of travel that improves the objective function, then the basic point is the optimal solution. George Dantzig proposed the first algorithm using this idea, the Simplex Method, in 1947. Then variations of the Simplex Method appeared: the two-phase, dual, and revised Simplex Methods (the third variant is implemented in most LP solvers). As a rule Simplex Methods perform well, but it is not strictly proved that a "pivot rule" algorithm can run in polynomial time, and there are examples of exponential time for the Simplex Method (e.g. Klee and Minty [6]).

The second class is the Ellipsoid Method algorithms [7]. It was proved that the Ellipsoid Method of Khachiyan has polynomial time complexity. The idea of the ellipsoid approach is to maintain an ellipsoid enclosing the feasible solutions. At each step one checks whether the center of the ellipsoid is a feasible solution; if not, a violating constraint is found using a separation oracle, and a new, smaller ellipsoid is constructed enclosing the half of the current one in which the feasible solutions must lie. Naive repetition of such division could lead to an exponential time procedure, but Khachiyan adapted this method to give a polynomial time solution to LPs, and this result became a breakthrough in the theory of complexity. Nevertheless, the algorithm usually takes longer than the Simplex Method because of the great number of calculations needed for one iteration.

The third class of algorithms for solving LPs are the "interior point" methods [82]. The first step of these algorithms is finding an interior point of the constraint polytope; then the algorithm moves inside the polytope to find the optimal solution. The first interior point method was proposed by Karmarkar in 1984. His method is not only polynomial time like the Ellipsoid Method, but its modifications (the affine-scaling method) also give good running times in practice, like the Simplex Method. Still, finding an interior point can be very time-consuming.
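The geometric fact exploited by the first class of methods, that the optimum of an LP is attained at a vertex of the constraint polytope, can be illustrated by brute-force vertex enumeration. This is deliberately not a Simplex implementation, and the example LP is invented:

```python
import numpy as np
from itertools import combinations

# For a 2-variable LP  max c^T x  s.t.  A x <= b, the optimum lies at a
# vertex of the polytope; "pivot rule" methods walk between such vertices.
# Here we simply enumerate all vertices (intersections of constraint pairs).
def best_vertex(A, b, c):
    best, best_val = None, -np.inf
    for i, j in combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                       # parallel constraints: no vertex
        x = np.linalg.solve(M, b[[i, j]])  # candidate vertex
        if np.all(A @ x <= b + 1e-9):      # keep only feasible vertices
            val = c @ x
            if val > best_val:
                best, best_val = x, val
    return best, best_val

# max x1 + x2  s.t.  x1 <= 2, x2 <= 3, x1 + x2 <= 4, x1 >= 0, x2 >= 0
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 3.0, 4.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
x, v = best_vertex(A, b, c)
print(v)  # 4.0
```

Enumeration is exponential in general, which is exactly why pivot rules (and the polynomial-time methods of the other two classes) matter.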



Let us further consider the Revised Simplex and Ellipsoid Methods for LPs and their implementation in software with parallel computations.

2. HIGH PERFORMANCE COMPUTING FOR EXACT LP

Solving LP tasks requires performing various operations with matrices, which can be done by parallel computations.

2.1. Operations with matrices

The computational complexity of matrix operations varies from O(n^2) (addition, subtraction, transposing) to O(n^3) (multiplication, inversion). Parallel computations allow speeding up these operations and thus the linear optimization algorithm itself [8].

Matrix multiplication. Parallel algorithms (Block-Striped Decomposition, Fox's method, Cannon's method) are usually based on rowwise, columnwise and checkerboard decompositions of the multiplied matrices. These methods achieve good speed-up and efficiency characteristics.

Matrix inversion. The main parallel algorithms are Strassen, Strassen-Newton, Gaussian elimination, Gauss-Jordan, Coppersmith-Winograd, LU/LUP decomposition, Cholesky decomposition, QR decomposition, RRQR factorization, Monte Carlo methods for the inverse, Sherman-Morrison, etc. Most of them use matrix decomposition together with matrix operations (e.g. matrix multiplication).

2.2. Revised Simplex Method

This method is implemented in many LP solvers. Its main approach is the transition from one set of basic variables to another through recalculation of one column of the inverted basis matrix, with matrix multiplication used to find the values of the basic primal and dual variables. The parallel speed-up for this method is therefore high, although its theoretical computational complexity is exponential. Constant recalculation of the inverted basis matrix may lead to numerical instability of the Simplex Method because of the great number of operations with fractions (multiplication, addition, division). One way out, proposed in [9], is the use of integer arithmetic.

2.3. Ellipsoid Method

In contrast to the previous one, the Ellipsoid Method has polynomial theoretical complexity.
It consists of five steps [7, 10]:
 change of the current subgradient of the function, which requires finding the maximum value of an array (polynomial complexity O(n));
 calculation of the normalized generalized gradient, which requires matrix operations (transposing, multiplication) (polynomial complexity O(n^2));
 calculation of the centre of the ellipsoid using matrix calculations (polynomial complexity O(n^2));
 calculation of the space dilation operator, the most time-consuming step, which requires many matrix operations (transposing, multiplication, addition) (polynomial complexity O(n^2));
 calculation of the ellipsoid volume parameter (polynomial complexity O(1)).

The alternation of fast operations with slow complex calculations makes this algorithm not very suitable for parallel computations (the Fork-Join model of parallelism is used here). So most developers prefer the Simplex Method to the Ellipsoid Method when implementing LP solvers using high-performance computations.
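The steps above can be sketched as follows. This is an illustrative central-cut ellipsoid iteration for finding a feasible point of Ax <= b, not the authors' implementation; the starting radius and the example constraints are assumptions:

```python
import numpy as np

# Central-cut ellipsoid method for feasibility of A x <= b.
# The ellipsoid {x : (x-c)^T P^{-1} (x-c) <= 1} is kept via the matrix P.
def ellipsoid_feasible(A, b, radius=10.0, max_iter=1000):
    n = A.shape[1]
    c = np.zeros(n)                  # centre of the current ellipsoid
    P = (radius ** 2) * np.eye(n)    # start from a large ball
    for _ in range(max_iter):
        violated = np.where(A @ c > b)[0]
        if violated.size == 0:
            return c                 # the centre is feasible
        g = A[violated[0]]           # subgradient of the violated constraint
        g_norm = g / np.sqrt(g @ P @ g)          # normalized gradient
        Pg = P @ g_norm
        c = c - (1.0 / (n + 1)) * Pg             # shift the centre
        # Space dilation: shrink the ellipsoid around the feasible half.
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return None

# Usage: feasible box 2 <= x1 <= 3, 0 <= x2 <= 1 (origin is infeasible).
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([-2.0, 3.0, 0.0, 1.0])
x = ellipsoid_feasible(A, b)
print(x is not None and bool(np.all(A @ x <= b + 1e-9)))  # True
```

Each iteration mixes a cheap O(n) violation check with O(n^2) matrix updates, which illustrates the alternation of fast and slow steps mentioned above.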



3. HIGH PERFORMANCE COMPUTING FOR INTERVAL LP For real-world linear economic models, numerical values of input matrices items are obtained using statistical data and expert estimates, therefore there can be an uncertainty, which is commonly interval. Using of average values may cause ineffectiveness of optimal solution, because uncertainty wasn't taken into account properly. Let A designate an interval matrix with nm size [11] A [A;A] [AC  ;AC  ], (1) where A and A are exact matrices of interval lower and upper bounds of A correspondingly,  is a matrix of with exact entries (radiuses of intervals)

  [i, j ]i(1,n), j(1,m) ,  ij  0 ,

AC is a matrix of interval centers of Ac  (A  A) / 2. Let us designate interval vectors  n1-size

b  [b;b]  [bc ;bc ] ;  m1-size

c  [c;c]  [cc ;cc  ]. Further I consider the solvability (feasibility) problem of interval equations and inequalities. A weak solution of system of interval linear equations [12] Ax  b , (2) is a vector x  Rm which satisfies Ax  b for some matrices A A and bb. The following theorem for weak solutions (2) is proved by W. Oettli and W. Pranger [12]. Oettli-Pranger Theorem. The vector x  Rm is a weak solution for system Ax  b if and only if x satisfies

Acx bc  | x |  . (3) It comes from Oettli-Pranger Theorem that checking weak solvability of linear interval equations system (3) is NP-hard. Exponential complexity of this checking comes from the fact that all possible signs (“  ” or “  ”) for and Acx  bc should be taken into account and there at least 2m combinations of them. Calculation of optimal outer interval estimate of set of weak solutions for (2) is NP-hard. The system of linear interval equations (2) is strongly solvable if every system of exact linear equations Ax  b is solvable, where A A , bb . Conditions under which system of interval linear equations in strongly solvable is presented in [12]. Checking strong solvability of linear interval equations system (2) is NP-hard. A strong solution of linear interval equations system (2) is a vector if it satisfies for all matrices , . Theorem [12]. The vector is a strong solution for system if and only if satisfies

Ac x  bc ,  | x |   0 . (4) This theorem shows that strong solutions exist only in rare cases (condition (4) is very strict). A weak solution of system of linear interval inequalities 80

Import Substitution and Software and Hardware for Automation and Control

Ax  b (5) is a vector x  Rm which satisfies Ax  b for some matrices A A and bb. Gerlach Theorem [12]. Vector x  Rm is a weak solution of for system if and only if x satisfies

$A_c x - \Delta |x| \le \overline{b}$. (6)

It follows from the Gerlach Theorem that checking weak solvability of a system of linear interval inequalities (5) is NP-hard. A strong solution of the system of linear interval inequalities (5) is a vector that satisfies $Ax \le b$ for all matrices $A \in \mathbf{A}$, $b \in \mathbf{b}$.

Theorem [12]. The system of linear interval inequalities (5) is strongly solvable if and only if the following system is feasible:

$\{\overline{A}x_1 - \underline{A}x_2 \le \underline{b};\ x_1, x_2 \ge 0;\ x_1, x_2 \in R^m\}$, (7)

where the two vectors $x_1, x_2 \in R^m$ are solutions of system (7). It can be proved that if system (7) has a feasible solution $(x_1, x_2)$, then the vector $x = x_1 - x_2$ is a strong solution of (5). Note that solving the exact system of linear inequalities (7) has polynomial complexity. So checking strong solvability for interval inequalities, unlike interval equations, needs only polynomial time. This paradox comes from the fact that the transition from equations to inequalities changes the properties of the system. Still, system (7) is in many cases infeasible. Thus, finding weak or strong solutions of the interval constraints of an LP has little practical value:
• the outer interval estimate of the set of weak solutions can be too wide;
• checking the different types of solvability is NP-hard.
So we need another approach for interval LP problems. An interval linear programming problem (ILP problem) is a family of exact linear programming problems (LP problems)

$\min\{c^T x \mid Ax = b,\ x \ge 0\}$, (8)

whose input matrices satisfy $A \in \mathbf{A}$, $b \in \mathbf{b}$ and $c \in \mathbf{c}$. Let $f(A,b,c)$ designate the optimum of (8) for given matrices $(A,b,c)$. The range of the optimum for ILP (8) [12] has the lower bound

$\underline{f}(\mathbf{A},\mathbf{b},\mathbf{c}) = \inf\{f(A,b,c) \mid A \in \mathbf{A},\ b \in \mathbf{b},\ c \in \mathbf{c}\}$ (9)

and the upper bound

$\overline{f}(\mathbf{A},\mathbf{b},\mathbf{c}) = \sup\{f(A,b,c) \mid A \in \mathbf{A},\ b \in \mathbf{b},\ c \in \mathbf{c}\}$. (10)

Note that the interval

$[\underline{f}(\mathbf{A},\mathbf{b},\mathbf{c});\ \overline{f}(\mathbf{A},\mathbf{b},\mathbf{c})]$ (11)

for this range may have infinite boundaries. Let us introduce the auxiliary nonlinear problem [12]

$\beta(\mathbf{A},\mathbf{b},\mathbf{c}) = \sup\{b_c^T p + \delta^T |p| \mid A_c^T p - \Delta^T |p| \le \overline{c}\}$. (12)

Let the $n \times 1$ vector $y$ satisfy

$y = \operatorname{sgn} p$, (13)

i.e.
there are two alternatives for every element of the vector $y$:

$y_i \in \{-1; 1\}$, $i = 1, 2, \dots, n$. (14)

So there are $2^n$ combinations for the vector $y$; the set of these combinations can be denoted $Y^n$. If the vector $y$ is fixed, then we can solve an LP subproblem for (12)


T T T T T (y)  max{( bc p   (y p) | Ac p   (y p)  c}. (15) Value of ( y) may be infinite. After using all combinations for vector y we can calculate upper bound ( y) this way  (A,b,c)  sup{(y) | yY n}. (16) The lower bound (9) of goal function (if it is not infinite) can be calculated this way f (A,b,c)  min{ cT x | Ax  b, Ax  b, x  0}. (17) Problem (17) is an exact LP problem. Problems (15-16) may be split in the series of exact LP tasks and it can be done independently by parallel processes almost without interchange between them [12]. Theorem [12]. For ILP problem (8) the following statements are equivalent: (1) for any A A , bb, cc the problem max{cT x | Ax  b, x  0} has optimal solution; (2) both lower bound (9) and upper bound (10) of the optimum range are finite; (3) both lower bound (17) and upper bound (16) of the optimum range are finite;; (4) system of inequalities T T n { A p1  A p2  c , p1, p2  0 ; p1, p2 R } is feasible and  (A,b,c) is finite. For every case (1)-(4) the range of optimum is equal to [ f (A,b,c); (A,b,c)] . The proof of this theorem is given in [12] with the algorithm for calculating [ f (A,b,c); (A,b,c)] . This algorithm consists of two main steps:  Calculation of f (A,b,c) by solving an exact LP problem (17).  Calculation of by solving a set of exact LP problems (16). 4. CONCLUSION This article concerns overview of parallel implementation of methods for solving exact and interval LPs:  Classification of methods for exact LPs is given together with their computational complexity estimates.  It is shown how to implement these methods for high-performance computing.  Statement of interval LP problem (ILP) is presented.  The main approach described for ILP is usage of lower and upper bounds to find different types of solutions.  These solutions can be obtained by solving series of exact LP problems with using of parallel computations. References 1. Danzig G. 
“Linear Programming and Extensions.” Princeton University Press, Princeton, 1963.
2. Kantorovich L.V. “Mathematical Methods in the Organization and Planning of Production”. Management Science. 1960; 6 (4): 366-422.
3. Khachiyan L.G. “A Polynomial Algorithm in Linear Programming”. Doklady Akademii Nauk SSSR. 1979; 244: 1093-1096.
4. Karmarkar N. “A New Polynomial-time Algorithm for Linear Programming”. Combinatorica. 1984; 4 (4): 373-395.
5. Chawla S., Darnall M. “Linear Programming. Approximation Algorithms”. http://pages.cs.wisc.edu/~shuchi/courses/880-S07/scribe-notes/lecture09.pdf


6. Klee V., Minty G.J. “How Good is the Simplex Algorithm?”. In: Shisha, Oved (ed.), Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1-9, 1969, dedicated to the memory of Theodore S. Motzkin). Academic Press, New York-London, 1972, pp. 159-175.
7. Arora S. “The Ellipsoid Algorithm for Linear Programming.” https://www.cs.princeton.edu/courses/archive/fall05/cos521/ellipsoid.pdf
8. Dongarra J.J., Duff I.S., Sorensen D.C., van der Vorst H.A. “Numerical Linear Algebra for High Performance Computers (Software, Environments, Tools)”. Society for Industrial and Applied Mathematics, 1999.
9. Panyukov A.V., Golodov V.A. “Parallel Algorithms of Integer Arithmetic in Radix Notations for Heterogeneous Computation Systems with Massive Parallelism”. Bulletin of the South Ural State University, Series: Mathematical Modelling, Programming and Computer Software. 2015; 8(2): 117-126.
10. Bezborodov V.A. “Parallel Implementation of the Ellipsoid Method for Optimization Problems of Large Dimension”. Unpublished B.Sc. Thesis (Scientific Advisor: Golodov V.A.). South Ural State University, Chelyabinsk, Russian Federation, 2015.
11. Jaulin L., Kieffer M., Didrit O., Walter E. “Applied Interval Analysis”. Springer-Verlag London Limited, London, 2001.
12. Fiedler M., Nedoma J., Ramik J., Rohn J., Zimmermann K. “Linear Optimization Problems with Inexact Data”. Springer Science+Business Media, 2006.
13. Latipova A.T. “On Solving Optimization Problems with Inexact Data”. In: IFAC Proceedings Volumes (IFAC-PapersOnline), 2013; 7 (PART 1): 1234-1239.


The Architecture of the Information and Control Complex for Intelligent Transport Systems

L.R. Sayapova, L.R. Shmeleva, A.A. Shmelev, R.M. Nafikova
E-mail: [email protected]
Research advisor: Sayapova L.R., senior lecturer, Aviation Technical University
E-mail: [email protected]

Abstract: this article deals with the development and design of information, navigation and intelligent systems for remote monitoring and management of vehicles. The presented telematic platform of intelligent transport systems (ITS) unifies the hardware and software environment in the implementation of a broad class of projects with regional and departmental specifics. The use of standard open architectures of information systems creates opportunities for their scalability, portability and interoperability, which provides a good basis for the further development of the intelligent system.

1. INTRODUCTION

ITS are becoming increasingly widespread throughout the world. The best-known international associations are ITS-Europe (ERTICO), ITS-America and ITS-Japan. Russia has also developed a concept of intelligent transport systems (ITS). It is expected that the creation and implementation of a national ITS will improve the efficiency of traffic management, reduce overhead costs for the transportation of goods, accelerate the development of national transport, territorial and information infrastructures, and provide a favorable climate for the introduction of services based on Global Navigation Satellite Systems. Thus, an ITS provides stable, efficient, economical and safe operation of vehicles by giving the active elements of the transport system intelligent (adaptive) behavior. The intelligent behavior of the active elements of transport systems, which are able to change their status, is a consequence of their having normative behaviors and a feedback channel that draws on information supplied by a set of measuring instruments and transport subjects. The system monitors the performance of each vehicle in real time and promptly identifies any deviation from route assignments, as well as drivers' mistakes and abuses. Timely information about such events makes it possible to eliminate violations immediately and prevent them in the future.

2. THE PRINCIPLE OF FORMATION OF THE TELEMATICS PLATFORM OF ITS

From a consumer perspective, an ITS is an integrated system of information support, which includes a set of integrated information services provided through the use of information resources generated in the process of transport and other economic activities. Bringing together all kinds of information and telecommunication services within a single information field is a very urgent and complicated problem [1]; its implementation requires solving a number of problems, which include the following:
• the low level of cross-project coordination and unification of the projects implemented by separate business entities of different scales (especially in the private sector);
• the focus on the use of foreign cartographic bases, geographic information systems and Navstar/GPS signals;


• the existence of monopoly aspirations of ITS component manufacturers, and a low level of standardization and cooperation;
• the presence of artificial inter-subject (including interdepartmental) information barriers and a low level of data integration;
• insufficient attention to operational data analysis and to monitoring of effectiveness, which creates a sense that ITS has little effect, as well as to the issues of forecasting, which leads to a lack of ideas about the future development of the system;
• the lack of a systematic approach to the development of ITS, which must take into account the specifics of the different types of vehicles, the specifics of the regional road network and the specific problems of traffic management.
As a result, the development of an integrated telematic platform for ITS faces incompatibility of services and of software and hardware solutions. This paper proposes an approach to systematically solve most of these problems by creating a unified multi-tier architecture of ITS. The upper level of the system forms an integrated information environment for intelligent control of transport operation. The technical means of the top-level information network must meet the following requirements:
• stability and high reliability of the core network structures;
• transmission of video from motorway controllers in real time and the opportunity to intervene in the operation of the system in an emergency;
• the possibility of using wireless solutions to reduce the cost of cabling;
• the fastest possible processing of information from the monitored devices to accelerate their cooperation in the ITS;
• redundancy of key equipment and systems to preserve the efficiency of the entire system in case of failure of its individual elements;
• interaction with equipment that does not support network technologies (continuity with existing solutions);
• the ability to operate the equipment in a wide temperature range.
These requirements are largely met by a fiber-optic Gigabit Ethernet network based on a ring of industrial switches supporting a redundant ring structure with fast recovery time. Targeting 100-Gigabit Ethernet interfaces allows the creation of a high-performance network infrastructure that supports fast construction of a scalable computing environment for the accelerated growth of network traffic caused by video transfer, wireless service connections and new computing technologies such as cloud computing. The majority of Ethernet devices support multiple data rates, using auto-detection (autonegotiation) of speed and duplex for the best possible connection between two devices. As a result, it is possible to optimize the network in terms of quality/price, because at any time it is possible to move to a faster standard by replacing the existing equipment with higher-speed equipment. Other advantages of this topology include:
• high reliability due to the redundant ring topology, redundant networking, duplicated power supplies, the ability to work in a wide range of temperatures (typically -40 to +75 °C) and special protective enclosures providing protection from corrosive environments;
• rapid disaster recovery or replacement of failed equipment (less than 300 ms);
• the formation of dynamic reports on the state of equipment and (or) signalling of an accident on the relay contacts of the equipment to prevent failure;
• control functions to maintain the health of the industrial Ethernet network and monitor it from a single control center;
• a built-in device web server for remote access to the equipment;


• the ability to service network segments with different topologies;
• a highly customizable network environment (DNS, DHCP, gateways, domains, workgroups, etc.).
Thus, the design of the upper level of the ITS on the basis of a fiber-optic industrial Ethernet network provides a highly reliable structure that is not exposed to electrical noise or to the influence of the external environment. The second level of the ITS includes computing systems designed to solve the basic functional tasks. These include:
• a traffic management system for vehicles based on satellite navigation and radar sensing;
• a system for monitoring mobile objects and operational personnel with automatic identification;
• centers for situational monitoring and prediction of critical situations;
• a financial monitoring and cost optimization system.
Systems of the second level can be implemented on the basis of Ethernet rings on switches at speeds of 100 Mbit/s, with the use of digital radio communication with all the objects of the transport infrastructure and of satellite monitoring and radar sensing of objects equipped with the following set of technical facilities:
• systems for coordinate-and-time, weather and other types of support;
• devices, lines and networks for data communications;
• remote monitoring tools;
• systems and means for collecting, accumulating and processing information;
• automated systems and controls;
• display and information systems and means.
Currently, there are a significant number of 100-megabit Ethernet protocols (100BASE-T, 100BASE-TX, IEEE 802.3u, 100BASE-T4, 100BASE-FX), which vary according to the type of cable used, from twisted pair to multimode optical fiber, and therefore in the length of the connecting segment, from 100 m to 32 km. An important feature of this level of the ITS is the need to organize highly efficient wireless communication with remote terminals.
Currently, there are a sufficient number of wireless routers that integrate the functions of a wireless access point (802.11n, 802.11g), an ADSL modem and a multiport Fast Ethernet switch. Such routers as the TD-W8961ND and DSL-G804V support wireless data transfer rates up to 300 Mbit/s. The methods of data encapsulation, encryption and authentication used by these routers allow closed connections to be created over a communications network. As wireless data transmission technologies it is appropriate to choose the mobile Internet standards IEEE 802.11 (Wi-Fi), IEEE 802.16 (Wireless MAN, WiMAX) and mobile GPRS. The need to use these different technologies is due to the fact that each of them has a number of features that give it an advantage depending on the nature of the problem being solved. Wi-Fi technology combines the standards IEEE 802.11a, 802.11b, 802.11g and 802.11n with throughput from 11 Mbit/s to 300 Mbit/s (in the long run up to 600 Mbit/s) and a range of up to 100 meters. In turn, WiMAX technology complies with 802.16d, 802.16e and 802.16m. The capacity of these networks is in the range of 40 Mbit/s to 1 Gbit/s (for fixed WiMAX) and up to 100 Mbit/s (mobile WiMAX). The radius of action covers a range from 1-5 km to 6-10 km. Thus fixed WiMAX allows only "static" subscribers to be served, while mobile WiMAX is designed to work with users travelling at speeds of up to 120 km/h. Prospects for the use of packet data technology in cellular networks (GPRS) are associated primarily with mobile networks of the third generation (3G) and of the so-called generation 3.5G. Finally, the third level of the ITS consists of the following passive and active elements:
• transport infrastructure to be equipped with means of production measurement, transmission, broadcast and reception of signals;
• remote monitoring and production measurement tools;


• elements of the information and telecommunication infrastructure of the transport sector;
• vehicles and loads to be equipped with means of communication, remote monitoring and telemetry;
• remotely operated actuators and display devices (devices, components and assemblies).
Most of these systems and tools are used to create a picture of the traffic situation and to optimize vehicle traffic, in particular through the synchronization of traffic lights depending on the flow rate, as well as to prevent accidents through early detection of potentially dangerous situations. To this end, information nodes are created in the road infrastructure that collect, store and exchange information with the driver about the state of road conditions. The core of each such information node is a ground controller whose duties include processing operational information from remote monitoring devices and controlling road traffic by switching traffic lights according to the algorithm and to data on the best alternative route. Synchronization of the whole set of information nodes is carried out at the upper levels of the ITS. Currently, various modifications of modern traffic controllers can handle any traffic regulation problem. Modern road controllers are manufactured both as universal controllers with a fixed number of channels, such as the 24-channel universal traffic controller 3.2N or the 32-channel universal controller 3.3N, and as modular ones, allowing the number of channels to be increased from 3 to 24 during operation. Among controllers with a modular architecture are the DC-A and DC-C devices. All traffic controllers can use special programs that implement specified traffic control algorithms. The DKSMN series of controllers includes a standard electronic control unit which houses sub-blocks. Depending on the configuration, additional sub-blocks extend the functionality of the traffic controllers.
In particular, the LVN subunit provides connection of a remote control unit for the traffic police when operational management at the intersection and organization of a "green light" corridor are necessary. Similar functions are performed by the MC subunit, which provides docking with external modems for data transmission over fiber-optic lines in cases where the upper levels of the ITS set modes of the traffic controllers that differ from the 32 standard programs of their operation. The IDB subunit organizes autonomous operation of peripheral equipment for the neighborhood using unlicensed radio frequencies. The main sources of operational information available to the road controller are remote monitoring devices, which include motion detectors and video cameras. Motion sensors detect the passage of a vehicle through a road section and also determine the parameters of transport streams. Normally, motion sensors are divided into passage detectors, which give signals of normalized duration when a vehicle enters the detector's controlled area, and presence detectors, which give a signal during the entire time the vehicle is in the controlled area. By operating principle, motion sensors are divided into electromechanical, pneumoelectric, piezoelectric, photovoltaic, radar, ultrasonic, optical, polarization, ferromagnetic and inductive. For best results, multiple technologies are used together, such as passive infrared and ultrasound, or passive infrared, ultrasonic and radar. The most important evaluation criteria are considered to be the following indicators:
• resolution, including the number of detection areas for a single detector;
• the measured traffic parameters (traffic volume, average speed, percentage occupancy time of the zone, composition of the stream by category, average time interval between vehicles in the area);
• the accuracy of measurement;
• ease of installation and configuration.


An important role in road monitoring is played by the video detection system, which is designed to automatically detect and archive traffic offenses. The system makes it possible to monitor the traffic situation and to recognize the license plates of vehicles that violate traffic rules. It can also be used to search for stolen vehicles, to control travel in public transport lanes, to replay video of traffic offenses, etc. The video system consists of a set of video cameras, which includes an overview camera and a detailed camera, and a computer for evaluating the images in real time. Dome cameras are commonly used, as they have anti-vandal protection and are designed for operation in harsh climatic conditions. The communication environment of the ITS information nodes, due to the increasing demands on its reliability and security, is a two-level hierarchical structure. The lower level is formed by access networks built on the basis of 100-megabit switches. Each access network brings together a set of peripherals located in a limited area. In turn, the access networks are connected to the core network via gigabit switches. These switches, in addition to data transfer functions, provide load balancing by redistributing data streams between the traffic controllers and the computers of the second and first levels of the ITS hierarchy. Thus, the concept of an integrated telematic platform for ITS presented in this paper unifies the hardware and software environment in the implementation of a broad class of projects with regional and departmental specifics. The use of standard open architectures of information systems creates opportunities for their scalability, portability and interoperability, which provides a good basis for the further development of the intelligent system.

3. SELECTION OF THE OPTIMAL COMPOSITION OF THE TELEMATICS HARDWARE PLATFORM OF AN INTELLIGENT TRANSPORT SYSTEM

The process of creating the optimal configuration of the telematics platform of an intelligent transport system includes two interrelated objectives:
• the formation of an optimality criterion adequate to the design objectives;
• the selection of the optimal project alternative from the set of feasible options.
The purpose of the design puts a definite imprint on the principle of choosing a set of estimators. They shall contain a number of individual indicators, which makes it possible to take into account all the defining characteristics and to reflect the results of the design adequately. In addition, they must provide a ranking of options by degree of preference and assign them a quantitative measure of efficiency. The specificity of the problem of estimation by a vector criterion is that its solution will certainly be subjective. This is due not so much to the subjectivity of selecting the set of evaluation functions as to the fact that some of the investigated variants may be preferable on one indicator and less preferable on others. Since the basic axiom of evaluation by several indicators affirms the impossibility, in the general case, of a rigorous mathematical proof of the existence of a most preferred variant, any one of the non-dominated variants (i.e., those that are not less preferred on all indicators at once) can be recognized as the most preferred by a specific designer in specific conditions. Thus, the task of creating the optimal configuration of the telematics platform of an intelligent transport system can be formulated as follows: to select a set of peripherals that makes it possible to realize the totality of the functions assigned to the third level of the hierarchy of the telematics platform of the ITS and that is optimal in terms of a vector efficiency criterion. The first condition in this formulation defines the limits within which the optimization problem must be solved.
And the second defines the specifics of selecting the option that meets the specified requirements for the performance characteristics of the developed devices. Among such characteristics are:
• the total volume of the product;
• the mass of the product;
• the price of the product;
• the performance evaluation.


Among these indicators, the least formalized is the performance evaluation. This indicator serves as a peer review of the ability of a set of devices to perform the specified functions. As the specified functions vary between devices, in order to be able to compare them we have selected a universal hundred-point scale. Consider, first, the original set of peripherals required for the implementation of the third level of the hierarchy of the telematics platform of the ITS. This set, together with its characteristics, should be taken into account: the more options are formed, the more likely it is that the highest-quality solution will be obtained. In order for the assessment of the generated variants to be as justified as possible, the development of a decision rule should be carried out in strict accordance with the amount of reliable information about the properties of the estimators used. The least subjective approach is one in which the options are ranked on each of the indicators taken into account, and the generalized evaluation function is the sum of ranks. Accordingly, the best option will be selected using the described ranking method. The main criterion is the evaluation of the effectiveness of the use of each option, which characterizes these properties on a 100-point scale.
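The rank-sum selection described above can be sketched as follows (a minimal illustration; the option names, criteria and numerical values are hypothetical, not taken from the paper, and ties in a criterion are resolved arbitrarily):

```python
def rank_sum_select(options, criteria):
    """Rank each option on every criterion (rank 1 = best) and
    pick the option with the smallest sum of ranks.

    options  - dict: option name -> dict of criterion -> value
    criteria - dict: criterion -> 'min' or 'max' (direction of preference)
    """
    totals = {name: 0 for name in options}
    for crit, direction in criteria.items():
        # order option names from best to worst on this criterion
        ordered = sorted(options,
                         key=lambda n: options[n][crit],
                         reverse=(direction == 'max'))
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    # the best option minimizes the generalized rank-sum function
    return min(totals, key=totals.get), totals

# hypothetical peripherals evaluated on volume, mass, price (to minimize)
# and a 100-point performance score (to maximize)
options = {
    "controller A": {"volume": 2.0, "mass": 1.5, "price": 900, "score": 85},
    "controller B": {"volume": 1.2, "mass": 1.1, "price": 1100, "score": 78},
    "controller C": {"volume": 1.8, "mass": 1.3, "price": 950, "score": 90},
}
criteria = {"volume": "min", "mass": "min", "price": "min", "score": "max"}
best, totals = rank_sum_select(options, criteria)
```

The sum of ranks is deliberately the simplest generalized evaluation function; it uses only ordinal information about each criterion, which matches the paper's point that little reliable information about the estimators may be available.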

4.CONCLUSION The proposed approach allows systematically solve most of these problems by creating a unified multi-tier architecture of ITS. Considered hardware system monitoring and management of road traffic, which includes traffic controller, traffic detectors, video camera dome 100 megabit switch, gigabit switch. Presented best selection of the optimal composition of telematics hardware platform of intelligent transport systems.


Automation Catering Activities

Enikeev Rustem R., Suvorova Veronica A., Solyeva Anastasiya V., Vazigatov Dinar I., Sitdikova Elina O., Mirzaxanova Albina B.
Automated and Management Control Systems, USATU, K. Marx St., 12, Ufa, Russian Federation
[email protected]

Keywords: public catering, effective management.

Abstract: Public catering (catering) is a branch of the national economy engaged in the manufacture and sale of prepared food and convenience foods.

These businesses include restaurants, cafes, bars, dining halls, pizzerias, coffee shops, culinary and pastry shops, dumpling and pancake houses, as well as various types of "fast food". In our country demand for services has increased significantly, and catering is no exception. Effective management of the production activities of an enterprise increasingly depends on the level of information support. Currently, few Russian food-service production enterprises have a complete automated program with advanced settings. The task of automating the production process of public catering is therefore very important today. This is due primarily to the fact that "Bashbakaleya" as a company is constantly evolving and growing, and the existing production technology no longer meets the emerging needs for speed, correct cost calculation and payment of taxes. Previously, six organizations were located apart from each other and produced food for the shops nearby. But because their production capacity was not sufficient, it was decided to combine these separate entities under one roof and create a pipeline system of public catering production. The advantage of this system is that all the expensive equipment is located under one roof, logistics are established, and it becomes possible to reduce the number of staff. After reviewing all the programs on the market, it was decided to use the 1C program for the automation of catering: Food Service. This choice stems from the fact that this program can be fully customized and fine-tuned to the needs of the enterprise. Terms of reference were prepared and software was selected and developed to solve this problem. To create the new type of production within 1C: 8 for catering, code was written and implemented, and the necessary forms of documents were obtained. First, production units producing certain types of products were set up in the program, such as the meat shop, the salad shop, etc.
The mimic diagram of the proposed document flow is presented in Figure 1.


[Figure 1 shows the document-flow diagram in 1C: requests from the shops, a consolidated order, the virtual production workshop and warehouse (with accounting entries such as Д20-К10 and Д43-К20), movement of semi-finished products between workshops 1-6, write-off of raw materials from the workshops, receipt of raw materials at the raw-materials warehouse, and release of finished products to the expedition warehouse and shops.]

Fig. 1. Mimic diagram of the proposed document flow

Workshops can exchange manufactured goods with each other for further joint production: for example, in one shop dough is made; one part of the dough goes on sale, a second part goes to another department as a semi-finished product for the production of ravioli, while the beef is taken from yet another shop. Ingredients are accounted for separately by shop, and all movement between shops is recorded in the program (the data are entered by the production accountant). All documents in the program are conducted through the virtual production warehouse, and everything together works as one company, with all the ingredients and their movement between the workshops. All requests from the shops are written off against the virtual production warehouse, which automatically sends a request to each department separately for what must be prepared today. After cooking, the finished products are sent from the virtual production warehouse to the shops. Also, when materials are received counted in pieces, they are immediately converted into weight. Fields were added to the document: the supplier's product range, the actual net weight, and the supplier's quantity in pieces. Now, upon the arrival of raw materials counted in pieces, the quantity is automatically recalculated into kilograms, and the price per unit is compared with the price in the contract specifications of the supplier. The humidity factor of bulk solids is also taken into account.
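The piece-to-weight conversion and price check described above can be sketched roughly as follows (a hypothetical illustration in Python rather than 1C code; the field names and the simple humidity adjustment are assumptions, not the enterprise's actual formulas):

```python
def receive_goods(pieces, net_weight_per_piece_kg, price_per_piece,
                  contract_price_per_kg, humidity_factor=0.0):
    """Convert a delivery counted in pieces into kilograms and
    compare the effective price per kg with the contract price.

    humidity_factor - assumed fraction of the nominal weight that is
    moisture in bulk solids (0.0 means no adjustment)."""
    weight_kg = pieces * net_weight_per_piece_kg * (1.0 - humidity_factor)
    price_per_kg = pieces * price_per_piece / weight_kg
    return {
        "weight_kg": round(weight_kg, 3),
        "price_per_kg": round(price_per_kg, 2),
        "matches_contract": abs(price_per_kg - contract_price_per_kg) < 0.01,
    }

# 200 pieces at 0.45 kg each, 27 rubles per piece, contract price 60 rub/kg
receipt = receive_goods(200, 0.45, 27.0, 60.0)
```

A receipt that fails the `matches_contract` check would be flagged for the accountant, mirroring the automatic comparison with the supplier's contract specifications described in the text.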

CONCLUSION The modified system provides data on raw materials that ultimately allow the exact production cost to be calculated, which is one of the main challenges in catering. Thanks to the virtual shop, documents between shops are posted automatically on the basis of output, which made it possible to free about six accountants previously involved in production accounting. In addition, information is saved daily, reducing the time needed to transfer finished goods between shops.

91

Innovative Technologies and Methods of Economical and Social Data Processing

Problems of Application of Formal Methods for Modeling Software Systems Design Process for Systems with Incomplete Information and High Complexity of Subject Domain

Zagitov German R., Antonov Dimitri V., Antonov Vyacheslav V.1 1Automated and management control systems, USATU, K. Marx St., 12, Ufa, Russian Federation, [email protected]

Keywords: Inaccuracy, uncertainties, fuzzy sets.

Abstract: The article considers formal process modeling methods, their development and characteristics. It is noted that design processes often involve uncertainties of various types, and different methodologies, as well as expert evaluation, are used to handle them. The article also considers the evolution of the process of building information systems and models for solving design problems.

It is difficult to give a precise definition of "formal modeling methods", for two reasons. First, the modeling methods themselves are based on software compilation and interpretation, which are also formal. Second, the modeling methods use mathematical notation and the ways of reasoning and proof adopted in mathematics. The rapid development of information technology has brought formal modeling methods new successes and new problems: a number of modeling notations have opened new opportunities and, at the same time, raised new problems related to modeling [3, 4]. Many facts attest to the successful use of modeling methods; however, a number of contradictions demanded theoretical rethinking, which served as the basis for recognizing them as methods of cognition. Creating information systems with specified characteristics has always been a difficult problem, in part because of the interdependence between business platforms and information at the design stage. The main question of the philosophy of system design remains open: how to obtain an information system most focused on achieving its goals from a general description of the global business goals, and how to maintain the declared characteristics at the subsequent stages of its life cycle. The external environment exists independently of the system, but the system must respond adequately to information coming from the environment and provide information to the environment that may influence its state and behaviour. As a rule, the external environment is the fragment of the real world interacting with the system. The model contains uncertainty associated with the inaccuracy of our knowledge about the real object, and this error must be taken into account when applying conclusions drawn from the model to the real object.
Due to the dynamic properties of the subject domain and its structural complexity, the complexity of the information system constantly grows, if only because of the accumulating number of states. Additional difficulties arise in the presence of errors or uncertainty when


1st International DSPTech Conference on Technologies of Digital Signal Processing and Storing

parameters on which the result depends are being set [5, 7]. Vagueness and inaccuracy have a complex structure, and different methodologies (Figure 1) are used to treat their various types. Uncertainty that describes real-world objects is called physical uncertainty. Since arithmetic operations on rank scales are meaningless, measurement theory is the tool that supports information processing under this type of uncertainty. Probability theory can support information processing under uncertainty about events; since this theory rests on assumptions, the correctness of its conclusions can be guaranteed only when those assumptions can be verified. The theory of formal grammars can support information processing when the meaning of phrases is uncertain. The theory of fuzzy sets is the method used to model real objects in the presence of vagueness or inaccuracy of a certain type. As fuzziness and uncertainty increase, probabilistic descriptions are replaced by expert assessments, and point estimates of probability distributions are replaced by interval and fuzzy ones. The dependence of the efficiency of these descriptions on the degree of uncertainty is shown in Fig. 2.

Fig.1. Information Processing Support Tools.

When constructing formal domain models, the use of deterministic methods introduces excessive certainty, which makes it impossible to account for inaccuracies in parameter setting: fuzziness or inaccuracy is either simply ignored or replaced by expert assessments. To reduce the complexity of the model, only a limited number of the most important parameters are considered, and the remaining parameters are accounted for through the notion of a fuzzy response to a given impact. Obviously, traditional methods of systems analysis are ineffective in such cases, and achieving the required accuracy is practically impossible for systems that exceed a certain threshold of complexity. Many difficulties arise with qualitative, complex data when a mathematical methodology is used and real objects are replaced by mathematical descriptions of those objects. Thus, many attributes cannot be measured directly; they are expressed in numerical form by correlation with some standard value.



Fig. 2. The dependence of the effectiveness of descriptions of uncertainties.

At the beginning of the twentieth century, the mathematicians Jan Lukasiewicz and Emil Post [12] developed multi-valued principles of mathematical logic, in which predicate values are not restricted to "true" or "false". In 1937 Max Black was the first to apply multi-valued logic to lists as collections of objects. In 1965 Lotfi Zadeh proposed an extension of Cantor's concept of a set, as a method simulating the ability of the human mind to use interpolation to make decisions based on a small number of rules; this led to the transition from the usual characteristic function of a set to a membership function. A complete algebraic system was built, in which the concept of a fuzzy set was introduced as a "continuum of degrees of belonging", and the basic relations and operations on fuzzy sets (equality, inclusion, complement, union and intersection) were defined [3]. In 1975, on the basis of this work, E. Mamdani and S. Assilian designed the first controller functioning on the basis of this algebra and applied it in industry [16]. Thus began the era of industrial fuzzy control. Creating a control system based on the algebra of fuzzy sets consists in mapping fuzzy sets describing its inputs to fuzzy sets describing its outputs. These rules are called predicative, and the mechanism that maps the fuzzy input sets to the output sets is called the fuzzy inference mechanism. The details of this mechanism are defined by fuzzy inference algorithms, which are formed by a sequence of four basic steps: fuzzification, inference, composition and defuzzification. The subsequent introduction of the concept of a linguistic variable, with fuzzy sets as values, created a new formal apparatus for working with uncertainties and describing processes of intellectual activity, which is called fuzzy logic. For us, the fuzzy-set formalism is the most natural language for modeling uncertainties, a way of relating some fuzzy objects to others.
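The four steps above can be sketched for a toy single-input, single-output Mamdani-style controller. The triangular membership functions, the two rules, and the temperature/heater universes below are invented for the example; they are not taken from the paper.

```python
# Minimal Mamdani-style fuzzy controller sketch (illustrative assumptions:
# triangular membership functions and two invented rules).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(temp):
    # 1. Fuzzification: degree to which the input is "cold" / "hot".
    cold = tri(temp, 0, 10, 20)
    hot = tri(temp, 15, 30, 45)
    # 2. Inference: "if cold then heater high", "if hot then heater low";
    #    each rule clips its output set at the rule's firing strength (min).
    # 3. Composition: aggregate the clipped output sets pointwise by max.
    out = [(u, max(min(cold, tri(u, 60, 100, 140)),   # "high" output set
                   min(hot, tri(u, -40, 0, 40))))     # "low" output set
           for u in range(0, 101)]
    # 4. Defuzzification: centroid of the aggregated output set.
    num = sum(u * mu for u, mu in out)
    den = sum(mu for u, mu in out)
    return num / den if den else 0.0

# A cold reading should yield a higher heater setting than a hot one.
assert infer(5) > infer(35)
```

The min/max pair used here is only one choice of inference and composition operators; other t-norms and aggregations fit the same four-step scheme.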
Set theory, though the foundation of mathematics, was never fully justified; the main questions of the foundations of mathematics and logic are closely linked to philosophy, and Gödel's results proved the impossibility of creating a universal axiomatic system. The introduction of fuzzy sets provided the basis for the fuzzy logic developed by Zadeh in 1970-73, a more flexible approach to discourse analysis and the modeling of



complex, humanistic systems whose behaviour is described by linguistic rather than numerical variables; its fundamental element is a kind of similarity measure that can be used to pass from one fuzzy object to another [3]. As mentioned earlier, the practical application of fuzzy set theory began in 1975, when E. Mamdani and S. Assilian built the first fuzzy controller to control a simple steam engine. The presence of pairs of opposites in natural languages of all kinds allowed Ch. Osgood to suggest the idea of the semantic differential and a multidimensional psychosemantic space onto which knowledge is projected. D. A. Pospelov published a work [13] that laid the foundations of applied psychosemantics and justified the rejection of the classical principles of strict distinguishability and membership underlying classical set theory and formal systems; he proposed the idea of a mathematical theory of graded "similarity - difference", based on fuzzy relations with a variable level of reflexivity and symmetry. This means that for any two elements m1 and m2 of a set M there exists a procedure for constructing the function

$$\mu(m_1, m_2) = \begin{cases} 0, & \text{if } m_1 \text{ and } m_2 \text{ are distinguishable}, \\ v,\ 0 < v < 1, & \text{if } m_1 \text{ and } m_2 \text{ are similar } (v \text{ is the coefficient of confidence}), \\ 1, & \text{if } m_1 \text{ and } m_2 \text{ are the same}, \end{cases} \quad m_1 \in M,\ m_2 \in M.$$

Similarly, there is a characteristic function $\mu(x)$ with the following values:

$$\mu(x) = \begin{cases} 0, & x \notin M, \\ v,\ 0 < v < 1, & x \in M \text{ with degree of belonging } v, \\ 1, & x \in M. \end{cases}$$

The areas of most effective application of modern management techniques are presented in the diagram shown in Figure 3. Information processes play a crucial role in nature and in economic and social life. As the social organism grows more complex, the role of information also grows, and at an ever-increasing rate. The share of the costs of obtaining and processing information in the economy and in public spending increases continuously [10]. Obtaining any information necessarily incurs costs, which, in accordance with the second law of thermodynamics, can never be eliminated; sometimes these costs are so high as to be decisive.



Fig. 3 Areas of the effective application of control technologies

Considering the information system from different perspectives, one can obtain a variety of architectural concepts, each covering particular aspects of the software architecture. Objectives and restrictions can be regarded as elements of the logic of building the software system architecture. Given the conclusions above and the symmetry, in our representation, between objectives and constraints, a decision based on them can be formulated as one of the available alternative solutions. A fuzzy decision itself can be regarded as an imprecisely formulated goal or restriction, and this imprecision accounts for the fuzzy response of the system to unrecorded parameters. This process can be represented by an algorithm. A business process can be considered a functional model of a real process: it represents a functional sequence of actions, i.e. it can be interpreted as a set of interacting subsystems. A business process can also be interpreted as a consumer and, accordingly, a converter of certain resources; in this case business processes can be treated as objects of optimal control. However, because of the large dimension of the resulting model, it is practically impossible to form a business model of an organization that fully accounts for all aspects of the software, to reflect such a business model exactly in the information system architecture, and consequently to implement a reasonably simple control algorithm. To solve these problems it is best to apply the methods of category theory and fuzzy logic, and hence control on the basis of knowledge bases. Consider the evolution of the process of building an information system. We introduce the notation: $q_i \in Q$, $i = 1, \ldots, n$ - a finite number of states of the information system;



$Q$ - the set of possible states of the information system; $w_i$ - the process of building the information system at each given stage; $W$ - the set of possible processes of construction of the information system. Given the above, the transition of the information system from one state to another can be represented by the mapping

$$F: Q \times W \to Q, \qquad (3)$$

i.e. $f(q_i, w_i)$ identifies the subsequent state of the information system after the implementation of construction stage $w_i$ and can be represented by the formula:

qi1  f( qii , w ), i 1,..., n. (4) That is, we have the recurrence formula and the set of states of the system, which forms a class of objects, for each pair of objects qi and q j given a set of morphisms H o m( q , q )ij, for each pair (morphisms), for example, gq Hom(,) q i q j and fq Hom(,) q j q k determined their composition, i.e. states of the system form a category of sets. Summarizing the results, we can introduce the following notation: n - number of objectives, m - number of restrictions, The solution is the intersection of all objectives and restrictions DGGCC11 ... nm   ...  . Membership function for a given set of solutions can be represented by the formula ...... D GGCC11nm

There are rules-constraints $C_i = \{c_{ij} : j \in I\}$, represented as fuzzy sets on $W$, with membership functions $\mu_{c_i}(w)$. The result of accumulation - combining the results of applying the interaction rules - can be represented graphically, as illustrated, for example, in Figure 4.



Fig. 4 Accumulation of rules

The coefficients of the relative importance of objectives and constraints satisfy

$$\alpha_i \in (0,1), \quad \beta_j \in (0,1), \quad \sum_{i=1}^{n} \alpha_i + \sum_{j=1}^{m} \beta_j = 1. \qquad (5)$$

The membership function can then generally be represented by the formula

$$\mu_D = \mu_{G_1}^{\alpha_1} \wedge \ldots \wedge \mu_{G_n}^{\alpha_n} \wedge \mu_{C_1}^{\beta_1} \wedge \ldots \wedge \mu_{C_m}^{\beta_m}. \qquad (6)$$

Obviously, the smaller the importance coefficient of a particular objective or constraint, the more blurred the corresponding fuzzy set becomes, and hence the smaller its role in the decision [15]. If a fuzzy set is defined as a point in the unit cube, a fuzzy system can be interpreted as a mapping between cubes [2, 8]. In general, introducing the notation: $I$ - a family of fuzzy sets, $S$ - a fuzzy system, $I^n = I_1 \times I_2 \times \ldots \times I_n$ - the family of fuzzy sets of the domain, $I^m = I_1 \times I_2 \times \ldots \times I_m$ - the family of fuzzy sets of the range, we can say that there is a mapping of families of fuzzy sets $S: I^n \to I^m$. For fuzzy inference, a fuzzy relation $R$ between the domain $I^n$ and the range $I^m$ can be used, defined on their direct product:

$$R = \bigcup_{i=1}^{n} \bigcup_{j=1}^{m} \mu_R(I_i, I_j), \qquad (7)$$

where $\mu_R(I_i, I_j)$ is the membership function of the pair $(I_i, I_j)$ in the fuzzy relation $R$. We can use the inference rule of traditional logic, modus ponens, generalized for fuzzy concepts [14]: for $A, A' \in I^n$ and $B, B' \in I^m$, one of the following ways of formalizing the correspondence may be applied:

$$\mu_R(A \to B) = \max[\min(\mu_A(I^n), \mu_B(I^m)),\ 1 - \mu_A(I^n)] \qquad (8)$$

or

$$\mu_R(A \to B) = \min[\mu_A(I^n), \mu_B(I^m)]. \qquad (9)$$

Given the general relation $R$, the compositional rule of inference $B' = A' \circ R$ can be used to obtain the result, where the value $B'$ is calculated by the maximin operation. Given (8) and (9), we have the opportunity to obtain different conclusions, i.e.
to select a specific synthesis for a particular domain; for example,

$$B' = A' \circ R = A' \circ [\mu_A(I^n) \to \mu_B(I^m)] \qquad (10)$$

with the implication taken according to (8), or

$$B' = A' \circ R = A' \circ [\mu_A(I^n) \to \mu_B(I^m)] \qquad (11)$$

with the implication taken according to (9). Using the research presented in [14] and the information approach of A. A. Denisov [6], the uncertainty of an already constructed fragment of the formal system, which has the probability distribution $p_1, p_2, \ldots, p_n$, can be estimated by the value

$$H_1 = -\sum_{i=1}^{n} p_i \log p_i. \qquad (12)$$
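On finite universes, the maximin compositional rule of inference $B' = A' \circ R$ reduces to a max of pairwise minima. The fuzzy input and the fuzzy relation below are invented for illustration.

```python
# Maximin composition B'(y) = max over x of min(A'(x), R(x, y)) on finite
# universes. The fuzzy input A' and the relation R are illustrative values.

def compose(a_prime, R):
    """a_prime: memberships over X; R: |X| x |Y| matrix of memberships.
    Returns the inferred fuzzy set B' over Y."""
    n_y = len(R[0])
    return [max(min(a_prime[i], R[i][j]) for i in range(len(a_prime)))
            for j in range(n_y)]

A_prime = [0.2, 1.0, 0.3]          # fuzzy input over X = {x1, x2, x3}
R = [[0.8, 0.1],                   # mu_R(x1, y1), mu_R(x1, y2)
     [0.5, 0.9],                   # mu_R(x2, y1), mu_R(x2, y2)
     [0.0, 0.6]]                   # mu_R(x3, y1), mu_R(x3, y2)
B_prime = compose(A_prime, R)
# B'(y1) = max(min(0.2, 0.8), min(1.0, 0.5), min(0.3, 0.0)) = 0.5
# B'(y2) = max(min(0.2, 0.1), min(1.0, 0.9), min(0.3, 0.6)) = 0.9
```

The relation matrix itself would be built from the rule base via an implication such as (8) or (9); here it is given directly to keep the sketch short.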

There is also uncertainty $H_2$ in the set of subprograms whose behaviour is not yet known. Considering the process of constructing the system as a finite number of steps $l \in \{0, \ldots, n\}$, we can estimate the total uncertainty $H = H_1 + H_2$:



H100 H 2  H 0 when l  () start of planning phase  H H1  H 2  H 0 when l (0, n ) ( intermediate state ) (13)  HH120  0whenl  n ( the system is designed ) Obviously, in the beginning of the design, when uncertainty l = 0 is maximum and is equal to H0 . When the formal system fully designed, the description of all its calculations contained in the formal model, the uncertainty is minimal or zero. In each intermediate stage of construction of the uncertainty of the formal system decreased from baseline at HHH0() 1 2 , i.e. the value of the knowledge gained from the study. Thus we can say that the initial uncertainty of the physical system coincides with the uncertainty of its formal model. As the basic models for the solution of the above tasks at different times of the proposed model by John A. Zachman, Robert Barker, D.Henderson, U.Melling, A.Sheer. However, in these approaches do not offer specific mathematical apparatus, which would be applicable to the analysis of the model, as well as to search for the best solutions in the space of possible alternatives [9.11]. One of the first models designed to link the information system architecture with real organization architecture based on their mutual influence was the model by D.Henderson, under an information platform which is meant the sum of adequate computer technology and methods. Under the information architecture and infrastructure is understood the totality of certain architectural components and products selected for the implementation of the basic information organization platform used for the deployment of new information technologies. Because of its conceptual and abstract nature, this model is not widespread among the practical information systems developers. Much more popular is the model proposed by John. Zachmann, which successfully combined the simplicity and conceptually powerful idea of a common information system architecture, the views of users and developers. 
This model prescribes an initial study of the most important substantive aspects of the organization and the formalization of their representation in a graphical notation understood by all actors in the development process, prior to the start of construction of the information system itself.

CONCLUSION
- the complexity of an information system is characterized by the level of complexity of its objects and the levels of relations between them;
- complexity also depends on internal information: the complexity of an information system depends on the number of objects and the relations established at various levels;
- the information system can be divided into data objects (functional modules), all their relations with each other can be described, and the functional stability of the system's behaviour can be declared;
- it is advisable to implement the replenishment and storage of data as a set of multidimensional OLAP cubes, with the repository architecture based on a multidimensional database; at the same time, the processing of the data warehouse and its replenishment with new data should be independent and carried out using OLTP technology.

References
1. Antonov V.V., Kulikov G.G., Antonov D.V. Formalization of the domain with tools supporting standards [Text] // Vestnik USATU: scientific journal of Ufa State Aviation Technical University. 2012. Vol. 16, No. 3 (48). Pp. 42-52.



2. Antonov V.V., Kulikov G.G., Antonov D.V. Theoretical and practical aspects of information systems models [Text]. LAP LAMBERT Academic Publishing GmbH & Co. KG, Germany, 2011. 134 p.
3. Bellman R., Zadeh L. Decision making in vague terms // Collection of scientific works. Moscow: Mir, 1976. Pp. 172-215.
4. Burkov V.N., Novikov D.A. Theory of active systems [Text]. Moscow: ICS RAS, 2003. 161 p.
5. Vasilyev V.I., Ilyasov B.G. Intellectual control systems. Theory and practice: a training manual [Text]. Moscow: Radio Engineering, 2009. 392 p.
6. Volkova V.N., Denisov A.A. Fundamentals of the theory of systems and system analysis [Text]. St. Petersburg: SPbGTU, 2009. 512 p.
7. Glushkov V.M. Fundamentals of paperless informatics [Text]. Moscow: Nauka, 1987. 552 p.
8. Zabotnov M.S. Methods of presenting information in sparse data hypercubes [electronic resource]. Available: www.olap.ru (accessed 11.06.2014).
9. Kulikov G.G., Breykin T.V., Arkov V.Y. Intellectual information systems [Text]. Ufa: USATU, 1999. 129 p.
10. Kulikov G.G., Antonov V.V., Savin A.A. The knowledge-based information-analytical system for registration of the population [Text] // Vestnik USATU: scientific journal of Ufa State Aviation Technical University. 2008. Vol. 10, No. 2 (27). Pp. 60-67.
11. Kul'ba V.V., Kovalevsky S.S., Shelkov A.B. The reliability and security of information in automated control systems [Text]. Moscow: SINTEG, 2003. 500 p.
12. Lukasiewicz J. Aristotelian syllogistic from the standpoint of modern formal logic [Text]. Moscow: Publishing House of Foreign Literature, 1959. 312 p.
13. Pospelov D.A. Fantasy or science: on the way to artificial intelligence [Text]. Moscow: Nauka, 1982. 224 p.
14. Ryzhov A.P. Elements of the theory of fuzzy sets and its applications [Text]. Moscow: Dialog, 2003.
15. Shtovba S.D. Introduction to the theory of fuzzy sets and fuzzy logic [electronic resource]. Available: http://matlab.exponenta.ru/fuzzylogic/book1/14.php (accessed 20.06.2010).
16. Mamdani E.H., Assilian S. An experiment in linguistic synthesis with a fuzzy logic controller // International Journal of Man-Machine Studies. 1975. Vol. 7, No. 1.



Analysis of Effectiveness of Cross-Platform Software of the Information Environment (on the Example of the University and Enterprise)

Fakhrullina Almira R.1 1Automated and management control systems, USATU, K. Marx St., 12, Ufa, Russian Federation, [email protected]

Keywords: software infrastructure, distributed data processing, electronic document management system, educational and work environment, BPM system, content of the subject domain

Abstract: The article describes the principles of construction and use of formal models for the management of business processes and the content of the subject domain, used to organize the information interaction of different groups of users in educational and industrial environments. The development and widespread use of information and communication technologies is a global trend of world development and of the scientific and technological revolution of recent decades, absorbing considerable economic resources. Using the accounting of documents between a university and an enterprise as an example, the article shows the effectiveness of a software infrastructure for distributed data processing achieved by automating the interaction between the university and businesses.

1. INTRODUCTION The preparation of highly qualified specialists for enterprises is a requirement of our time and society. For the quality of higher education to meet the demands and needs of employers, the learning process must first of all include training in a real production environment of the enterprise with the use of modern information and communication technologies (ICT). Modern ICT make it possible to organize distributed data processing in a single information space between the university and the enterprise, accelerating documentation processes and approval interactions by means of electronic document management systems (DMS). 2. MODEL OF ACCOUNTING DOCUMENTS IN THE EDUCATIONAL-PRODUCTION ENVIRONMENT For modeling business processes, a BPM system is used, represented as a platform on which the structural divisions of the organization interact. The platform responds rapidly to changes in business strategy thanks to the flexibility of the design, execution and control of business processes. Graphical tools are also used for modeling business processes, for example the Horus Business Modeler system (hereinafter Horus), which supports territorially remote access via Web 2.0 technologies. Horus is a professional tool for the modeling and simulation of business processes. Its base consists of a repository built on a database management system (Oracle, MySQL, etc.), an object-relational system supporting several technologies that implement the object-oriented approach, i.e. providing distributed data processing management and the creation and use of databases [1]. The model is stored in the database in a structured manner; formal elements may be associated with any kind of multimedia document. The functional range includes reports that give a clear view of the contents of the model (Figure 1).
Without the introduction of a DMS into the process of interaction between the university and the enterprise, about 12 hours per year are spent on the registration of incoming documents, their consideration takes 14.4 hours per year, the formation of the response documents for the year

takes 9 hours, and 14.4 hours per year are spent on the registration of outgoing documents. Even more time, about 72 hours, is spent on the dispatch of outgoing documents. The number of approval processes between the university and the enterprise keeps growing, and they must be given timely information support. At present the interaction between the university and the enterprise has a number of major shortcomings, such as [2]: reduced performance of functions; long document processing and execution times; a low level of confidentiality; reduced corporate culture; slow adaptation to market changes and international standards; problems with the preservation of documents; geographical remoteness; and others.

Fig. 1 Process model of accounting documents in educational and industrial environments, built in Horus (without DMS)

Given all these disadvantages, the problem arises of increasing the efficiency of interaction between the university and the company. It can be solved by forming a software infrastructure for distributed data processing and by carrying out a comprehensive analysis of the educational and working environment data formed in the interaction of the university and the company. To make this interaction effective, it is proposed to introduce and adapt a DMS, for example WSSDocs (Figure 2).



Fig. 2 Process model of accounting documents in educational and industrial environments, built in Horus (with DMS)

With the implementation of the DMS, the time spent on document accounting in the educational and industrial environments is greatly reduced; the total time spent on accounting records per year is shown in Figure 3.

[Figure 3 is a bar chart of the time spent on document accounting in the educational and industrial environments: 121.5 hours per year without a DMS versus 28.6 hours per year with a DMS.]

Fig. 3 Diagram of time spent on accounting records for the year

3. CONCLUSION In summary, a software infrastructure for distributed data processing was proposed, using the process of accounting documents in educational and industrial environments as an example, and a model of interaction between the university and the company was developed that allows the content of the educational and industrial environments to be formalized and structured. With the implementation of the DMS, the time spent on document accounting in the educational and industrial environments is greatly reduced: the registration of incoming documents takes 7.2 hours a year, the review of registered documentation takes 10.8 hours, the formation of response documents takes 4.2 hours per year, and the registration of outgoing documents takes 6.3 hours per year. The time spent per year on the dispatch of outgoing documents is reduced by the DMS by several hundred


times. The difference between the time spent on accounting documentation in the educational and industrial environments with and without the electronic document management system can also be seen clearly: this time was reduced by more than a factor of 4.

References
1. Key features of Oracle [Online]. Available: http://bourabai.ru/dbt/oracle.pdf
2. Electronic document: advantages and disadvantages [Online]. Available: http://www.klerk.ru/bezbumag/395047/



Formal Domain Model Given the Vague Descriptions of the Object Model

Antonov Vyacheslav V., Suvorova Veronica A., Solyeva Anastasiya V.1 1Automated and management control systems, USATU, K. Marx St., 12, Ufa, Russian Federation, [email protected]

Keywords: Subject domain, semantic model, process monitoring.

Abstract: The article considers the problems of constructing a mathematical model of a subject domain using methods that take into account the fuzziness of the descriptions of the model of the object under study. The possibility of representing business processes as a set of interacting, semantically defined objects is considered.

As a result of universal informatization, many management functions have been transferred to complex information systems that use biological and computer technologies of information handling. Information handling in such systems has become an independent scientific and technical direction. These systems typically contain technological sections with automatic, automated and intellectual control. The construction of an information system, i.e. its transition from one state to another, occurs according to certain laws; however, the laws of transition cannot always be set precisely, so the system's behaviour is only partially determined by laws [1]. The law of integrity manifests itself at each state of the system: at each state, new properties can arise that cannot be derived as the sum of the properties of the elements. Translating the conditions of a practical task into the language of mathematical models has always been difficult and frequently resulted in the loss of hard-to-formalize qualitative information. The process begins with identifying the objects of the subject domain and revealing the connections between them. Let us consider the process of choosing objects in more detail. In general, a choice function can be presented as the set of alternative objects chosen under some condition, which in turn can be represented as a set of data on the state of an object and a set of choice rules. Let a set of data describing objects be given, let $Z$ be the set of objects of the subject domain, and let $z_i \in Z$ be an object from this set. Obviously, the part of the data describing an object can be presented as the set of its information characteristics $x_i = \langle A_i, D_i \rangle$, where $A_i$ is a nonempty set of names of properties (attributes) of the object with number

"$i$", $D_i$ is the set of values of the corresponding attributes, and $x_i$ is the set of information characteristics of the object with number "$i$". A dictionary of the elements of allowable values, subdivided into classes, can be compiled, which makes it possible to present the subject domain as a hierarchical structure. The values are broken into classes of objects which cooperate with each other on the basis of rules. Let $\Phi$ be the set of choice rules; then the condition for choosing an object [2] from the set of alternatives can be represented as a pair $y = \langle x, \Phi \rangle$. On the set of attributes, relations $G = G^{q} \cup G^{l}$ can be established, which divide into quantitative $G^{q}$ and qualitative $G^{l}$, and a set of choice types can be defined, for example $T$ = {"conformity", "equivalence", "preference"}. Then any choice rule can be represented by a tuple $\varphi = \langle T, G \rangle$. Thus, the set of information characteristics of the object, the established relations, and the rules for establishing relations constitute the characteristic of the object


Innovative Technologies and Methods of Economical and Social Data Processing

$z_i = \langle x_i, A_i, D_i, G, T \rangle$, $i = 1, \ldots, N$. In general, the characteristic of each object $x_i$ can be described by a corresponding linguistic variable $\langle A_j, T_j, D_j \rangle$, where $T_j = \{T_j^1, \ldots, T_j^{m_j}\}$ is the term set of the linguistic variable $A_j$ (the set of linguistic values of the attribute), $m_j$ is the number of values of the attribute, and $D_j$ (the subject scale) is the base set of the attribute $A_j$. To describe the terms $T_j^k$, $k = 1, \ldots, m_j$, corresponding to the values

of the attribute $A_j$, fuzzy variables $\langle T_j^k, D_j, C_j^k \rangle$ can be used, i.e. the value $T_j^k$ is described by the fuzzy set

$C_j^k = \{\mu_{C_j^k}(d) \mid d \in D_j\}$, $k = 1, \ldots, m_j$. As a result, the fuzzy characteristic of the object $x_i$ can be taken to be the

fuzzy set of the second level $\tilde{x}_i = \{\mu_{\tilde{x}_i}(a_j) \mid a_j \in A_i\}$, where $\mu_{\tilde{x}_i}(a_j) = \{\mu_{x_i}(T_j^k) \mid T_j^k \in T_j,\ k = 1, \ldots, m_j\}$. Proceeding from the above, the subject domain can be presented as a multilevel environment consisting of a set of elements of the subject domain, a set of functions and methods operating on these elements, and a set of properties of elements and relations between elements, i.e. as an ontology that includes a description of the properties of the subject domain and the interaction of its objects in some formal language having logical semantics. If the system is complex and the number of factors is large, taking all of its characteristics (components) into account leads to extreme complexity. Therefore only a limited number of components should enter the model explicitly, while the influence of the remaining components, not entered into the model, is taken into account as a fuzzy reaction of the model to one or another choice of alternative. Obviously, an algebraic comparison of components is impossible and can only be carried out by methods of fuzzy logic. Thus, the finite set of objects $Z = \{z_1, \ldots, z_n\}$ of the subject domain can be used as the set of objects for clustering. This set is described by a finite set of attributes, each of which quantitatively represents some property or characteristic of the elements of the subject domain under consideration. Relying on the tools of fuzzy logic, we may assert that for each object all attribute values are measured on some quantitative scale, i.e. each object is put into correspondence with a vector whose coordinates are the quantitative values of the corresponding attributes. Let us characterize the objects $Z = \{z_1, \ldots, z_n\}$ subject to clustering by attribute vectors $y^i = (y_1^i, \ldots, y_{p_i}^i)$, where $p_i$ is the number of attributes describing the object with number "$i$". Let $F = \{F_1, \ldots, F_k\}$ be the set of fuzzy clusters of the subject domain and $\{(\mu_1^1, \ldots, \mu_k^1), \ldots, (\mu_1^n, \ldots, \mu_k^n)\}$ the set of membership functions of the objects in the fuzzy clusters.
Then the sum of the degrees of membership in all clusters for each object $i$ satisfies the condition $\sum_{j=1}^{k} \mu_j^i = 1$. We introduce an additional restriction: the fuzzy clusters form a fuzzy covering of the set of objects if and only if for each cluster $j$ the condition $\sum_{i=1}^{n} \mu_j^i \geq 1$ holds. After assigning membership degrees satisfying the specified restrictions, we can calculate the coordinates of the cluster centres by the formula


1st International DSPTech Conference on Technologies of Digital Signal Processing and Storing

$$v^j = \frac{\sum_{i=1}^{n} (\mu_j^i)^2 \, y^i}{\sum_{i=1}^{n} (\mu_j^i)^2}, \qquad j = 1, \ldots, k.$$
The task is reduced to the minimization of the criterion function

$$J(Z) = \sum_{j=1}^{k} \sum_{i=1}^{n} \sum_{m=1}^{p_i} (\mu_j^i)^2 \, (y_m^i - v_m^j)^2.$$
Suppose that in the course of clustering a fuzzy covering has been constructed. When the next object is considered for conformity with each of the clusters $F_1, \ldots, F_k$ and the corresponding membership functions are changed, the occurrence of the situation $\sum_{j=1}^{k} \mu_j^{n+1} < 1$ triggers a decision to create a new fuzzy cluster. Understanding an alternative decision as a variant of choice that takes a component into account, let us denote by $X$ the set of alternative decisions. The choice of the most preferable decision in each concrete case can be carried out on a set of complex criteria with normalization of its components. On the basis of the set $X$ we generate the set of ordered pairs $E = X \times X$ of alternative decisions. Let $(x, y)$, where $x \in X$, $y \in X$, denote a pair of alternative decisions, and let $\mu(x, y) \in [0, 1]$ be the membership function of the fuzzy relation of preference of decision $x$ over decision $y$. The fuzzy preference relation can then be defined by the formula $P = \{\langle E, \mu(x, y) \rangle\}$. Obviously, for every $P$ there is an inverse fuzzy relation $P^{-1} = \{\langle E, \mu(y, x) \rangle\}$. We define the degree of superiority of decision $x$ over decision $y$ as $\eta(x, y) = \mu(x, y) - \mu(y, x)$. Obviously, $\eta(x, y) = -\eta(y, x)$. Then the set of non-dominated decisions can be determined as $X^{nd} = \{x \mid \eta(x, y) \geq 0,\ \forall y \in X\}$. Taking into account that we are interested only in the set of strictly non-dominated decisions, excluding the equivalence relation $\eta(x, y) = 0$ and using the proofs given in [2], the problem is reduced to a choice of alternatives on the basis of $\alpha$-level preference relations, and we can introduce a single fuzzy preference relation equal to $\sum_{i=1}^{m} \omega_i \mu_i(x, y)$, where $\omega_i$ are the weight factors of importance of the fuzzy preference relations.
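The clustering step described above (cluster centres weighted by squared memberships, and the criterion $J(Z)$) can be sketched in Python. This is a minimal illustration under the text's assumptions (membership exponent 2); the data, with two objects and two clusters, is hypothetical:

```python
def cluster_centres(Y, U):
    """v^j = sum_i (mu_j^i)^2 * y^i / sum_i (mu_j^i)^2.
    Y: list of n attribute vectors; U: n x k membership matrix (rows sum to 1)."""
    n, k, p = len(Y), len(U[0]), len(Y[0])
    centres = []
    for j in range(k):
        w = [U[i][j] ** 2 for i in range(n)]  # squared membership weights
        total = sum(w)
        centres.append([sum(w[i] * Y[i][m] for i in range(n)) / total
                        for m in range(p)])
    return centres

def criterion(Y, U, V):
    """J(Z) = sum_j sum_i sum_m (mu_j^i)^2 * (y_m^i - v_m^j)^2."""
    return sum(U[i][j] ** 2 * (Y[i][m] - V[j][m]) ** 2
               for i in range(len(Y))
               for j in range(len(V))
               for m in range(len(Y[0])))

# Hypothetical data: two objects with two attributes, two fuzzy clusters.
Y = [[0.0, 0.0], [4.0, 4.0]]
U = [[0.9, 0.1], [0.1, 0.9]]   # each row sums to 1, as the text requires
V = cluster_centres(Y, U)
J = criterion(Y, U, V)
```

Iterating between recomputing centres and memberships in this manner is the fuzzy c-means scheme that the formulas above correspond to.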
Thus the problem is reduced to a multi-criteria decision-making task [3] with membership functions as the efficiency criteria, and finally to the calculation of a single function by the formal algorithm for solving $\alpha$-level decision-making models: $F(\alpha) = \{(x, y) \mid \sum_{i=1}^{m} \omega_i \mu_i(x, y) \geq \alpha_{act}\}$. The model covering the information system can be represented as a metabase which contains information on each kind of accounted object. On the other hand, the information system can be represented as a functional system, i.e. as a set of functions. Thus, the goals and restrictions can be given as fuzzy sets, and the interrelation between them can be determined by a relation on the Cartesian product [4]. Treating the goals and restrictions as symmetric elements of the logical scheme, we can generate a decision on their basis quite simply. A business process, considered as a functional model of real processes, represents a structured description of a given sequence of executed business operations, that is, a horizontal hierarchy of internal and mutually dependent functional actions. Thus a business process, considered as a set of consecutively executed chains of operations, can be treated as a set of cooperating subsystems, i.e. as a discrete dynamic system changing in space and time. Modelling of an information system has a common


philosophical basis. The philosophical concept most essential to modelling is the subject domain, which can be defined as a mentally delimited area of reality, or of ideal representations, subject to modelling, consisting of objects that stand in certain relations to one another and have different properties. Defining a subject domain as a part of the real world, or as a set of classes of real objects subject to modelling, assumes that the modelling reflects it for the purpose of study from a certain point of view. This point of view itself enters into the concept of the subject domain. Therefore the majority of researchers accept that the concept of a subject domain cannot be formalized and must be treated as a primitive concept. When investigating a subject domain, a fair quantity of information of a subjective character can be obtained. Its representation in natural language contains fuzziness or uncertainty which has no analogues in the language of traditional mathematics. When defining the concept of a subject domain, the following methodological aspect must be taken into account: considering the subject domain as a part of reality is the ontological view, while considering it as knowledge about this reality is the gnosiological (epistemological) view. As a result we have two different classes of models, and the task of finding the correspondence between them: the model of reality and the model of knowledge about this reality. The basic problem is the impossibility of normalizing the process of modelling a subject domain, which makes it impossible to apply mathematical methods for analysing the properties of subject-domain models, such as functional completeness and integrity.
All this raises the question of considering the task of modelling a subject domain from the standpoint of methods that take into account the fuzziness or uncertainty of the descriptions of the model of the investigated object. It is meaningful to speak about a subject domain if it has a certain semantic localization, for example in space and time, or a functional one. Then the construction of a semantic model is reduced to the formalization of logical relations. There are various decision-making methods, representing various orderings of the considered decisions based on the same expert appraisals. One of the basic difficulties in developing a model of a subject domain is that the number of possible variants of formalizing the subject domain is indefinite. The model should adequately represent any such subset (variant of formalization of the subject domain), and the modelling process can follow any scheme that allows the value of any object of the subject domain to be determined by carrying out a certain sequence of actions. One such scheme is the method of syntax-directed translation based on the works of N. Chomsky. From Chomsky's hypothesis [5] it follows that semantic analysis can be reduced to syntactic analysis and consists of two steps: recognition of structure and construction of target actions on the basis of this structure. Thus, the mathematical approach [2] allows us to restrict ourselves to a set of strings which can be defined in some exact way. That is, we can speak about some formal language given as a set. For its construction it is necessary to have an algorithm which, for a given grammar, builds a derivation produced by this grammar. According to [2], such an algorithm does not exist for an arbitrary Chomsky grammar. Several ways of solving this problem are possible, consisting either in developing a recognition algorithm for each special case, or in imposing restrictions on the rules of the grammar, i.e. singling out subclasses of grammars for which the algorithm exists.
By identifying objects in a subject domain we obtain that each object is defined by a finite set of attributes, each attribute has a set of allowable values, and we have an attribute translation grammar according to which the rules for computing attributes are defined and the algorithm of an attributed derivation tree can be constructed. D. Knuth formalized similar ideas by introducing the concept of "attributed translation" [2], and any function defined on a derivation tree can be represented as an attribute of a certain node. A subject domain can be decomposed into elementary objects, each of which is described by a set of attributes. The domain objects are linked by certain relationships that can collectively be represented as weighted edges of a partially directed graph. Instead of graphs, the language of set theory and lattices of their partitions can be used to represent the structure of the subject area. Each tuple of a database is the description of a state of some elementary object. The subset of all tuples similar to the given tuple with respect to a chosen measure of similarity is the representation of an elementary object. Suppose some object is singled out and designated by a term. As a rule, the new object is compared to already known objects, and its information model is formed as a set of comparisons of its information model with those of


previously known objects. Thus the model of a new subject domain for the given object will be constructed on the basis of the subject domain of the object which became known first. As a result, knowledge of a subject domain including the given object will be structured as the set of properties of the first identified object plus a sequence of changes of the subject domains of the subsequent objects. It turns out that earlier identified objects end up in a more privileged position relative to subsequent objects of the subject domain, since the models of the latter are designed by means of changing the models of already known objects. At the same time, the sequence in which objects are chosen is a matter of chance. When attempting to choose the object which fits best as the first one, it turns out that it is most convenient to use the image of some idealized, average object of this subject domain, whose model is replaced by a set of variables describing the objects of the subject domain. The choice of a set of variables for describing the objects of a subject domain, and the choice of allowable values for these variables, is largely arbitrary. However, this choice will later determine the borders of applicability of the model. Let $W = \{w_1, \ldots, w_n\}$ be the investigated subject domain of objects. Models of reasoning frequently involve fuzzy concepts; however, any information written down in a formalized form and stored in computer memory is precise. Therefore the fuzziness of knowledge or relations must be determined by the semantics of the information. The properties not included in the identified subject domains can be considered as a separate subject domain with special properties: an environment. Thus, any subject domain $W$ can be considered as singled out, since it cooperates with its environment.
According to the above, $w_{i+1} = F_i^W(w_i)$, where $F_i^W(w_i)$ is not a function in the usual sense but only determines the possible states of the subject domain of one object on the basis of its difference from another; i.e. in this case we are dealing with possible fuzzy connections between meta-objects. The model of a subject domain is determined by means of a representation function and a family of modelling functions. Let $S$ be the model of the subject domain of objects and $F^M : W \to S$. A model can be put into correspondence with each object of the subject domain on the basis of the modelling function $F^M$: $r_i = F_i^M(w_i)$. Thus to each object $w_i \in W$ there corresponds an $r_i \in S$. Then there should be a function $F_i^R(r_i)$ which uniquely determines $r_{i+1}$ from $r_i$, i.e. $r_{i+1} = F_i^R(r_i)$. Obviously, $r_{i+1}$ can be determined along the chain $w_{i+1} = F_i^W(w_i)$ and $r_{i+1} = F_{i+1}^M(w_{i+1})$, whence $r_{i+1} = F_{i+1}^M(F_i^W(w_i))$. On the other hand, there exists at least one more chain: $r_i = F_i^M(w_i)$ and $r_{i+1} = F_i^R(r_i)$, which after substitution gives $r_{i+1} = F_i^R(F_i^M(w_i))$. Comparing these results, we can conclude that regardless of whether an operation is first executed in the subject domain and then mapped to the model, or the mapping to the model is made first and then the corresponding operation is executed in the model of the subject domain, the result is identical. Hence, for each fixed value of $i$ we obtain a homomorphism with respect to $F_i^W$ and $F_i^R$. The table of variables used for the description of the objects of a subject domain usually exists in an implicit form as the conventional set of characteristics of the objects of the subject domain. It does not correspond to any concrete object of the subject domain, but only to the subject domain as a whole. To obtain a model of a concrete object from it, this table must be filled in with concrete values of the variables.
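The commutativity argument above (operate in the domain and then model, versus model first and then operate on the model) can be checked on a toy example. The specific functions below are illustrative assumptions, not taken from the paper:

```python
# Toy subject domain: objects and models are integers.
FW = lambda w: w + 1   # operation in the subject domain, w_{i+1} = F^W(w_i)
FM = lambda w: 2 * w   # modelling function F^M : W -> S
FR = lambda r: r + 2   # induced operation on models, r_{i+1} = F^R(r_i)

w = 5
# Path 1: operate in the domain, then model: r' = F^M(F^W(w))
r_via_domain = FM(FW(w))
# Path 2: model first, then operate on the model: r' = F^R(F^M(w))
r_via_model = FR(FM(w))
# Both paths agree for every w, so F^M behaves as a homomorphism
# with respect to these two operations.
```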
The description of a subject domain on the basis of the table of variables can be imagined as an $n$-dimensional cube, each dimension of which corresponds to one of the variables. The redundancy of this cube is obvious, since not all objects that can be described with the chosen set of variables actually exist. In the same way it is possible to describe the restrictions imposed on these objects. All this can be represented by a system of equations: a formalization of the model of a subject domain in which the sets of semantic properties of the subject domains of objects intersect, in connection with their prospective distribution. The model of a subject domain $S$ can be represented as a metabase which contains information on each element of the structure. On the other hand, the subject domain $W$ can also be represented as a set.


Let us introduce a designation: $P$ is the set of properties determined by the connections between the elements of the above-mentioned sets. Then the interrelation between them can be determined by a relation on the

Cartesian product $P \times S \times W$, i.e. a set of tuples $\{\langle p_i, r_i, w_i \rangle : p_i \in P,\ r_i \in S,\ w_i \in W\}$. The membership of an element $z_i = \langle p_i, r_i, w_i \rangle$, where $p_i \in P$, $r_i \in S$, $w_i \in W$, $i = 1, \ldots, n$, in this relation is interpreted as follows: «the object $r_i$ of the subject-domain model contains information on the property $p_i$ of the subject-domain object $w_i$».

The information search for the element $r_i$ of the subject-domain model corresponding to a concrete object $w_i$ of the subject domain is thus reduced to the definition of the relation $R \subseteq S \times W$.

Thus, for any pair $(r_i, w_i) \in R$, $r_i \in S$, $w_i \in W$, $i = 1, \ldots, n$, we can say that $w_i$ is relevant to $r_i$, and the task of determining the relevance of the elements of the sets $S$ and $W$ is reduced to the definition of the relation $R \subseteq S \times W$. Moreover, for any $r_i \in S$, $w_i \in W$, $r_j \in S$, $w_j \in W$, $i, j = 1, \ldots, n$, it is true that if $w_i$ is contained in $w_j$ and $r_i$ in $r_j$ (i.e. all elements of the model object $r_i$ are contained in the model object $r_j$, and all elements of the subject-domain object $w_i$ are contained in the subject-domain object $w_j$) and $(r_i, w_i) \in R$, then $(r_j, w_j) \in R$. Except for the extreme case when the relation $R$ is the entire Cartesian product $S \times W$, the relation includes not all possible tuples from the Cartesian product. This means that for each relation there is a criterion that determines which tuples are included in the relation and which are not. Thus, each relation can be put into correspondence with some logical expression $P$, a predicate of the relation $R$ depending on a certain number of parameters (an $n$-place predicate), which determines whether the tuple $(r_j, w_j)$ belongs to the relation $R$; this is equivalent to the truth of the predicate $P(S, W, R)$. The most evident description of a formal language should be considered syntactic diagrams of the forms of the language, which give a pictorial representation. The choice of an analysis methodology directly depends on the specifics of the subject domain for which the model is created. Constructing a model of a system on the basis of its description using the IDEF0 methodology requires an interpretation of the standard adequate to the purposes of modelling. Thus, the system can be represented as a set of cooperating interconnected functions, which allows the functions to be examined irrespective of the objects that carry them out. All this allows the problems of analysis and design to be separated from the problems of implementation.
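The relation $R \subseteq S \times W$ and its membership predicate can be sketched as follows; the element names are hypothetical:

```python
# Relation R as a set of pairs (model object, domain object),
# with a predicate P deciding membership of a candidate pair.
S = {"r1", "r2"}                    # objects of the subject-domain model
W = {"w1", "w2"}                    # objects of the subject domain
R = {("r1", "w1"), ("r2", "w2")}    # r_i is relevant to w_i

def P(r, w):
    """Predicate of the relation R: true iff the pair (r, w) belongs to R."""
    return (r, w) in R

relevant = P("r1", "w1")       # w1 is relevant to r1
not_relevant = P("r1", "w2")   # R is a proper subset of S x W
```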
The model then represents a set of hierarchically ordered diagrams, each of which describes a certain function and consists of several cooperating, interconnected subordinate functions connected among themselves by both horizontal and vertical ties. This not only allows the structure of the functions to be described, but also their interaction, which gives the set of functions system properties. The system can be represented as a set of processes $E$ which carry out transformations of the elements of the system. We shall consider the model of a process $E_i$ with $N$ inputs, $K$ outputs, $L$ controls and $J$ mechanisms of an abstract business process.

Let $I_i$ be the set $\{i_1, \ldots, i_N\}$ of inputs of the process $E_i$, $O_i$ the set $\{o_1, \ldots, o_K\}$ of outputs of the process $E_i$, $C_i$ the set

cc1,..., L of managements of process Ei , M i - set mm1,..., J  of mechanisms of process Ei , Fi - set

 ff1,..., P of the interconnected functions transformative inputs in outputs of process Ei . Then process can be submitted in the following formal kind EICOMFi i,,,, i i i i . Thus, interactions of process Ei with process E j are exhausted by the following set of relations:



 Output - input: $G^{OI}$ is the set of mappings of outputs $O$ onto inputs $I$.

 Output - control: $G^{OC}$ is the set of mappings of outputs $O$ onto controls $C$.

 Output - mechanism: $G^{OM}$ is the set of mappings of outputs $O$ onto mechanisms $M$.

Let $E$ be the set $\{E_1, \ldots, E_z\}$ of processes of the system, and $G$ the set of relations of the processes $E_1, \ldots, E_z$ determining the constant interrelations and dynamic interactions of the subsystems, defined on the following space of values: $G = \{G^{OI}, G^{OC}, G^{OM}\}$. IDEF provides an additional description of the full hierarchy of the objects of the system by forming a glossary for each diagram of the model and combining these glossaries into a dictionary of pointers $S$, which is the basic repository of the full hierarchy of the objects of the system. Taking the above into account, the model of the system can be represented in the following formal form: $C = \langle E, G^{OI}, G^{OC}, G^{OM}, S \rangle$. For example, the output $k_1$ of process "$i$" is an input for process "$j$":

$v_{k_2}^{j} = G^{OI}(v_{k_1}^{i})$, where $v_{k_1}^{i} \in O_i$ and $v_{k_2}^{j} \in I_j$. A mapping can have a fuzzy character; in this case a set of allowable variants of mappings can be obtained. The whole process of constructing the information system is characterized by a finite number of interactions of business processes, which allows the information system to be represented as a controlled, time-invariant deterministic system with a finite number of states $q_i$ belonging to a given finite set of possible states: $q_i \in Q$, $i = 1, \ldots, n$. The construction process $w_i$ at each given moment of time is an element of the set of construction processes $W$. Then the changes of the information system at the transition from one state to another are described by the mapping function $F : Q \times W \to Q$, i.e. $f(q_i, w_i)$ is the subsequent state of the information system after the execution of the construction process $w_i$: $q_{i+1} = f(q_i, w_i)$, where $i = 1, \ldots, n$. Furthermore, there are rules-restrictions $C_i : c_j \to I_i$ which are fuzzy sets in $W$ with membership functions $\mu_{C_i}(w)$. That is, the goal and the rules of interaction are considered as fuzzy sets in the same space. This symmetry allows no distinction to be made between them when forming the decision; fuzzy concepts are formalized as fuzzy and linguistic variables, and the fuzziness of actions as fuzzy algorithms. Thus, the uncertainty of knowledge at each stage of the construction of the information system consists of two components: the first is determined by the form of the constructed part of the formal model, and the second is the remaining uncertainty of those subsystems of the initial physical system which remain uninvestigated. As a result, the task is reduced to a quantitative estimation of these quantities and a demonstration of their connection with the initial uncertainty of the physical system. Independent business processes can be defined as the circuits of the information objects of the examined model.
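The finite-state view described above ($q_{i+1} = f(q_i, w_i)$ with $f : Q \times W \to Q$) can be sketched in a few lines. The state and process names below are hypothetical:

```python
# States Q, construction processes W, and transition function f: Q x W -> Q
# represented as a lookup table.
transitions = {
    ("designed", "implement"): "implemented",
    ("implemented", "deploy"): "deployed",
}

def f(q, w):
    """q_{i+1} = f(q_i, w_i): next state after construction process w."""
    return transitions[(q, w)]

state = "designed"
for process in ["implement", "deploy"]:
    state = f(state, process)
```

A fuzzy variant would replace the crisp table with membership degrees over the possible next states, as the text suggests for fuzzy mappings.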
Taking into account the construction of the model from objects, each independent business process can be examined as a separate (detached) part of the information system. Any information interaction between complex systems is realized consecutively at the physical, syntactic and semantic levels of interaction. Taking into account that our system is divided into information objects connected by semantic rules of interaction, it is possible to declare the relative completeness of the set of accounted relations between the elements of the system, which determine its behaviour and are the subject of the analysis of functional reliability. Thus, the relations between cooperating business processes can be classified on the basis of the mathematical rules of precise and fuzzy logic. The concept of constructing such a system actually reflects the modern strategy of so-called CALS technologies and can be considered a tool for improving efficiency and quality, since it fully complies with the spirit and principles of the international standards of the ISO 9000 series. Taking the above into account, when implementing process monitoring on the basis of the formalization of the


usual technologies, it is obviously possible to represent business processes as a set of cooperating, vertically and horizontally semantically defined and formalized objects:
 horizontally: independent business processes corresponding to the organizational parts of the divisions;
 vertically: business processes which cooperate by semantic rules according to their functional assignment.
The result is a hierarchical structure with a conditional division into "objects", each of which allows a differentiated approach:
 the list of organizational parts and their hierarchy is fixed;
 the list of functions carried out in the organization and their hierarchical interrelations (functional models) is determined;
 it is determined how functions are assigned to organizational parts, i.e. «who is responsible for what».
The list of all the functions of an organizational part gives a statement about it, traditionally called the Regulations of the division. Inside each "object", upon the introduction of automation, business processes are easily transformed into end-to-end ones, and all the levels are combined into a uniform functional system. The basis of this system is a set of integrated information models covering the life cycle of the system and the business processes carried out in its course. The uniform integrated model describes the object fully enough and can be used as a uniform information source for any processes carried out during the life cycle. System information support is carried out in the integrated information environment defined as a set of distributed databases.

CONCLUSION
1. The proposed method allows a model of a subject domain to be built stage by stage, by identifying the objects of the subject domain and establishing connections on the basis of semantically defined attributes, reducing the uncertainty of knowledge at each stage.
2. The analysis is performed at the top level of abstraction, where independent business processes of semantic blocks are defined as the information objects of the examined model, which makes it possible to use separate methods and algorithms in the development and research of a broad range of information tasks.

References
1. Zadeh L. A. The foundations for a new approach to the analysis of complex systems and decision processes. In: Mathematics Today. Moscow: Znanie, 1974. P. 5-49.
2. Aizerman M. A., Aleskerov F. T. Choice of Options: Fundamentals of the Theory. Moscow: Nauka, 1990. 240 p.
3. Pogonin V. A. Model of supervisory control of robots // Internet Journal of Information Processes and Management, 2006, No. 1. P. 45-57.
4. Beniaminov E. M. Algebraic Methods in Database Theory and Knowledge Representation. Moscow: Scientific World, 2003. 184 p.
5. Chomsky N. Language and problems of knowledge // Bulletin of Moscow State University, 1996. Vol. 6. P. 157-185.



Production Allocation System for Oil and Gas Companies

Sergey Abramov Tieto Oil and Gas, Perth, Australia [email protected]

Keywords: Production allocation, oil and gas, metering, reconciliation, data validation

Abstract: This paper describes the general principles of hydrocarbon accounting, which is an important aspect of the oil and gas business. The production allocation process is a substantial part of hydrocarbon accounting, as it ensures the data being used is consistent and reliable. Production data contain imbalances for various reasons, so the incoming flows should be reconciled against outgoing flows metered with high confidence. It is proposed to implement an information system to support the execution of the production allocation process. A production allocation system should meet the following requirements: configure, store and update the allocation network and complex allocation rules; provide integration with low-level production systems; support daily and monthly allocation process runs on a schedule; and provide information for financial and reporting systems.

1. INTRODUCTION

The complex nature of the oil and gas business requires the implementation of different types of IT solutions across its supply chain. The data generated at all stages of oil and gas production should be valid and consistent. For financial and governmental reporting purposes it is important to know the exact quantity of hydrocarbons produced from the field. Precise information on production also helps manage reservoirs (and the field in general) better and plan well work activities (and other activities such as equipment maintenance) effectively. A production allocation system serves the goal of having consistent and reliable production data. "Allocation" is a term widely used in the oil and gas business. In essence it is a process for the reconciliation of conflicting data, performed in a systematic, transparent and reproducible manner. The idea is to determine the imbalance (error) in a system and distribute ("allocate") it back to the conflicting sources. This paper describes the basics of the allocation process and does not aim to cover all the details of the different types of allocations within the hydrocarbon supply chain.

2. ALLOCATION PROCESS

If we look at the allocation process in its simplest form, it is a reconciliation of input flows of hydrocarbons versus output flows. The term "allocation network" is used to describe a directed graph with "nodes" as vertices and "streams" as edges. Nodes represent physical or virtual facilities (e.g., a well, a processing facility, a delivery point) and streams represent the flow of hydrocarbons and non-hydrocarbons (e.g., oil/condensate, gas, water, electricity) between the facilities. Essentially, nodes and streams are mapped to certain business processes and represent a generalized view of oil and gas processing, storage and delivery. Fig. 1 shows a part of the allocation network where several wells are connected to a delivery point; each well has a theoretical value that is inaccurate and therefore should be allocated. For this purpose the values measured at the delivery point can be used. The oil flows from the wells to a delivery point which is connected to a measurement device. At the delivery point, the reading of the measurement device can be assumed to be accurate (often referred to as



"fiscal" metering). A reconciliation factor is used to adjust (reconcile) the inaccurate theoretical values.
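The allocation network described above can be represented as a small directed graph; the node names and stream values below are hypothetical:

```python
# Allocation network: nodes (facilities) as vertices, streams as edges.
# Each stream carries a measured or theoretical value.
streams = [
    {"from": "well_A", "to": "delivery_point", "value": 70.0},
    {"from": "well_B", "to": "delivery_point", "value": 80.0},
]

def incoming(node):
    """Streams flowing into a node (its input flows)."""
    return [s for s in streams if s["to"] == node]

# Total theoretical input at the delivery point, to be reconciled
# against the fiscal meter reading at that node.
total_in = sum(s["value"] for s in incoming("delivery_point"))
```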

Fig. 1. Imbalance between input and output flows

The reconciliation factor is calculated using formula (1):

$$RecFactor = \frac{\sum_{strmOut \in SetOut} strmOut.Value}{\sum_{strmIn \in SetIn} strmIn.Value} \qquad (1)$$

where SetOut is a set of all outgoing streams, SetIn is the set of all incoming streams.

After the reconciliation factor is calculated, the input stream values are adjusted using formula (2):

strmIn.AdjustedValue = RecFactor · strmIn.Value,  strmIn ∈ S    (2)

where S is the set of all incoming streams.

The example in Fig. 1 has Sum(strmsIn.Value) = 150 and Sum(strmsOut.Value) = 120, which leads to RecFactor = 0.8. In a realistic allocation process we expect the reconciliation factors to be close to 1. If a factor deviates from 1 significantly (by more than 5%), this may indicate poor data quality. A realistic allocation network can be very complex, with a large set of facilities and streams. There is usually a business requirement to allocate by different phases: oil (condensate), gas, water, electricity, fuel and flare gas. The allocation system should also be able to handle both volumetric and mass allocation, and even allocate at the hydrocarbon component level (methane to decane, usually in Liquefied Natural Gas operations).
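The reconciliation step from formulas (1) and (2), together with the Fig. 1 example, can be sketched as follows. Only the total of 150 appears in the text, so the per-well split (50, 60, 40) is a made-up assumption for illustration:

```python
def rec_factor(in_values, out_values):
    """Formula (1): total measured output divided by total theoretical input."""
    return sum(out_values) / sum(in_values)

def allocate(in_values, out_values):
    """Formula (2): scale each theoretical input value by the reconciliation factor."""
    f = rec_factor(in_values, out_values)
    return [f * v for v in in_values]

# Fig. 1 example: theoretical well values summing to 150, fiscal meter reads 120
wells = [50.0, 60.0, 40.0]        # hypothetical per-well split of the 150 total
adjusted = allocate(wells, [120.0])
# rec_factor = 120 / 150 = 0.8; adjusted values sum to the measured 120
```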

3. PRODUCTION ALLOCATION SYSTEM


The allocation process should run automatically (or on demand) on a daily and monthly basis, using data from actual operations to provide the business with reliable data. Allocation methods may change over time, as may the allocation network configuration. This leads to the following set of requirements for a production allocation system:

 configure, store and make changes to the allocation network and complex allocation rules;
 integrate with low-level production systems to have up-to-date production data;
 support the daily and monthly allocation process;
 provide information for financial and reporting systems.

Fig. 2 shows the place of the Production Allocation System in the data acquisition and distribution process. The Process Control System (PCS) covers the real-time, second-by-second and minute-by-minute monitoring and control of wells and facilities. The data is then transferred into the Data Historian (DH), where it goes through initial validation. At this stage the values are aggregated into daily masses (volumes), total test volumes, etc. The Production Allocation System (PAS) then sources the data from the DH, performs daily and monthly calculations and provides daily and monthly reports.

Fig. 2. Positioning of the Production Allocation System with respect to other systems
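The Data Historian aggregation step described above can be sketched as rolling raw PCS readings up into daily totals per well before the PAS picks them up. The record layout and names here are hypothetical, not from any real historian product:

```python
from collections import defaultdict

def daily_totals(readings):
    """readings: iterable of (well_id, day, quantity) records from the PCS.
    Returns {(well_id, day): aggregated daily quantity}."""
    totals = defaultdict(int)
    for well_id, day, qty in readings:
        totals[(well_id, day)] += qty  # sum raw readings into one daily figure
    return dict(totals)

readings = [
    ("W-1", "2015-12-10", 30),
    ("W-1", "2015-12-10", 42),
    ("W-2", "2015-12-10", 55),
]
daily_totals(readings)
# {("W-1", "2015-12-10"): 72, ("W-2", "2015-12-10"): 55}
```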

As an example we use a production allocation system based on the Tieto Energy Components solution (a simplified system architecture is shown in Fig. 3). In essence it is a client-server solution with JBoss as the application server, Oracle Database as the database server and Internet Explorer as a thin client. The application server provides the interface to the database logic and triggers the daily data acquisition process and other tasks, such as data validation, allocation runs and report generation, which can be set up and scheduled automatically. The database holds a very extensive data model specific to the oil and gas business. The model is configured to handle "objects" (e.g., the configuration of wells, streams and facilities) and data related to the objects (e.g., daily data for wells and streams), as well as calculation rules and the results of the allocation process. The allocation network is built automatically based on the object configuration and can be adjusted through the front-end to handle any changes in the physical processes over the field lifetime.


Fig. 4 represents fairly realistic operations, including oil and gas production wells, water injection wells, manifolds, and oil and gas processing, storage and export facilities.

Fig. 3. Production Allocation System - Energy Components


Fig. 4. Allocation network in Energy Components

Calculation rules that represent the allocation logic are configured through the front-end using the built-in MathML configuration engine, which supports complex sets and formulas. This makes it possible to implement complex allocation rules for oil (condensate), gas and water production, steam and water injection, and electricity and fuel gas consumption, including operations under joint venture agreements, where the amounts of hydrocarbons produced and of electricity and fuel gas used should be allocated between the different parties.

Fig. 5. Calculation Rules configuration in Energy Components
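One kind of rule mentioned above, splitting an allocated total between joint venture parties by ownership share, can be illustrated as plain arithmetic. The real system configures such rules in its MathML engine; the function and party names below are assumptions made for the sketch:

```python
def split_by_share(total, shares):
    """shares: {party: fractional ownership}; fractions should sum to 1."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {party: total * share for party, share in shares.items()}

# e.g. 120 units of allocated oil split between three hypothetical partners
split_by_share(120.0, {"PartyA": 0.5, "PartyB": 0.25, "PartyC": 0.25})
# {"PartyA": 60.0, "PartyB": 30.0, "PartyC": 30.0}
```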

4. CONCLUSION

This paper describes the production allocation process in general and proposes the architecture and requirements for an information system that implements the allocation logic. In general, a production allocation system should satisfy the following requirements:
 configure, store and update the allocation network and complex allocation rules;
 provide integration with low-level production systems;
 support the daily and monthly allocation process run on a schedule;
 provide information for financial and reporting systems.



Applying Portal Technologies in the Development of a Discipline Curriculum

Ekaterina Biryukova, Anna Malakhova AMS, USATU, 12 K. Marx St., Ufa, Russia [email protected]

Keywords: Discipline curriculum, Educational and methodical support of the discipline, Portal technologies

Abstract: The article is devoted to the topical problem of forming a discipline curriculum. Particular attention is paid to the process of forming the section on educational and methodical support of the discipline. To support the lecturer's decision making within this process, the use of portal technologies is suggested.

1. INTRODUCTION

Higher education is currently undergoing major legal, scientific and organizational changes. The transition to a new system of higher education according to the federal state educational standards of the new "three-plus" generation requires essentially new procedures for educational and methodical record keeping at educational institutions. There is a need to develop new automated management forms and procedures for educational and methodical record keeping in universities implementing basic and additional educational programs. From the point of view of university faculty, these modifications entail substantial changes in the methods and approaches to educational activity, now based on concepts such as "competence" and "fund of evaluative resources". Modifications in the structural and content components of basic educational programs (hereinafter BEP HE) require the development of a complete set of documentation for each area of study offered by a university department, together with significantly increased quality requirements for the students' education. The most time-consuming task for a lecturer is developing the curriculum for the disciplines he or she teaches. Decision making in the development of a discipline curriculum (selecting the difficulty level, the format of educational material presentation, the methods and resources for current and milestone assessment of the acquired knowledge, skills and experience, rating the competence level, selecting educational materials, and so on) requires analyzing and processing a large amount of knowledge.
Implementing these processes within the electronic education environment of the department would provide the required level of accumulation, processing, presentation and integration of knowledge, as well as decision-making support both for the lecturers of the department (in particular, in forming discipline curriculums) and for the working groups (in forming the basic professional educational programs for the areas and specialties of study). The department portal can serve as the platform for organizing such an electronic education environment. This article considers the application of portal technologies in the development of a discipline curriculum, in particular in forming the section on educational and methodical support of the discipline.

2. DEVELOPMENT OF THE DISCIPLINE CURRICULUM


In accordance with Article 2 of Federal Law N 273-FL "On Education in the Russian Federation", an educational program is a set of basic characteristics of education (volume, content, expected results), organizational and pedagogical conditions and, in the cases provided by the law, forms of certification, presented in the form of a curriculum, an academic calendar, curriculums of the academic subjects, courses and disciplines (modules) and other components, as well as evaluation and methodical materials [3]. The basic professional educational program (hereinafter BPEP) of an educational institution is developed on the basis of the appropriate exemplary basic educational programs and should ensure that the results established by the corresponding federal state educational standards are achieved. On the basis of the exemplary programs, educational institutions develop their own curriculums for academic disciplines. The discipline curriculum concretizes the profile component of the educational material with regard to the specifics of the particular specialty. The discipline curriculum is prepared by the lecturer assigned to the discipline. It includes the following sections:
1. Purposes and objectives of studying the discipline;
2. Place of the discipline in the structure of the BEP HE;
3. Requirements for the results of mastering the discipline content;
4. Content and structure of the discipline;
5. Educational technologies;
6. Evaluation tools for monitoring progress and intermediate certification;
7. Educational and methodical support of the discipline;
8. Modern information and communication technology software;
9. Discipline logistics.
After these sections, the "Sheet of discipline curriculum submission" and "Additions and changes to the discipline curriculum" are attached to the discipline curriculum for each academic year.
An appendix to the discipline curriculum describes the fund of evaluative resources of the discipline, which details the criteria for evaluating the level of mastering the discipline and the lists of questions for tests and exams, as well as sample assessment materials. The section "Educational and methodical support of the discipline" consists of the following subsections:
– primary literature;
– additional literature;
– Internet resources.
To study the existing process of forming this section, a model in BPMN notation was designed using the ARIS software package (Figure 1) [2]. To form the section on educational and methodical support of the discipline, the lecturer needs to compile a list of books that suit the content of the discipline and the area of study. In USATU this task is supported by the module "E-Book providing cabinet" on the USATU library website. Lecturers do not need to register in the system: it is possible to log in without entering a password. After logging in, the lecturer needs to select the database and an appropriate type of report. In this case, the report by discipline is the most convenient for the lecturer.


Fig. 1. BPMN model of forming the section "Educational and methodical support of the discipline"


After this, a list of all courses taught in USATU opens. The lecturer enters the discipline name in the filter field and clicks "Apply", then selects the appropriate discipline in the list and clicks "Generate report". The book-providing details for the chosen discipline are then displayed. The list contains the book-providing details for all directions, courses and specialties of the discipline, the lists of basic and additional literature, and the methodical publications for the chosen discipline. Rows are highlighted in different colors depending on the book-providing factor, from deep red to deep green. For a book to be entered into the section on educational and methodical support of the discipline curriculum, its book-providing factor should be no lower than 0.25 for a book from the list of basic literature and no lower than 0.01 for a book from the list of additional literature. After the section is formed, the lecturer hands the discipline curriculum over to the responsible person. When all discipline curriculums assigned to the department have been received, the responsible person transfers them to the library's department of book providing, where all discipline curriculums are carefully checked. The department of book providing is interested only in the section on educational and methodical support and in the additions and changes to the discipline curriculum, if they contain any literature sources. The department first checks that none of the books in the list of basic literature are obsolete. The obsolescence threshold for basic literature is set per discipline cycle:
– general humanitarian and socio-economic: the last 5 years;
– science and mathematics: the last 10 years;
– general professional: the last 10 years;
– special: the last 5 years.
If all the sources in the list satisfy this criterion, their presence in the library's electronic catalog is checked. If all sources exist in the electronic catalog, their book-providing factors are checked. If the book-providing factors of all sources are in the normal range, the discipline curriculum is approved, signed and cataloged. Otherwise, if one of the criteria is not satisfied, the discipline curriculum is returned to the department for revision. If the department of book providing has flagged an obsolete literature source, the lecturer needs to replace it with a new edition, if one exists. If a new edition does not exist, the lecturer needs to select another literature source for the discipline. After all the notes are addressed, the discipline curriculums are approved, signed and cataloged. If the absence of a book from the electronic catalog, or a low book-providing factor, is noted, the lecturer likewise needs to choose another literature source for the discipline. If no other literature sources for the discipline exist, the lecturer submits a request for educational literature to the department of book providing, filling in the details of the ordered book and the number of copies. From the department of book providing the request is transferred to the department of acquisition, signed and then transferred to the purchasing department, which works with publishers and carries out the purchasing procedures. After the ordered books are received, the discipline curriculums are approved, signed and cataloged.
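The checks described above can be sketched as a single predicate, using the stated thresholds (book-providing factor of at least 0.25 for basic and 0.01 for additional literature, and the per-cycle obsolescence limits). Function names, cycle keys and the record layout are assumptions made for the sketch:

```python
# Obsolescence limits per discipline cycle, in years, as listed in the text
OBSOLESCENCE_YEARS = {
    "humanitarian": 5,   # general humanitarian and socio-economic
    "science": 10,       # science and mathematics
    "professional": 10,  # general professional
    "special": 5,
}

def book_ok(factor, kind, cycle, published_year, current_year=2015):
    """True if a book passes both the book-providing and obsolescence checks."""
    min_factor = 0.25 if kind == "basic" else 0.01
    fresh = current_year - published_year <= OBSOLESCENCE_YEARS[cycle]
    return factor >= min_factor and fresh

book_ok(0.3, "basic", "science", 2008)    # fresh enough, well provided -> True
book_ok(0.3, "basic", "special", 2008)    # older than 5 years -> False
book_ok(0.05, "basic", "special", 2013)   # factor below 0.25 -> False
```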

3. APPLYING PORTAL TECHNOLOGIES TO AUTOMATE THE PROCESS

The analysis of the process of forming the section on educational and methodical support of the discipline curriculum shows that the process is complex and time-consuming. To form the section competently, the lecturer needs knowledge not only of the literature relevant to the taught discipline, but also of the criteria by which a literature source meets the requirements, such as the book-providing factor, obsolescence, etc.


To support the lecturer's decision making in forming the section on educational and methodical support of the discipline curriculum, it is proposed to implement the process within the department's electronic education environment. Creating an electronic education environment that meets the new "three-plus" educational standards is a mandatory element of preparing students in a particular field of study in order to achieve the declared level of higher education quality. The department portal is proposed as the means of organizing such an electronic education environment (Figure 2).

Fig. 2. The scheme of the integrated automated process

Currently, the department portal, built on the basis of automated control systems, supports the information exchange between employees, lecturers and students of the department as part of the educational process [1]. Automating the process of forming the section on educational and methodical support of the discipline curriculum with the help of the department portal technologies will significantly simplify the selection of primary and additional literature through integration with an automated library system. During the formation of the discipline curriculum, information about the available literature sources will be produced automatically, including the checks of the book-providing factor, obsolescence, etc.

4. CONCLUSION

As part of the research, the topical task of forming a discipline curriculum was studied in detail. Special attention was paid to the process of forming the section on educational and methodical support of the discipline. A model of the process in BPMN notation was developed. To support the lecturer's decision making within this process, the use of department portal technologies was suggested.


References
1. Kulikov G.G., Shilina M.A., Startsev G.V., Antonov D.V. Concept for educational processes common information space development and structuring on the basis of CALS // Proceedings of the 14th International Workshop on Computer Science and Information Technologies CSIT'2012, Ufa – Hamburg – Norwegian Fjords, 2012. V. 1, pp. 116-120.
2. Baronov V.V., Kalyanov G.N., Popov Y.I., Ribnikov A.I. Automation of business management. M.: INFRA-M, 2000. 239 p.
3. Federal Law N 273-FL "On Education in the Russian Federation", December 29, 2012.


Mobile Remote Monitoring Platform of the Cardiovascular System

I.S. Runov, J.O. Urazbahtina USATU, K. Marx St.,12, Ufa, Russian Federation

The fight against diseases of the cardiovascular system has become a main objective of healthcare and medical science, alongside diseases such as cancer, AIDS and psychosomatic disorders. Mortality from cardiovascular diseases in the population of Russia is 57%, and almost 20% of those who die are of working age. In 90% of cases the cause of death is ischemic heart disease or stroke. The deep interest in this issue is determined by the prevalence of cardiovascular diseases, their tendency to increase among young people, in particular children of different age groups, and their enormous role in morbidity and mortality (first or second place among all diseases), which gives the problem not only medical but also social significance. It is therefore very important to develop a scientific basis and effective methods of treatment, rehabilitation and prevention, and increasingly early diagnosis of diseases of the cardiovascular system, even with minimal symptoms (complaints or feeling unwell). Many heart conditions become apparent only under load, such as during exercise, stress, and even while eating or sleeping. Therefore, monitoring cardiac activity throughout the day reveals deviations in its operation much better than conventional electrocardiography. For this purpose Holter monitors are used in medicine. Holter monitoring is one of the most popular methods for diagnosing cardiac arrhythmias. It is indicated for patients with palpitations and interruptions in heart function, to detect arrhythmias and cardiac conduction disorders, for unexplained syncope, and partly for registering "silent" (painless) myocardial ischemia and assessing some parameters of a pacemaker. However, this method has its drawbacks: the presence of artifacts and noise interference, and the inability of the devices to operate in real time.
Artifacts are typically created by the autonomous units: the adhesiveness and quality of the electrodes, the wires connecting the electrodes and the recorder, the batteries and the recorders themselves [1]. The authors propose a set of wearable devices, forming part of a system that records and processes human heart rhythms. The wearable device is an electrode pad with clamps to secure a Bluetooth module to it. The electrode base is polypropylene foam containing a wet gel. Each electrode is provided with a single tab to prepare the skin for application and improve the signal quality. The electrode has strong adhesive power. It can be used for Holter monitoring for up to 24 hours; at the expiration of this period the electrode surface is cleaned, a new layer of adhesive gel is applied, and the electrode can be used again. The wet gel reduces the electrical resistance of the skin and ensures a high quality of the ECG signal. A peel-off tab on the transparent protective film facilitates skin preparation. Reliable adhesive properties guarantee good fixation of the electrode during the procedure. The label provides mechanical stability to minimize additional artifacts. It is expedient to build the system around a mobile device for daily monitoring using an STM32W108 microcontroller.


Fig. 1. Block diagram of the device

Its use allows a 16% reduction in the number of unusable results [1], since the circuit uses solid-state elements. The built-in radio frequency module makes it possible to receive and process results in real time. Fig. 1 shows the built-in multiplexer, ADC and amplifier, which significantly reduce the number of external components and make it possible to reduce the size of the board relative to the electrode on which it is to be mounted. The main competitors of the STM32W108 are the CC430F61xx/CC430F513x from Texas Instruments and the ATmega128RFA1. All of them are "system on a chip" devices. The common features of these microcontrollers are support for wireless networks and low power consumption.

Table 1. Comparative characteristics of microcontrollers

Parameter       | STM32W108            | CC430F61xx/CC430F513x | ATmega128RFA1
Type of system  | system-on-chip       | system-on-chip        | system-on-chip
Architecture    | 32-bit ARM Cortex-M3 | 16-bit RISC           | 8-bit RISC
Clock frequency | 6, 12 or 24 MHz      | 27 MHz                | 2.4 GHz
RAM             | 8 KB                 | 4 KB                  | 16 KB
ADC             | 16-channel, 12-bit   | 8-channel, 12-bit     | 10-channel
Package         | 7x7 mm, 48-pin QFN   | 9x9 mm, 64-pin VQFN   | 9x9 mm, 64-pin QFN

From the comparative analysis of the microcontrollers it can be concluded that the STM32W108 is ahead of its competitors: it has a smaller package and a 16-channel, 12-bit analog-to-digital converter. These parameters justify using the STM32W108 in a system for daily monitoring of patients with cardiovascular diseases. The communication module, in turn, will consist of the following layers: 1 - pad; 2 - STM32W108 controller; 3 - insulation layer; 4 - battery; 5 - antenna. It is also planned to enclose the entire structure in a porous silicone membrane to achieve the necessary level of safety during accidental drops and deformation. The porous structure is needed to dissipate the excess heat generated during operation of the controller. The electrodes are synchronized with the mobile device by means of the Bluetooth 4.0 protocol. At a signal from the mobile device the system starts to transmit the data from its internal memory, from which a conventional electrocardiogram is subsequently built. The signal from the electrode fixed on the patient's body is fed to a preamplifier, amplified and fed to the ADC within the Bluetooth module. The central component of the developed device is a chip produced by STMicroelectronics, a Bluetooth 4.0 Single Mode Module, containing within


itself a built-in preamplifier stage and ADC. From the module, the data is transmitted over standard Bluetooth protocols to the receiver. A smartphone, PDA, tablet PC or many other devices can act as the receiver.

Fig. 2. Block diagram of the system: 1 - electrode; 2 - preamplifier; 3 - ADC; 4 - data processing unit; 5 - Bluetooth transmitter; 6 - receiver; 7 - Bluetooth SoC.

To date, a wealth of experience has been accumulated in the diagnosis of cardiovascular disease using various techniques. At the same time, experimental studies are limited by the speed of analytical systems, and mathematical modeling becomes the most efficient vehicle for experiments. A model of the heart can predict the development of occupational pathology, evaluate the risk of complications and help develop the necessary treatment tactics. In medical practice the main models are geometric and physical. One of the simplest mathematical models of the heart is a kinetic model, whose main simulation parameter is the heart rate [2]. In cardiology, the main element of computer models of the heart is the three-dimensional model. The advantage of this model is that it reflects fundamental changes in the functioning of the heart in a dynamic mode. A two-chamber heart model, based on the quasi-periodic nature of the heartbeat, has been developed, as well as a four-chamber model, which can be represented as a union of two two-chamber models. The point (lumped-parameter) model takes into account the two-chamber cardiac hemodynamics of the cardiovascular system (Fig. 3) [2].

Fig. 3. Three-dimensional model of the heart

A mathematical model of the cardiovascular system has also been developed in LabVIEW [3].


Fig. 4. Chamber structure of the circulation model: PS - right heart (ventricle); LA1 - proximal pulmonary arterial chamber; LA2 - distal pulmonary arterial chamber; B - veins; PM - left heart (ventricle); A1 - central arterial chamber; A2 - arteries of the upper body; A3 - arteries of the lower body; LP - pulmonary veins

Thus, the use of mathematical modeling is necessary because many problems caused by deviations in heart function are often impossible to investigate directly, or their investigation requires time-consuming, high-speed and precise technical means. Therefore, modern medicine has many different mathematical models that can simulate the functioning of the heart and the cardiac cycle under different types of pathology. The software part of the development contains a special mobile application needed to create a radio channel between the electrodes and the mobile device, to synchronize them and to start their work on the same measurement. A mobile application is a program that runs on a mobile device such as a smartphone, tablet, media player, etc. The application processes the signals received from the sensors and outputs them on the device display in the form of a diagram; the software will also be able to assess the constructed ECG and, on this basis, provide the owner with a primary analysis, give advice on lifestyle, diet, sleep and exercise, inform him or her well in advance about necessary medication, and send notifications through the operating system interface of the device. The number of electrodes varies depending on the degree of cardiac complications, given that different diseases of the cardiovascular system require different degrees of monitoring. What happens next depends on the scenario in which the equipment is used.
The data will be stored in the internal memory for later transmission over GPRS, EDGE, HSDPA or other 3G or 4G communication channels to storage, analysis and other necessary systems: cloud storage and the server of the central regional cardiology center for the information of specialists. Cloud storage is a model of online storage in which data is stored on multiple servers distributed across the network and provided for use by customers, mainly third parties. In contrast to the model of storing data on one's own dedicated server, purchased or leased specifically for such purposes, the number and internal structure of the servers are, in general, not visible to the client. The data is stored and processed in a so-called cloud which, from the point of view of the client, represents one large virtual server. Physically, these servers can be located far from each other geographically, even on different continents. The software application is designed so that in case of critical heart behavior the patient is immediately sent a number of alert notifications; if the patient ignores them, the data arrives at the server of the cardiology clinic, from where a number of phone calls are subsequently made and, if necessary, an ambulance is dispatched. The patient's location in this situation will be obtained from the coordinates of the GPS satellite positioning system, which nowadays is a feature of almost all smartphones, tablets, etc. The application interacts with the GPS sensor by means of the internal libraries of the device operating system, such as Android.


Android includes an API library which makes it easier to develop programs that use the hardware resources of devices. This means that it is not necessary to create a special version of the program for each device: you can create an application on the Android platform that will work on any compatible device. The Android development environment includes APIs for working with navigation devices (in particular, GPS navigation), the camera, the sound system, network connections, Wi-Fi, Bluetooth, the accelerometer, the touch screen and power management. Android supports working with maps, which means you can create navigation applications that effectively use the advantages of mobile devices running Android. The platform allows a program to include the Google Maps service interface, provides full access to maps, and lets you operate the map and, if necessary, add annotations using the rich Android graphics library. The navigation services of the platform work with GPS technology and with positioning based on the base stations of GSM networks from Google, through which the current location of the device is established. These services allow you to ignore the specifics of a particular technology: you specify a minimum set of requirements (for example, the accuracy or cost) and the platform chooses the right technology. In addition, the platform ensures that your navigation software will work regardless of which technologies a particular device supports. To connect maps with the navigation service, the Android API includes forward and reverse geocoding, which allows you to find the map coordinates of a given address or to determine the address for a specific position on the map. The purpose of geolocation services is to determine the physical location of the device. The system service LocationManager is responsible for access to geolocation services.
To get started with it, obtain an instance of the LOCATION_SERVICE type using the getSystemService method, as shown in the following code:

String serviceString = Context.LOCATION_SERVICE;
LocationManager locationManager;
locationManager = (LocationManager) getSystemService(serviceString);

Before using the LocationManager, you need to add one or more uses-permission tags, describing the required user permissions, to the application manifest in order to access the hardware behind the geolocation services. Permissions exist for both high and low accuracy; an application that holds the high-accuracy permission automatically gains access to low-accuracy information as well. GPS requires the high-accuracy permission, while network sources (cellular networks, Wi-Fi) can work with the low-accuracy one. You can obtain the last location fixed by a Location Provider using the getLastKnownLocation method, passing it the name of the data source as a parameter. In the following example we retrieve the latest fix recorded by the GPS source:

String provider = LocationManager.GPS_PROVIDER;
Location location = locationManager.getLastKnownLocation(provider);

In this way the patient's location is easily obtained. The use of this device will allow continuous monitoring of the patient's cardiac activity without the discomfort that arises with wired technologies. The main advantage of the platform is the ability to connect the electrodes to any mobile device and use it directly. At the same time it becomes possible to consult patients living in remote areas, which is especially important for the Republic of Bashkortostan. The development of such devices is one of the main directions of the government's public health program.
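The permission tags mentioned above are not reproduced in the original text; in the standard Android manifest syntax they are assumed to look like the following sketch (these are the usual location permissions, not code taken from the authors' application):

```xml
<!-- High accuracy (GPS); also grants access to coarse results -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
<!-- Low accuracy (cellular network, Wi-Fi) -->
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
```

Both tags are placed as children of the manifest element of AndroidManifest.xml.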

References
1. Makarov L. M. Holter Monitoring. 2nd ed. Moscow: ID "Medpraktika-M", 2003.
2. Koshelev V. B. et al. Mathematical Models of Quasi-One-Dimensional Hemodynamics. Moscow: MAX Press, 2010.
3. Makoveev S. N., Frolov S. V. Development of a Mathematical Model of the Cardiovascular System in the LabVIEW Environment. Moscow, 2008.


1st International DSPTech Conference on Technologies of Digital Signal Processing and Storing

Planning the Final Budget of a Company

Enikeev Rustem R., Suvorova Veronica A., Solyeva Anastasiya V., Vazigatov Dinar I., Sitdikova Elina O. Department of Automated Management and Control Systems, USATU, K. Marx St., 12, Ufa, Russian Federation, [email protected]

Keywords: Planning, budget.

Abstract: The user fills in all available fields of the budget document and posts it, and the data are written to the information register. As a result, many documents are created for the available facilities and the relevant cost items. All data from the posted documents then flow into the accumulation register, on the basis of which a variety of reports are built, such as "general budget for the period" and "budget report by service". A report on the planned activities across the entire store chain for a given period is also generated.

In the developing conditions of market relations, effective management of the production activities of an enterprise increasingly depends on the level of information support of its individual departments and services. Currently, few Russian organizations and enterprises have management accounting organized so that the information it contains can be used for operational management and financial analysis. The way out of this situation in a market economy is to organize an effective system of financial budget planning at the enterprise. When they start planning, managers usually begin to understand the goals and objectives of the enterprise more clearly and to adjust budgets effectively, and planning accuracy increases significantly. The enterprise budget reflects the results of planning and monitoring: planned, expected and actual data, and the deviation of actual performance from the plan. On this basis a strategy for the effective development of the enterprise under competition and instability is built, and the enterprise as a whole is analyzed and controlled. The budget is therefore an important tool in developing management measures to achieve the objectives of the enterprise. Consider the actual task of automating the budget planning process using the enterprise "Bashbakaleya" as an example. Today the company is constantly evolving and growing, and the current budgeting technique in Microsoft Excel no longer meets the emerging needs of the financial department and the Department of Commerce for speed of information and the required accuracy, and does not make it possible to determine what funds the services and departments need for proper operation. To solve this problem, terms of reference were drawn up, software was selected, and an application was developed based on the proposed document flow diagram shown in Fig. 1.



Fig. 1. Flow diagram of the proposed document ("Budget formation"). Recoverable labels, translated from Russian: memo; heads of departments; Financial Department; budget working group; approval of the budgets of services and departments; control of payment requests; register of requests exceeding the budget; Financial Department specialist; financial director; sales network manager; general director; Department of Trade; shop maintenance service specialist.

In connection with the above, it was decided to create a new document, "Enter budget", in 1C:Enterprise 8 as part of the overall automation of the enterprise. Code was written and the necessary document forms were developed. All heads of services and departments of the enterprise are invited to fill in, on an ongoing basis, a document containing the columns "budget plan" and "pessimistic plan", as well as a currency column: the budget is entered both in rubles and in dollars. As the planned payments are executed, the "actual amount" column is filled in automatically. On the basis of this document, reports are compiled that list the required cost items with their target, pessimistic and actual amounts, taking exchange rate differences into account. A consolidated report across the enterprise's network of shops is also formed to compare existing costs. How the program works: initially, a department or service fills in its budget. To do this, the head of the department or service creates a source document for the service (Figure 2 and Figure 3).



Fig. 2. The form for filling in the "Enter budget" document.

Fig. 3. The form for filling in the "Startup budget" document.

The user then fills in all the available fields and posts the document, and all the data are written to the information register. As a result, many documents are created for the available facilities and the relevant cost items. All data from the posted documents then flow into the accumulation register. On the basis of this register a variety of reports are built, such as "general budget for the period" and "budget report by service" (Figure 4). A report on the planned activities across the entire store chain for a given period is also generated (Figure 5).
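The posting scheme described above (documents written to a register, with reports aggregated over it) can be illustrated with a plain-Java sketch. The real system is implemented in 1C:Enterprise 8; the class and field names below (BudgetRegister, post, reportByService) are hypothetical stand-ins for the accumulation register and reports described in the text.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the accumulation register: posted budget
// documents accumulate planned amounts per (service, cost item),
// and reports are built by aggregating over the register.
public class BudgetRegister {
    // key "service|costItem" -> accumulated planned amount (rubles)
    private final Map<String, Double> register = new HashMap<>();

    // Posting a document writes its rows into the register.
    public void post(String service, String costItem, double plannedRub) {
        register.merge(service + "|" + costItem, plannedRub, Double::sum);
    }

    // "Budget report by service": total planned amount for one service.
    public double reportByService(String service) {
        return register.entrySet().stream()
                .filter(e -> e.getKey().startsWith(service + "|"))
                .mapToDouble(Map.Entry::getValue)
                .sum();
    }

    // "General budget for the period": total over the whole register.
    public double generalBudget() {
        return register.values().stream()
                .mapToDouble(Double::doubleValue)
                .sum();
    }

    public static void main(String[] args) {
        BudgetRegister r = new BudgetRegister();
        r.post("Transport", "Fuel", 120_000);
        r.post("Transport", "Repairs", 80_000);
        r.post("IT", "Licenses", 50_000);
        System.out.println(r.reportByService("Transport")); // 200000.0
        System.out.println(r.generalBudget());              // 250000.0
    }
}
```

In 1C:Enterprise the same aggregation is performed declaratively by the platform's accumulation register and report builder rather than by hand-written code.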


Innovative Technologies and Methods of Economical and Social Data Processing

Fig. 4. The form for selecting the report type.



Modern Aspects of Information War

Nikolay Andreyev, Associate Professor, Department of Computer Engineering and Information Security, Ufa State Aviation Technical University, nican56@mail.ru

Keywords: information security, information society, information confrontation, information war, information technologies, information weapons.

Abstract: The information security of Russia, that is, the state of protection of the state, society and, as a result, the individual from internal and external threats in the information sphere, is a major problem requiring constant attention today. The militarization of the global information space and the intensifying information arms race pose a serious threat to international peace, security, and global and regional stability.

Informatization of all spheres of public administration is characteristic of scientific and technological progress. The development and broad application of information and telecommunication technologies, a global tendency of world development in recent decades, not only offers ample opportunities to increase the efficiency of people's work with information, but also creates an array of problems, of which information security has become the most fundamental. More and more often we see young people constantly submerged in Internet-connected gadgets and in the information they deliver. Not without reason have "zombie" road signs appeared, depicting people so engrossed in their gadgets that they see nothing around them. The 2015 draft of the Information Security Doctrine of Russia gives the following definition: "Information security of the Russian Federation is a state of protection of the individual, society and the state from internal and external threats in the information sphere which makes it possible to ensure the constitutional rights, freedoms, decent quality and standard of living of citizens, the sovereignty, territorial integrity and sustainable development of the Russian Federation, and the defense and security of the state" [1]. Similar wording appears in many regulations of recent years. Accordingly, the security of information can be ensured only by protecting it, entering into confrontation with those who want to access it without the right to do so or, worse, to alter or destroy it. According to I. N. Panarin [2], information confrontation should be distinguished in the wide sense (in all spheres) and in the narrow sense (in a particular sphere, for example the information sphere).
Information confrontation is a form of struggle in which the parties use special (political, economic, diplomatic, military and other) methods, ways and means to influence the information space of the opposing party and to protect their own in pursuit of their goals. Most likely, information confrontation arose with the emergence of information processes and the use of information technologies [3]. Its main objectives became control of the information sphere, influence on the opponent's information to undermine its fighting capacity with corresponding protection against outside influence, and increasing the overall effectiveness of the country's armed forces through the widespread introduction of military information functions.

The subject of information confrontation has become so important in modern Russia because an information society [4] is actively forming here. In such a society, information technologies become the basis of the development of individuals, organizations and the state as a whole, and the ability to manage such a society, first of all the state, in a way that ensures its security, the security of Russia, is required [5]. The theory of information confrontation as such originated in the analysis of military operations. Information confrontation in wars is a strategic form of struggle in which special ways and means are used to influence the opponent's information environment and protect one's own in pursuit of the strategic objectives of the war [6]. In modern war it is already impossible to achieve victory effectively without information weapons: devices and means intended to inflict maximum damage in the course of information confrontation. Information weapons develop as rapidly as information technologies do. Unlike precision strike weapons, information weapons are system-destroying, that is, they put whole military, economic or social systems out of action [7]. What is characteristic of information war is that its participants try to gain advantage over the opponent in the information sphere rather than defeat it physically or destroy it completely. But the effects of information wars can be as global and long-lasting as the results of real wars: defeat in the cold information war cost a state such as the USSR its existence [8]. There are two types of information attack: direct and indirect. A direct information attack is used to defeat a specific target, first of all one crucial for the state, and is usually carried out before traditional military operations begin.
Examples of such information attacks on information infrastructure and crucial objects include Operation Desert Storm in Iraq in 1991 and the events in Yugoslavia, Iran, Tskhinvali, Ukraine and Syria. Indirect information attacks happen constantly: they consist of misinformation spread mainly through the mass media, social networks, and so on. Examples of indirect information attacks include the coverage of the Olympic Games in Sochi, the Panama offshore papers, etc. For this reason, special attention has been paid to questions of information confrontation in recent years, including by Russian scientists and specialists (again catching up with the West): I. S. Ashmanov, N. L. Volkovsky, G. V. Grachev, A. Greshnevikov, I. K. Melnik, V. A. Lisichkin, I. N. Panarin, G. Pocheptsov, S. P. Rastorguyev, V. Slipchenko, L. A. Shelepin, A. I. Fursov and many others. The components of information confrontation, although they are constantly refined and new ones appear with the development of information technologies, are the following:
– psychological operations directed at using information to influence the reasoning of the opponent's law enforcement agencies and population, and first of all the country's leadership;
– information attacks aimed at distorting or juggling information without visibly changing its substance, giving it the required coloring;
– misinformation, i.e. false information specially supplied to the opponent, for example about specific events, forces, intentions or armaments (among the most ancient components, but one developing intensively today with the active use of mass media, network resources and the Internet);
– radio-electronic confrontation, which at first affected only radio communication and later the various systems based on radio electronics, together with the first stage of radio-electronic reconnaissance (first applied during the war of 1914: suppression of a radio signal);
– physical destruction, which can be part of information war if it aims at communication links and then at information and communication systems (actively used during the war of 1941-1945: destruction of wire communications and of the opponent's headquarters together with communication centers; effectively used by the USA in Yugoslavia);
– information-network influence on the opponent (army, population, infrastructure, industry, elite), allowing suppression without traditional weapons solely on the basis of modern information technologies operating in the opponent's information space (U.S. Air Force colonel John Warden's "effects-based operations", the strategy of network-centric wars);
– at the same time, information security measures protecting one's own information and aimed at ensuring its integrity, availability and confidentiality.
The Information Security Doctrine emphasizes that "the national security of the Russian Federation essentially depends on information security support". The 2015 draft of the Doctrine notes that the state of Russia's information security is characterized by foreign countries building up capabilities for information-technical influence, including impact on the critical information infrastructure of the Russian Federation, in order to achieve military and political goals [9]. Technical intelligence-gathering against Russian government bodies, scientific organizations and defense industry enterprises continues to intensify.
For example, as a result of the Russophobic information war, Ukrainians have been turned against Russians, Banderite and neo-Nazi movements have been revived, and a Ukrainian state has been created that over the last 20 years has been brought up in hatred of Russia. Zbigniew Brzezinski, an ideologist of the collapse of the USSR, adviser to U.S. presidents and author of the book The Grand Chessboard, notes: "The new world order will be built against Russia, on the ruins of Russia and at the expense of Russia." In recent years Russian experts have introduced, as a separate monitoring parameter, a so-called "index of aggression": the number of negative publications per neutral one. In normal times this index may amount to two or three negative publications per neutral one, but at the peak of the information war it reached 70 negative publications on some days. Germany, the USA and France were the undisputed leaders in aggression through practically all of 2015, with Germany ahead of the others [10]. Scientists and politicians note that the militarization of the global information space and the intensifying information arms race pose a serious threat to international peace, security, and global and regional stability. The use by the special services and controlled public organizations of certain states of information and communication technologies as an instrument of information-psychological influence, in order to undermine the sovereignty and violate the territorial integrity of other states and to destabilize the internal political and social situation in different regions of the world, is becoming more active. Religious, ethnic, human rights and other organizations and structures are drawn into this activity. Because of this, the development of international norms regulating such questions is necessary. Russia has constantly advocated this at various international venues, offering various concepts since 1998; the most recent example is the "Convention on Ensuring International Information Security".
Article 5 of the Convention sets out the basic principles of ensuring international information security, and its chapters "Main measures for the prevention and resolution of military conflicts in the information space", "Main measures to counteract the use of the information space for terrorist purposes" and "International cooperation in the field of international information security" address the most important questions of the modern information space. Unfortunately, however, the USA constantly blocks initiatives in this area while developing and expanding its own information troops. In this regard Russia, catching up with the West and aiming to ensure its own security, has had to develop similar divisions within its law enforcement agencies. The information wars of the future can already be glimpsed in the present, which is connected with the saturation of life with electronic gadgets: smartphones, tablets, electronic cards, and so on. They undoubtedly have a mass of advantages, but also enough shortcomings, the first of which is our dependence on them and the possibility of surveillance of the owner. The spread of free Wi-Fi gives huge advantages in obtaining information, but it also makes continuous informational influence on a person possible. The wars of recent years have demonstrated the significant superiority of the Internet over other mass media. This influenced China's creation of the "Golden Shield" as a network protection system with by-name registration upon connection to the network. Not for nothing do Russian specialists and hackers already speak of the need for transparency of connection to the Internet. Relying on the Chinese experience, the chairman of the Investigative Committee of the Russian Federation, Alexander Bastrykin, notes in a special interview for Vlast, "On ways and methods of fighting extremism in Russia", that it is advisable to decide on the limits of censorship of the Internet in Russia, since this problem now causes heated debates in the light of the activity of defenders of the right to freely receive and distribute information.
Thus, in the modern world, questions of information confrontation, and therefore of information security, are becoming critically important both for the individual and for the state; in practice, however, many mid-level heads of law enforcement agencies in Russia, especially in the regions, do not understand the importance of information security.

References
1. http://www.narodsobor.ru/events/analytics/29821-igor-ashmanov-o-texnologiyax-sovremennyx-setevyx-vojn#.VvNA-uKLTIV
2. Panarin I. N. Mass Media, Propaganda and Information Wars. Moscow: Pokolenie, 2012.
3. "Information technologies: processes, methods of search, collection, storage, processing, provision and distribution of information, and the ways of implementing such processes and methods". Federal Law of the Russian Federation No. 149-FZ, 2006.
4. Castells M. The Information Age: Economy, Society and Culture. Translated from English, scientific ed. O. I. Shkaratan. Moscow: GU HSE, 2000. ("A society in which information has become the main source of labor productivity and power.")
5. Decree of the President of Russia No. 683 of 31.12.2015 "On the National Security Strategy of the Russian Federation".
6. Panarin I. N. Mass Media, Propaganda and Information Wars. Moscow: Pokolenie, 2012.
7. Slipchenko V. Information Confrontation in Contactless Wars. WWW.I-U.RU.
8. Igor Ashmanov on the technologies of modern network wars, 23.02.2015.
9. See, for example: "Main directions of state policy in the field of security of automated control systems for production and technological processes of critical infrastructure facilities of the Russian Federation" (approved by President of the Russian Federation D. Medvedev, 03.02.2012, No. 803): "critical information infrastructure of the Russian Federation: the set of automated control systems of critically important objects and the information and telecommunication networks providing their interaction, intended for solving problems of public administration, ensuring defense capability, security and law and order, the disruption (or termination) of whose functioning can cause grave consequences"; Order of FSTEC of Russia No. 31 of 14.03.2014 "On the approval of requirements for ensuring information protection in automated control systems of production and technological processes at critically important and potentially dangerous objects, and at objects posing an increased hazard to human life and health and to the environment".
10.



On Social Programs Implemented by SOC «Bashneft»: Means of Advertising and Public Relations

Yagudina Aigul V., Oil and Gas Business Institute, USPTU, 1 Kosmonavtov St., Ufa, Russian Federation

Keywords: Stock Oil Company (SOC) «Bashneft», volunteer movement, charity project, «Kind Heart», media, advertising.

Abstract: The first problem is the low public awareness of the volunteering activity in the Bashneft Company; the second is bringing the image of the Bashneft Company closer to the general public.

PJSOC «Bashneft» is one of the oldest enterprises of the Russian oil industry, producing "black gold" since 1932. «Bashneft» is a dynamically developing Russian vertically integrated oil company demonstrating successful performance in oil production, processing and marketing of primary products [1]. In addition to its core business, however, the company is actively engaged in social policy, and one of its areas is charity and volunteering [2; 3]. Since 2009 the company has allocated over 7.5 billion rubles in charitable assistance. For over six years «Bashneft» has been the patron of the Ufa urban social rehabilitation center for minors, which is home to about 50 children. Over the years, the volunteer projects of the company's employees have come to include more than two dozen children's homes, shelters, social rehabilitation centers for juveniles and rehabilitation branches for children with disabilities in several regions of the country. There are also projects to support veterans and to protect the environment. In addition, «Bashneft» finances the construction and repair of a number of social and infrastructure facilities in the places where it does business [2]. The desire to help those in need and those who suffer naturally characterizes sensitive and caring people, and «Bashneft» has always had such employees. However, the decision to create a volunteer movement within the company did not come immediately. The understanding that if one person is willing and able to give joy to another, he must do it, was a strong motivation. Volunteering in the company began actively in 2011, when all staff were invited to join a project to restore memorial places dedicated to veterans of the Great Patriotic War in the Republic of Bashkortostan. More than 100 people then came out to the volunteer workday held as part of the Day of Remembrance and Mourning.
It thus became clear that the company has many people willing to come together for a common good cause. In August 2012, on the initiative of employees of the Corporate Communications Department of PJSOC «Bashneft», the volunteer movement «Kind Heart» arose; it is still coordinated by the Department [4]. «Kind Heart» is a way to express one's humanity through charity work. Any employee can become a volunteer, regardless of position, seniority or experience. Each year volunteers organize themed activities for children living in orphanages and social shelters or undergoing treatment in rehabilitation centers of the Republic of Bashkortostan, as well as for «Bashneft» veterans in need of human attention and participation. The main principle of «Kind Heart» is help from person to person, heart to heart, without intermediaries. Any employee of the company who wants to help and support children buys, on their own, a toy or a school backpack for a child from a children's home, a uniform for a young hockey player or boxer, or a stroller for a disabled child [5]. Since the assistance is targeted, the volunteer first learns what a particular child wants to receive as a gift, and then does the most pleasant part: gives the child a piece of their care and faith in the dream. The volunteer movement «Kind Heart» evolves every year, involves new members and widens its area of attention. At the moment «Kind Heart» holds six traditional annual campaigns.
1. «The Gift of Santa Claus», in which each «Bashneft» employee can help children from orphanages, shelters and rehabilitation centers believe in a fairy tale, with each child receiving the most treasured gift that he asked Santa Claus for in his New Year letter.
2. Another project for children, «The Gift to the School», allows «Bashneft» employees to congratulate the children on the Day of Knowledge by providing the same children with all the necessary school supplies by September 1.
3. Care for people of the older generation who worked in different years at the enterprises of the «Bashneft» group, including veterans of the Great Patriotic War, takes shape in the project «Kind Hearts to the Company's Veterans». In this charity event volunteers give the elderly their attention, visiting them and helping them with household matters, which is sometimes worth more than any gift; twice a year, for the New Year and Victory Day, children and a new generation of oil workers, the company's volunteers, gather at the table in the home of each veteran. This strengthens the connection between generations and transfers experience and knowledge to young employees, the future of the oil industry. Volunteers often bring their own children along on these visits.
4. The environment is another aspect that «Bashneft» cannot pass by. A mandatory spring event for the company's employees is therefore the volunteer cleanup day: volunteers clean parks, children's camps, orphanages and shelters.
5. One of the campaigns held at «Bashneft» is literally vital for those whose lives depend on donated blood. On the days of the campaign a mobile unit of the Republican blood transfusion station in Ufa is set up on «Bashneft» premises, and any interested employee may donate blood and possibly save someone's life. This campaign is carried out several times a year.
6. «Bashneft» is interested in the health of its employees, so within the framework of the world day against smoking it campaigns for a healthy way of life, reminding its staff that they have the opportunity to engage in virtually any sport at «Bashneft»'s own sports facilities. The company also does not shy away from active participation in sporting events, often winning prizes at national competitions among enterprise teams.
In addition to these annual planned campaigns, «Bashneft» holds a variety of themed events, such as Oilmen's Day and New Year celebrations, as well as various meetings, conferences and congresses, where help with the organization is always needed and people are needed behind the scenes; these too are the volunteers. Since feedback is critical for the company, each employee can propose ideas about areas of concern and new projects for the volunteer movement. In the few years that the «Bashneft» volunteer movement has existed, there has been significant progress. The number of active volunteers taking part in events exceeds 700, and thousands of people are involved in charitable collections for volunteering projects and for the treatment of children with cancer [6]. For many employees it is now difficult, even impossible, to imagine the company without volunteering.
Once you have tried to do a good deed, you want to continue providing whatever help you can; the main thing, therefore, is to start. Recently «Kind Heart» held the New Year campaign «Gift of Santa Claus». The author, an intern in the Corporate Communications Department, coordinated the volunteers and the responsible supervisors of all «Bashneft» departments participating in the campaign.



The author's tasks included control over the collection of funds and the purchase of gifts for the children, checking that all the gifts were purchased and delivered, providing props and vehicles for the Christmas parties in the sponsored organizations, and supervising that the timing of the campaign was strictly observed. From the point of view of organization and implementation, a clear strategy for the campaign has been developed over its six years, so the volunteers know their job well and understand what they need to do and how. But one flaw remains: only a narrow circle of people, mostly those involved, knows that «Bashneft» is engaged in good deeds. The general public sees «Bashneft» as a state-owned oil company, a successfully developing leading oil producer in Russia that concludes contracts with partners in the oil business, but this is an incomplete image of the company. For the mass audience to know «Bashneft» as a socially responsible company, its image should be brought closer to the people. In our opinion, work in this direction should begin with positioning the company in the media: inviting journalists from various television channels and representatives of the print media and Internet portals to, for example, the volunteer campaign «Gift of Santa Claus» in the social rehabilitation center for minors in Ufa, or in the branch of the Center for Rehabilitation of Children and Adolescents with Disabilities in the Tatyshlinsky district (which is also under «Bashneft»'s patronage). In order not to create a crowd, attract excessive public attention and thereby produce the opposite, negative effect, only two or three media representatives should be invited.
Next, we should interact with specific journalists, publications, and TV channels to find exactly those who will broadcast the information to the masses competently and accurately, and who will regularly cover the projects of the «Kind Heart» volunteer movement and the social policy of «Bashneft» in general. The objective is to cooperate with these journalists on a regular basis, since volunteer actions in the Company are held regularly.

The next step in bringing the image of «Bashneft» closer to the people will be the bonus loyalty card program. This program was successfully launched in the Republic of Udmurtia and is now being implemented in our republic. Since the program has already started (1 December 2015) [7], advertising and PR specialists should actively promote the bonus cards. Advertising banners «We know ours!» have been hung in the city, but this is not enough: a series of commercials could be filmed with well-known people of Ufa and Bashkortostan who have already purchased the card, explaining the conditions of the program and sharing their impressions of using it. Showing these commercials on local television would ensure that both regular and potential customers of «Bashneft» filling stations not only learn about the new bonus system but also want to have the bonus card. «Bashneft» can also set up information stands in major shopping centers in Ufa, where the «Bashneft» employees who developed the program can explain the rules of participation, answer all questions, and sell the cards on the spot. In addition, the Company can hold a New Year's promotion at «Bashneft» filling stations: handing out a numbered ticket to everyone who comes to fill up, and then holding a prize draw at the end of the day. The prizes may include products with the «Bashneft» logo (T-shirts, notepads, key chains, flash drives, etc.), and the main prize could be three bonus cards with 1,000 bonus points already credited.

CONCLUSION

Thus, the image of the Company «Bashneft» needs constant adjustment in response to the challenges of the time, and its PR and advertising professionals have the opportunity to provide it. There is already some experience of such work at fuel and energy enterprises [8; 9], and among foreign experience, Toyota's activities in developing its corporate culture are of special interest [10]. However, before acting in this direction, it is necessary to enlist the support of the Company's Board of Directors, which would give this work a strong impetus and increase responsibility for its results.

References

1. PAO ANK "Bashneft" / PAO "Bashneft", 1995-2015. URL: http://www.bashneft.ru./company/
2. Social and charitable projects / PAO ANK "Bashneft", 1995-2015. URL: http://www.bashneft.ru./development/social/
3. "Bashneft" will allocate 20 mln rubles for social projects in the Orenburg Region / PAO ANK "Bashneft", 1995-2015. February 26, 2015. URL: http://www.bashneft.ru./press/news/7837/ (accessed 30/11/2015).
4. Personnel Policy / PAO ANK "Bashneft", 1995-2015. URL: http://www.bashneft.ru/development/personnel/?sphrase_id=480615
5. Volunteering of ANK "Bashneft" employees: "Kind Heart" / PAO ANK "Bashneft", 1995-2015. URL: http://portalbn.bashneft.ru/volunteers/Pages/about.aspx (accessed 30/11/2015).
6. From a "Kind Heart" to children / Your Neftekamsk.RF. URL: http://nefttv.ru/news/ot-dobrogo-serdca-detjam.html (accessed 30/11/2015).
7. We know ours // Bashkirskaya neft. 2015. No. 20. P. 3.
8. Shaipov S.A. Features of PR technologies of an energy corporation // Advertising: Theory and Practice. 2011. No. 1. URL: http://grebennikon.ru/article-e73i.html
9. Horash G. National features of PR in the energy sector. URL: http://www.advlab.ru/articles/article301.htm
10. Liker J., Hoseus M. Toyota Culture: Lessons for Other Companies. M.: Alpina Publishers, 2011.


The 1st International Workshop on

Technologies of Digital Signal Processing and Storing

International Scientific Issue Volume II

Signed for printing 25.11.2015. Print run 70 copies. Ufa State Aviation Technical University, USATU Editorial-Publishing Office, Ufa, K. Marx St., 12

International Conference «Technology of Digital Processing and Storing of Information». Volume 2. International scientific publication

Signed for printing 25.11.2015. Print run 130 copies (additional print run). Order No. 94. FGBOU VPO Ufa State Aviation Technical University, 450000, Ufa-center, K. Marx St., 12