EUROPEAN POLYTECHNIC INSTITUTE, KUNOVICE
PROCEEDINGS
NINTH INTERNATIONAL CONFERENCE ON SOFT
COMPUTING APPLIED IN COMPUTER AND
ECONOMIC ENVIRONMENTS
ICSC 2011
January 21, Hodonín, Czech Republic
Edited by:
Prof. Ing. Imrich Rukovanský, CSc.
Prof. Ing. Pavel Ošmera, CSc.
Ing. Jaroslav Kavka
Prepared for print by:
Ing. Andrea Kubalová, DiS.
Martin Tuček
Printed by:
© European Polytechnical Institute Kunovice, 2011
ISBN 978-80-7314-221-6
NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING
APPLIED IN COMPUTER AND ECONOMIC ENVIRONMENTS
ICSC 2011
Organized by
THE EUROPEAN POLYTECHNIC INSTITUTE, KUNOVICE
THE CZECH REPUBLIC
Conference Chairman
H. prof. Ing. Oldřich Kratochvíl, Ph.D., Dr.h.c., MBA
rector
Conference Co-Chairmen
Prof. Ing. Imrich Rukovanský, CSc.
INTERNATIONAL PROGRAMME COMMITTEE
O. Kratochvíl – Chairman (CZ)
W. M. Baraňski (PL)
K. Rais (CZ)
M. Šeda (CZ)
J. Baštinec (CZ)
J. Brzobohatý (CZ)
J. Diblík (CZ)
P. Dostál (CZ)
U. Chakraborthy (USA)
M. Kubát (USA)
P. Ošmera (CZ)
J. Petrucha (CZ)
I. Rukovanský (CZ)
G. Vértesy (HU)
I. Zelinka (CZ)
A. M. Salem (ET)
A. Borisov (LT)
M. Wagenknecht (GE)
ORGANIZING COMMITTEE
I. Rukovanský (Chairman)
P. Ošmera
A. Kubalová
I. Matušíková
Z. Pospíšilová
J. Kavka
M. Zálešák
T. Chmela
J. Míšek
M. Orviská
J. Šáchová
M. Tuček
M. Balus
P. Trnečka
Session 1: ICSC – Soft Computing and its application in management,
marketing and modern financial systems
Chairman: Prof. Ing. Petr Dostál, CSc.
Session 2: ICSC – Soft Computing – building modern computer
tools for process optimization
Chairman: Ing. Jindřich Petrucha, Ph.D.
Review Board
Doc. Wlodzimierz M. Baraňski, Wroclaw University of Technology, Wroclaw, PL
Prof. Ing. Petr Dostál, CSc., Brno University of Technology, Brno, CZ
Doc. RNDr. Jaromír Baštinec, CSc., Brno University of Technology, Brno, CZ
CONTENTS
THE MESSAGE FROM THE GENERAL CHAIRMAN OF THE CONFERENCE .................................... 7
SECTION 1
MAINTENANCE OF FIELD POINTS IN GEODESY VIA GIS .......................................................................... 11
DALIBOR BARTONĚK ............................................................................................................................................. 11
USAGE OF GIS IN ARCHAEOLOGY ............................................................................................................ 17
STANISLAVA DERMEKOVÁ, DALIBOR BARTONĚK ................................................................................................. 17
POSSIBILITIES OF USING THE OPEN PLATFORM „ANDROID“ IN THE INFORMATION SYSTEM OF
EUROPEAN POLYTECHNIC INSTITUTE, LTD......................................................................................... 25
JURAJ ĎUĎÁK ........................................................................................................................................................ 25
AUTOMATED COLLECTION OF TEMPERATURE DATA...................................................................... 31
GABRIEL GAŠPAR1, JURAJ ĎUĎÁK2 ....................................................................................................................... 31
INVENTORY AND INVENTORY MANAGEMENT SYSTEMS................................................................. 37
LUKÁŠ RICHTER, JAROSLAV KRÁL ........................................................................................................................ 37
ECONOMIC ORDER QUANTITY MODEL AND ITS UTILIZATION ..................................................... 43
LUKÁŠ RICHTER, JAROSLAV KRÁL ........................................................................................................................ 43
REGRESSION TREES IN SEA-SURFACE TEMPERATURE MEASUREMENTS ................................. 49
SAREEWAN DENDAMRONGVIT, MIROSLAV KUBAT, PETER MINNETT .................................................................... 49
USE OF GENETIC ALGORITHMS IN ECONOMIC DECISION MAKING PROCESSES................... 55
JAN LUHAN, VERONIKA NOVOTNÁ ........................................................................................................................ 55
BUSINESS WAR GAME AS A KIND OF BUSINESS SIMULATION ........................................................ 61
KAROLÍNA MUŽÍKOVÁ .......................................................................................................................................... 61
GIS IN MUNICIPALITY ADMINISTRATION.............................................................................................. 67
IRENA OPATŘILOVÁ, DALIBOR BARTONĚK ............................................................................................................ 67
CZECH SPACE TECHNOLOGY “KNOW-HOW” ENTERING THE INTERNATIONAL SPACE
STATION............................................................................................................................................................. 75
MAREK ŠIMČÁK ..................................................................................................................................................... 75
FIRST RESULTS OF CELLULAR LOGICAL PROCESSOR USED IN GENETICALLY
PROGRAMMING PROCESS ........................................................................................................................... 81
PETR SKORKOVSKÝ ............................................................................................................................................... 81
COMPLEX CHARACTERIZATION OF FERROMAGNETIC MATERIAL'S DEGRADATION BY
MAGNETIC ADAPTIVE TESTING................................................................................................................ 89
GÁBOR VÉRTESY1, IVAN TOMÁŠ2 .......................................................................................................................... 89
SECTION 2
OSCILLATION OF SOLUTION OF A LINEAR THIRD-ORDER DISCRETE DELAYED EQUATION 95
JAROMÍR BAŠTINEC, JOSEF DIBLÍK, ALENA BAŠTINCOVÁ ..................................................................................... 95
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
5
COUNTABLE EXTENSIONS OF THE GAUSSIAN COMPLEX PLANE DETERMINED BY THE
SIMPLEST QUADRATIC POLYNOMIAL .................................................................................................. 103
JAROMÍR BAŠTINEC, JAN CHVALINA ................................................................................................................... 103
SOLUTIONS AND CONTROLLABILITY OF SYSTEMS OF DIFFERENTIAL EQUATIONS WITH
DELAY............................................................................................................................................................... 115
JAROMÍR BAŠTINEC, GANNA PIDDUBNA .............................................................................................................. 115
THE CALCULATION OF ENTROPY OF FINANCIAL TIME SERIES.................................................. 121
PETR DOSTÁL, OLDŘICH KRATOCHVÍL ................................................................................................................ 121
TO PERFORMANCE MODELING OF PARALLEL ALGORITHMS ..................................................... 125
IVAN HANULIAK 1, PETER HANULIAK 2 ................................................................................................................ 125
OPTIMIZATION DATA STRUCTURES IN PARALLEL PRIME NUMBER ALGORITHM. ............. 133
ANDREJ HOLÚBEK ............................................................................................................................................... 133
PROGRAMMING METHODS ....................................................................................................................... 137
FILIP JANOVIČ, DAN SLOVÁČEK .......................................................................................................................... 137
COMPARISON OF AN ARMA MODEL VS. OF SVM AND CAUSAL MODELS APPLIED TO WAGES
TIME SERIES MODELING AND FORECASTING.................................................................................... 143
MILAN MARČEK1 MAREK HORVATH3 DUŠAN MARČEK1,2,3 ................................................................................. 143
PROCESSING OF UNCERTAIN INFORMATION IN DATABASES ...................................................... 151
PETR MORÁVEK1, MILOŠ ŠEDA2 .......................................................................................................................... 151
FROM THE RING STRUCTURE OF HYDROGEN ATOM TO THE STRUCTURE OF GOLD ......... 159
PAVEL OŠMERA ................................................................................................................................................... 159
NEURAL NETWORK MODELS FOR PREDICTION OF STOCK MARKET DATA ........................... 171
JINDŘICH PETRUCHA ............................................................................................................................................ 171
OMNI-WHEEL ROBOT NAVIGATION, LOCATION, PATH PLANNING AND OBSTACLE
AVOIDANCE WITH ULTRASONIC SENSOR AND OMNI-DIRECTION VISION SYSTEM............. 175
MOURAD KARAKHALIL1, IMRICH RUKOVANSKÝ2 ................................................................................................ 175
ASYMPTOTIC PROPERTIES OF DELAYED SINE AND COSINE ........................................................ 183
ZDENĚK SVOBODA ............................................................................................................................................... 183
GAME THEORY.............................................................................................................................................. 189
MARIE TOMŠOVÁ ................................................................................................................................................ 189
THE MESSAGE FROM THE GENERAL CHAIRMAN OF THE CONFERENCE
Dear guests and participants of this conference,
Ing. Oldřich Kratochvíl,
h. prof., Ph.D., Dr.h.c.,MBA
Prof. Ing. Imrich
Rukovanský, CSc.
Let me welcome you to the ninth International Conference on Soft Computing
Applied in Computer and Economic Environments, ICSC 2011. Over the last nine
years this annual conference has become an important meeting for presenting
the latest knowledge and results of the collaborating universities and
workplaces involved in modern optimization methods and soft computing tools
such as fuzzy control, evolutionary algorithms and neural networks. Papers
from this conference are cited by our academics in their contributions to
many international conferences abroad, most importantly the World Congress
on Engineering and Computer Science, WCECS 2009, San Francisco, CA, Oct.
20-22, 2009; the 29th International Symposium on Forecasting, Hong Kong,
China, June 24-26, 2009; and the 30th International Symposium on Forecasting,
San Diego, USA, June 20-23, 2010. In this way our school is entering the
awareness of the broader professional public.
For this reason authors not only from the Czech Republic but also from Russia, the Slovak
Republic, Poland, Hungary, the USA and Ukraine present papers at our conference.
As every year, the papers are divided into two groups. The first focuses on soft
computing and its application in marketing, management and modern financial systems; the
second concentrates on modern computer tools for process optimization.
Dear guests, I believe that this anniversary ninth ICSC 2011 will further deepen the
contacts and information exchange among the collaborating universities and other institutions,
both at home and abroad, in the area of the development of modern optimization methods and
the application opportunities of soft computing.
Hodonín, January 21st, 2011
Oldřich Kratochvíl
Honorary professor, Ing., Dr.h.c., MBA, Ph.D.
rector
SECTION 1
MAINTENANCE OF FIELD POINTS IN GEODESY VIA GIS
Dalibor Bartoněk
European Polytechnic Institute, Ltd. Kunovice
Abstract: This article deals with a geographic information system (GIS) for the maintenance of
the point field of the Králický Sněžník locality (Czech Republic). The system was created to
support field training in the branch of Geodesy and Cartography. During reconnaissance the
students can thus identify the points more easily, which contributes to effective planning and
solution of the measuring tasks assigned in the field. Besides cartographic data in the form of
a base map and an orthophoto of the locality, the topography of the individual points, including
their photographs and further descriptive attributes, is available in digital form. Currently
the point fields in Jedovnice in the Blansko region and Nesměř near Velké Meziříčí are being
processed in the same way. All work on this system is done in Geomedia Intergraph and ARC/INFO.
Key words: field point, GIS, geodesy and cartography course.
INTRODUCTION
In the course of their studies at the Institute of Geodesy, Faculty of Civil Engineering (FCE), Brno
University of Technology (BUT), students are required to take the obligatory subject Training in Field. The
subject is taught in the summer term in all three years of bachelor studies and also in the first year of the
follow-up master's studies. The tuition is held in 3 different localities in the Czech Republic: at Nesměř near
Velké Meziříčí (1st year), at Jedovnice in the Blansko district (2nd and 3rd year) and at Dolní Morava below
Králický Sněžník (1st year of the follow-up master's studies). These localities are totally unknown to many
students. In order to realize the tuition program in the given term and required quality, it is necessary to
provide students with information about the locality. Therefore a project dealing with the base data within a
geographic information system (GIS) in the Geomedia Intergraph or Arc/Info system has been launched. Its aim
was to make a well-arranged map in classical and electronic form, completed with database information concerning
the 3 above-mentioned localities. This article describes the GIS project for the Dolní Morava locality. The
system is updated each year, which includes the necessary maintenance of the point field.
WORK ON THE PROJECT
Work on the project was divided into 4 stages:
• data, document and information capture about the point field of a certain locality,
• confrontation of the obtained materials with reality – maintenance of the point field if necessary,
• digitalization of documents and data and classification of information,
• data processing in the Geomedia Intergraph and ARC/INFO systems.
The first stage deals with the choice of topography and the proposal of point attributes. The whole process can
be characterized as an iteration of requirements concerning the information, confronted with the real
possibilities of the system, gathered from potential users.
The second stage proceeds directly in the locality in the form of reconnaissance. The documentation of the point
field was compared with its real state in the field.
In the third stage the topographies of the points were digitalized and modified in Adobe Photoshop and the
proposal of point attributes was completed. Each point is characterized by a point number (code), name, Y, X
coordinates in S-JTSK (the system of the unified trigonometric cadastral net), height in the Bpv system (Baltic
vertical datum after levelling), a picture of its topography, a note and 3 photos taken in the terrain – see
Fig. 1. The photographs are taken in 2 different significant directions and completed with a detail of the point
to make point identification as easy as possible. For better orientation the points were split into 4 categories
(total numbers in brackets):
• points of the state levelling net (19),
• training levelling points (58),
• points determined by the GPS method (17),
• piers (12).
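The attribute set just described maps naturally onto a simple record type. The sketch below is illustrative only (the class and function names are our own and not part of the Geomedia/ARC/INFO data model), but it captures the attributes and the four-category grouping listed above:

```python
from dataclasses import dataclass, field

@dataclass
class FieldPoint:
    number: str                      # point number (code)
    name: str
    y: float                         # S-JTSK Y coordinate [m]
    x: float                         # S-JTSK X coordinate [m]
    height_bpv: float                # height in the Bpv levelling system [m]
    category: str                    # e.g. "state levelling net", "pier"
    photos: list = field(default_factory=list)  # up to 3 terrain photos

def points_by_category(points):
    """Group points into the categories used for orientation in the field."""
    groups = {}
    for p in points:
        groups.setdefault(p.category, []).append(p)
    return groups
```

A record like this carries everything needed to render the attribute window of Fig. 1 (the topography sketch and photographs would be stored as hypertext links).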
Having prepared all materials, work in Geomedia Intergraph was started (stage 4). The map window of the given
locality (see Fig. 2) contains:
• 2 raster layers of the base map of the Czech Republic for the chosen locality at a scale of 1 : 25 000,
• 22 orthophoto layers, each 2500 x 2000 m, with a resolution of 50 cm/pixel,
• geographic elements, e.g. the points of the point field categorized as above,
• text descriptions.
While the raster layer of the base map was transferred into S-JTSK without any further adjustment, the
orthophoto snaps had to be photomerged from 22 segments into one mosaic layer. After the mosaic was composed,
the whole orthophoto layer was cropped according to the extent of the point field and then georeferenced into
S-JTSK. Data entry was followed by editing of the individual elements. Each element was classified, placed in
the map window according to its coordinates and completed with attribute values (see Fig. 1 and 2). When
editing is finished, the project is transferred into the print output and completed with cartographic elements
(scale, north arrow, cadastral boundaries, legend, etc.).
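Georeferencing ties each pixel of the mosaic to map coordinates through an affine, world-file style mapping. A minimal sketch for a north-up raster follows; the function name, the generic east/north axis convention and the sample origin are our own illustrations (the axis orientation of an actual S-JTSK export should be checked against its world file):

```python
def world_to_pixel(e, n, top_left_e, top_left_n, pixel_size=0.5):
    """Map world coordinates (e east, n north) of a point to (col, row)
    in a north-up raster whose top-left pixel corner is at
    (top_left_e, top_left_n). pixel_size is 0.5 m, i.e. the 50 cm/pixel
    resolution of the orthophoto layers."""
    col = int((e - top_left_e) / pixel_size)
    row = int((top_left_n - n) / pixel_size)   # image rows grow downwards
    return col, row

# One 2500 x 2000 m orthophoto segment at 50 cm/pixel is 5000 x 4000 pixels.
SEGMENT_COLS = int(2500 / 0.5)
SEGMENT_ROWS = int(2000 / 0.5)
```

The same arithmetic, run in reverse, is what the GIS does when it places an edited point element in the map window from its S-JTSK coordinates.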
Fig. 1 Attributes of the points: point code, point number, point name, Y, X coordinates, height H,
topography (hypertext), point photos (3x, hypertext), note
RESULTS OF THE PROJECT
The project's contribution consists in getting not only a classic map but also an output in a digital PMF
(Published Map Format) file created in the Arc/Info module. A PMF file can be viewed in the ArcReader
application, which is freeware and can be downloaded from the ESRI (US) or ARCDATA (CZ) internet home pages [6].
ArcReader supports all ArcGIS functions with the exception of editing. Thus the basic functions become
accessible even to users whose computers lack an installation of the Geomedia Intergraph or Arc/Info products.
The output is processed in 2 variants: for the teachers (full version) and for the students (limited version).
The student version omits some important information whose availability might make the tuition less effective.
Fig. 2 Map window of locality Dolní Morava in Geomedia Intergraph
CONCLUSION
The results of the project described above serve both the students and the teachers in the field of geodesy and
cartography at FCE, BUT. As the point field undergoes frequent changes, the project must be kept up to date and
supplemented. Students will be involved in this activity mainly in the subjects Computer Graphics, Databases,
Land Information Systems and Digital Terrain Model. Students will thus get acquainted with modern GIS
technology and the tuition will be directed to the needs of practice.
Fig. 3 Map layout of Dolní Morava locality
LITERATURE
[1] ALEXY, J. Informačný a znalostný manažment v pedagogickom procese a praxi. Trenčín : FSEV TnUAD, 2006.
[2] BRUCE, CH. S. The Seven Faces of Information Literacy. AUSLIB Press, 1997.
[3] DRUCKER, P. E. Management. Budoucnost začíná dnes. Praha : Management Press, 1992.
[4] VEBER, J. Management. Praha : Management Press, 2003.
[5] FORAL, J. Geodesy I. Modul 01 – Geodetic training I. Lecture notes, FCE BUT, electronic version, 64 pp, 2005, in Czech.
[6] FIŠER, Z.; VONDRÁK, J.; PODSTAVEK, J. Training in Field II. Lecture notes, FCE BUT, electronic version, 64 pp, 2005, in Czech.
[7] JANDOUREK, J.; BLAŽEK, R. Geodesy (guide in training). 2nd ed., ČVUT, Praha, 1990, 219 pp, ISBN 80-010-0305-1, in Czech.
[8] NEVOSÁD, Z.; VITÁSEK, J.; BUREŠ, J. Geodesy IV: Coordinate Computation. 1st ed., CERM, Brno, 2002, 157 pp, ISBN 80-214-2301-3, in Czech.
[9] ČADA, V. Geodesy. Electronic lecture notes, in Czech [on-line]. Available from: http://gis.zcu.cz/studium/gen1/html/index.html
[10] Manuals of Geomedia Intergraph [on-line]. Available from: http://www.intergraph.cz/, http://www.intergraph.com/
[11] Manuals of ARC/INFO [on-line]. Available from: http://www.arcdata.cz/, http://www.esri.com/
ADDRESS
Ing. Dalibor Bartoněk
Evropský polytechnický institut, s.r.o.
Osvobození 699
686 04 Kunovice
Czech Republic
email: [email protected]
USAGE OF GIS IN ARCHAEOLOGY
Stanislava Dermeková, Dalibor Bartoněk
Brno University of Technology
Abstract: The topic of this paper is GIS in medieval archaeology. The aim of the project is the
localization, in ArcGIS, of the community called Kocanov, which disappeared in the Middle Ages,
and the creation of a prediction based on the characteristics of human activity and its relation
to space. Estimation of the habitation area of the medieval community is based on spatial
analysis and presents new opportunities for classical archaeology. The paper describes the
individual procedures of building an archaeological prediction model. The model makes use of
hydrological modeling features, the distance from water sources, terrain steepness and the
visibility of observed points. The result of the analysis is focused on determining an
appropriate settlement location. The resulting localization is compared with historical sources
and analyzed using archaeological works. The interpretation and evaluation of the achieved
results is a starting point for new opportunities which could lead to more detailed
identification of potential archaeological sites.
Keywords: GIS, archaeology, localization, old maps, prediction model, spatial analysis.
INTRODUCTION
Fig. 1. Overview of the focused locality

Currently, geographic information systems are the main element of a non-destructive approach to spatial data.
Their use in various applied sciences is indispensable nowadays. These sciences include archaeology, which
works with spatial information associated with the historical and contemporary world. The aim of this project
is to push classical archaeology toward a comprehensive link between the past and the present world. This work
deals with the possibilities of linking spatial information in a GIS environment with archaeological knowledge
of the past: searching for and obtaining historical and contemporary documents and evaluating their use for
archaeological prediction. The main emphasis is on landscape analysis using GIS spatial analysis and on
analysis resulting from historical, ethnographic, archaeological and environmental research. The aim is a
comprehensive analysis to locate the defunct medieval village of Kocanov in the area bounded by the
municipalities of Jinačovice, Moravské Knínice, Chudčice and Rozdrojovice – see Fig. 1. In this paper we deal
with the verification of each characteristic feature of the site, its archaeological characteristics,
reconnaissance, the search for literary documents and map reconstruction. The next part deals with the
archaeological prediction model and a procedure for solving the spatial analysis. In the final section we
evaluate and interpret the results of tracking the vanished medieval village of Kocanov.
CONCEPTION OF THE PREDICTION MODEL
The first attempt at pragmatically oriented archaeological prediction was recorded in American literature [9].
The solution described in this article was inspired by the projects [10] and [11]. Prediction methods in
archaeology can be divided into inductive and deductive. An inductive method works on the basis of already
obtained findings (artifacts). In our case we make use of a deductive method, which can be used to predict
sites without archaeological findings. The conception of the prediction model is depicted in Fig. 2. The
results in [10] and [11] confirmed the generally accepted finding that the location of archaeological findings
depends on the parameters of the natural environment (especially the distance from the nearest watercourse, the
slope of the terrain, the altitude and possibly the nature of the soils). In addition, there is a strong link
to the previous history of the country, reflected in ideological motivation: residential areas avoided the
areas of earlier sites even after many centuries, which must have had some ritual justification. Furthermore,
settlement processes depend on social factors. All these facts are reflected in the input parameters of the
prediction model.
Fig. 2. Conception of the prediction model: the input parameters (environmental factors, socio-economic
factors, religious and ideological factors, landscape history) feed a deductive GIS-based model for predicting
areas in archaeology; its outputs (variations of areals, interpretation of results) are evaluated by experts in
the field of archaeology, and this feedback corrects the input parameters in further iterations until the
solution is satisfactory and the results can be presented.
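The environmental parameters named above (distance from the nearest watercourse, slope of the terrain) are derived from raster layers. A toy sketch of the two computations on a small DEM grid in pure Python follows; the function names and the assumed 10 m cell size are illustrative, and a production model would use the GIS's own raster tools instead:

```python
import math

def slope_deg(dem, i, j, cell=10.0):
    """Slope in degrees at DEM cell (i, j), estimated from central
    differences; `cell` is the assumed raster resolution in metres."""
    dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
    dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

def distance_to_water(i, j, water_cells, cell=10.0):
    """Euclidean distance in metres from cell (i, j) to the nearest
    watercourse cell, given as a list of (row, col) indices."""
    return min(math.hypot(i - wi, j - wj) for wi, wj in water_cells) * cell
```

Each cell of the study area then carries a slope value and a water distance, ready to be scored against the model's criteria.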
All these facts are crucial for predicting the occurrence of prehistoric sites. If the location of prehistoric
sites depends not only on the natural environment but also on social and ideological factors and on the history
of the country, then prediction from the study of natural conditions alone is obviously difficult, and in some
cases even impossible. It is important to study the relationships of the social dimensions of the settlement
areas, to determine their structure and history, and to make use of GIS tools to achieve these aims. Therefore
feedback has been inserted into the model. Its function is to correct the input parameters of the model. The
results obtained by experimenting with the model are tested in an iteration cycle. The test is realized in
cooperation with experts in the field of medieval archaeology.
PREDICTION MODEL DEVELOPMENT
A simple prediction model was proposed in the project for estimating the potential location of the defunct
medieval village of Kocanov. Verification with technically advanced software is now pushing classical
archaeological prediction forward and offering new possibilities. The goal of APM (Archaeological Predictive
Modeling) is not merely a prediction associated with the clarification of human activities on already
documented sites and their relation to space [1]; an APM also presents a forecast for so far undocumented
vanished settlements in their spatial context. In developing an archaeological prediction model it is necessary
to cooperate with archaeologists or to use literature which deals with archaeological interpretation. It is
necessary to consider the contexts associated with certain factors. These factors include the suitability of
the environment, an economic factor, a minimum-effort factor, a defensive factor, etc. For the considered
factors there are certain criteria from which the prediction model for the spatial analysis in ArcGIS is
effectively created.
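Criteria of this kind are usually combined by a weighted overlay, which is the common way such a deductive model is realized in spatial analysis. The sketch below uses our own scoring assumptions (the weights, the 300 m water cutoff and the 15 degree slope cutoff are illustrative, not values taken from the project):

```python
def score_distance_to_water(d, cutoff=300.0):
    """Score 1.0 at a watercourse, falling linearly to 0 at `cutoff` metres."""
    return max(0.0, 1.0 - d / cutoff)

def score_slope(s, cutoff=15.0):
    """Score 1.0 on flat ground, 0 for slopes of `cutoff` degrees or more."""
    return max(0.0, 1.0 - s / cutoff)

def suitability(factors, weights):
    """Weighted overlay of factor scores in [0, 1]; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * factors[k] for k in weights)

# Example cell: 150 m from a stream, gentle 3 degree slope, point visible.
cell_score = suitability(
    {"water": score_distance_to_water(150.0),
     "slope": score_slope(3.0),
     "visibility": 1.0},
    {"water": 0.5, "slope": 0.3, "visibility": 0.2},
)
```

Cells with the highest combined score form the candidate settlement areas that the feedback loop of Fig. 2 then submits to the archaeologists for evaluation.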
Fig. 3. Development of the prediction model: the initial phase (study of historical materials – text or
graphical documents – and rough localization of the area of interest), data preparation (collection of
historical and current data such as text reports, maps and plans, followed by data classification,
transformation and georectification), spatial analysis and data interpretation (data modeling and spatial
analysis in GIS, more detailed localization of the area of interest, geodetic surveying of the locality in the
field), and the final phase (outputs of results, presentations), all in cooperation with experts in the branch
of archaeology.
The whole project consists of 4 phases – see Fig. 3:
• initial phase,
• data preparation,
• spatial analysis and data interpretation,
• final phase.
The model used is a generalization of the methods used in publication [7].
THE INITIAL PHASE
The aim of this project is to identify the location of the defunct medieval village of Kocanov. Verification of
the existence of a medieval settlement is today feasible only with the help of written historical sources and
local names in old maps and documents, which provide reproducible archaeological evidence of the site in
question [4]. Historical documents from the Middle Ages are preserved only in the form of literary sources.
Therefore the effort was initially focused on the history of the wide area around the municipalities of
Jinačovice, Moravské Knínice, Chudčice and Rozdrojovice; later it concentrated on the Kocanov locality.
Reconnaissance of the site took place on the ground under the supervision of an archaeologist, where each
detail had to be treated in accordance with archaeological practice [5]. The individual sites were compared
with historical map data, which were subsequently transformed in ArcGIS for further processing in the field of
spatial data analysis. The study of historical literature is very important. The previous chapter suggests that
successful prediction cannot be made without clarifying the fundamental question of what we want to predict or
expect; for this we have at our disposal not only environmental factors but also a number of historical, social
and ideological factors.
DATA PREPARATION
Data collection and search are the most important stages of designing a project for archaeological purposes in
the ArcGIS environment. Data sources are essential for building an information system as a prerequisite for
spatial analysis. Searching for suitable source materials is the largest part of the whole work. Primary sources
include surveying, GPS measurement methods, Remote Sensing (RS), photogrammetry, etc. The next step
includes searching a variety of cartographic materials: custom drawings and maps, databases in tabular form,
literary and historical text documents, archaeological and environmental data, data from historical maps, and
ethnographic research. Both primary and secondary data sources were used for the purposes of the project.
The evidence supporting archaeological interpretations is primarily historical and ethnographic data. All input
data are shown in Table 1. When working with old historical maps we use simple geometric transformations
with a small number of identical points. The transformation is expressed as a polynomial of n-th degree. The
position of each pixel of the background raster is expressed in the map coordinate system. The georeferencing
process was carried out using ground control points, which tie particular pixels to grid coordinates. The quality
of the input data was consulted with archaeologists of the Archaeological Institute of the Academy of Sciences
of the Czech Republic (ASCR), Brno. Because the number of identical points in the input data is low, greater
emphasis was placed on quality control of the data with regard to the subsequent spatial analysis. The vector
bases underwent a series of adjustments which made them much clearer. Raster documents in digital form are
usually created by scanning analogue maps, from aerial or satellite imagery, or from photographic
documentation. Most historical documents require layout adjustments: trimming redundant information and
unifying map sheets into a more compact base. All image adjustments were made in Adobe Photoshop 10.0.
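The first-degree polynomial georeferencing described above can be sketched as a least-squares fit of an affine transform to ground control points. This is a minimal illustration only (it assumes NumPy is available; function names and the sample control points are our own, not part of the project's toolchain):

```python
import numpy as np

def fit_affine(src, dst):
    """Fit a first-degree polynomial (affine) transform mapping pixel
    coordinates `src` to map coordinates `dst` by least squares.
    Needs at least 3 non-collinear ground control points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Design matrix: [x, y, 1] for each control point
    A = np.column_stack([src, np.ones(len(src))])
    # Solve for the map-X and map-Y coefficient columns at once
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs  # shape (3, 2)

def apply_affine(coeffs, pts):
    """Transform pixel coordinates `pts` with the fitted coefficients."""
    pts = np.asarray(pts, dtype=float)
    A = np.column_stack([pts, np.ones(len(pts))])
    return A @ coeffs

# Illustrative control points: a pure translation by (+100, -50)
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(100, -50), (110, -50), (100, -40), (110, -40)]
c = fit_affine(src, dst)
```

With more control points, ArcGIS offers higher polynomial degrees; the affine case shown here is the common choice when few identical points are available.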
Category | Data source (company) | Date | Sheets | Content | Format
Sources representing spatial features | Czech Office for Surveying and Cadastre | 2003 | 8 | Orthophoto | GeoTIFF
Sources representing spatial features | Czech Office for Surveying and Cadastre | 2002 | 2 | Fundamental Base of Geographic Data, ZABAGED (3D) | Vector SHP
Sources representing spatial features | Archive of Surveying and Cadastre | 1850-1872 | 18 | Historical maps: scanned mandatory imprints of the imperial stable cadastre of Moravia and Silesia | 8-bit JPEG
Sources representing spatial features | Military Geography and Hydrometeorology Office, Dobruška | 1950, 1976 | 4 | Aerial survey photos, 1814 dpi | –
Sources representing archaeological features | National Heritage Institute, Brno | 2010 | 1 | National Archaeological List of municipalities for prehistory and the Middle Ages, areas with archaeological finds | Vector SHP
Sources representing archaeological features | Moravian Library; Archaeological Institute of the ASCR (Brno); State District Archive Brno-Country (Rajhrad) | 1827-2010 | 10 | Loaned archaeological literature | –
Sources of geological character | Research Institute of Amelioration and Soil Conservation, Department of Soil Services, Brno | 2010 | 1 | Digital map of valuated soil-ecological units at a scale of 1:5000 | Vector SHP
Sources of geological character | Map Services of the Public Administration Portal, CENIA | 2010 | 1 | Map of potential natural vegetation in the Czech Republic | Vector SHP
Source from climatological science | Moravian Library | 2003 | 1 | Landscape Atlas of the Czech Republic | GeoTIFF
Table 1. Input data
SPATIAL ANALYSIS AND DATA INTERPRETATION
The core of the project was to exploit the possibilities of spatial analysis in ArcGIS. Spatial analysis of the
individual settlements was carried out over the area of interest using landscape characteristics (landforms,
river system, topographical features of the country). The analysis also took into account other factors such as
environmental suitability, the economic factor, the minimum-effort factor, the defensive factor and the cult
factor. The time factor is an important variable, because the country and its components are constantly
evolving; it is therefore not possible to draw direct links between the current environment and the environment
of the past. Certain targets have to be set when planning spatial analysis. The primary objective was to find
appropriate combinations of spatial analyses in ArcGIS. A further objective was a comprehensive spatial
analysis based on the documents obtained for the solved site. A digital terrain model was created, to which
the particular analyses of landscape characteristics (terrain slope, shaded relief, aspect) were applied. The
base used for making the DMT (Digital Terrain Model) of the given area was ZABAGED (Fundamental Base of
Geographic Data), a digital model derived from the vector map image of the Base Map 1:10 000 of the Czech
Republic (ZM10). The results of the landscape-feature analyses were used in further spatial analysis procedures.
Multiobjective modelling was used for the spatial analysis. The model makes use of quantitative criteria
(calculating the areas of polygons, searching for areas of a given acreage) and qualitative criteria
(reclassification methods, such as processing gradient maps of the defunct settlement with likely slopes in the
ranges 0-7% and 7-90%, and buffer zones, i.e. distance analyses such as distance to the village of 150 m or
300 m from a water source). Certain quantitative and qualitative criteria for the landscape components and
settlement in the Middle Ages had to be taken into account.
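The slope reclassification mentioned above can be sketched as a simple mapping of slope values to suitability classes. The thresholds come from the text (0-7% likely, 7-90% unlikely); the function name and the sample "raster" are illustrative only:

```python
def reclassify_slope(slope_percent):
    """Reclassify a slope value (in %) into the suitability classes used
    in the analysis: 0-7 % favourable for a medieval settlement,
    7-90 % unfavourable. Thresholds follow the text above."""
    if 0 <= slope_percent <= 7:
        return 1   # likely settlement area
    if 7 < slope_percent <= 90:
        return 0   # unlikely settlement area
    raise ValueError("slope out of expected range")

# Reclassify a tiny sample raster, cell by cell
raster = [[2.5, 8.0], [45.0, 6.9]]
reclassified = [[reclassify_slope(v) for v in row] for row in raster]
```

In ArcGIS itself this corresponds to a Reclassify operation on the slope raster derived from the DMT.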
For another archaeological prediction model, hydrological modelling on the basis of topographic features was
used (a water source in the solved area indicates the likelihood of human settlement in the Middle Ages).
Analyses of terrain elements using general geomorphometric data related to hydrology are beneficial to the
procedure of the archaeological prediction model (APM). For the hydrological modelling of the river network,
the smallest streams, acting as tributaries of several rivers, were made use of. The hydrological modelling
procedure itself consisted of creating a hydrologically correct DMT, on which various analyses were performed:
determination of the direction of runoff, accumulated runoff, the drainage network, and drainage and river
lengths [2]. The DMT raster together with a shapefile of waterways and lakes was used as the basis for the
hydrological modelling. At the end of the hydrological modelling we set distance criteria (buffer zones) from
the water source: areas within 300 m and areas within 150 m of the water source.
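The runoff-direction step above can be sketched with the standard D8 scheme, where each cell drains to its steepest-descent neighbour. This is a minimal illustration, not the ArcGIS implementation; the direction codes follow the common powers-of-two convention (1 = E, 2 = SE, ..., 128 = NE), and the tiny DEM is invented:

```python
import math

# Offsets of the eight neighbours mapped to D8 direction codes
D8_CODES = {(0, 1): 1, (1, 1): 2, (1, 0): 4, (1, -1): 8,
            (0, -1): 16, (-1, -1): 32, (-1, 0): 64, (-1, 1): 128}

def flow_direction(dem, r, c):
    """Return the D8 code of the steepest downhill neighbour of cell
    (r, c), or 0 if the cell has no lower neighbour (a pit/outlet)."""
    best_code, best_drop = 0, 0.0
    for (dr, dc), code in D8_CODES.items():
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
            dist = math.hypot(dr, dc)           # diagonal steps are longer
            drop = (dem[r][c] - dem[rr][cc]) / dist
            if drop > best_drop:
                best_drop, best_code = drop, code
    return best_code

# Invented 3x3 elevation grid sloping towards the lower-right corner
dem = [[9, 8, 7],
       [8, 6, 5],
       [7, 5, 3]]
```

Accumulated runoff and the drainage network are then derived by following these directions downstream across the whole raster.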
Each step, from creating the slope maps to the hydrological modelling, moved the process towards the targeted
location of the vanished medieval village. From the finished reclassified layers (buffer zones within 150 metres
of a water source from the hydrological modelling, and gradient maps with slopes up to 7 %), the individual
areas of possible occurrence of a medieval settlement were created.
For the final analysis, which aimed to identify the likely site of the defunct village of Kocanov, we set several
assumptions and criteria. A clue identified in literary sources became the precondition for a distance analysis:
we set the criterion that the village was likely located within 1500 metres of the site “U tří křížů”. Around the
centre of this site, buffer zones at 500, 1000 and 1500 m were created. We then intersected these buffer zones
with the previous results of the topographic and hydrological modelling. In the next stage of the spatial
analysis the visibility analysis was used. The parameters for the visibility analysis were chosen according to
historical documents. Since the visibility function may suffer from certain inaccuracies in locating the medieval
settlement, the height of the likely sites was set 3 m higher; this reflects the possible existence of watchtowers
at these points. When using single viewpoints for the village location it was a problem to find the exact spot in
nature. A settlement is an areal feature, so the visibility of “lump settlements” spreading out in all directions
had to be taken into account; the analysis was therefore performed on line elements, even though for graphic
simplicity the result is shown as a point object.
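The effect of the 3 m observer offset can be illustrated with a one-dimensional line-of-sight check along a terrain profile. This is a sketch under our own simplifying assumptions (straight sight line, profile as distance/elevation pairs); the numbers are invented:

```python
def visible(profile, observer_height=3.0):
    """Check whether the last point of a terrain profile is visible from
    the first one. `profile` is a list of (distance, elevation) pairs;
    the observer stands `observer_height` metres above the terrain,
    mirroring the +3 m watchtower offset used in the analysis."""
    x0, z0 = profile[0]
    z0 += observer_height
    xt, zt = profile[-1]
    sight = (zt - z0) / (xt - x0)        # slope of the sight line
    for x, z in profile[1:-1]:
        # Terrain blocks the view if it rises above the sight line
        if z > z0 + sight * (x - x0):
            return False
    return True

# Invented profile: a small rise at 100 m hides the target from ground level
profile = [(0, 300), (100, 301), (200, 299)]
```

Here the target is visible only with the raised observer, which is exactly why the watchtower assumption changes the viewshed result.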
In conclusion, a spatial analysis was conducted which combined the partial results of the previous analyses
into a comprehensive one. This comprehensive analysis can determine the potential sites of medieval villages.
The quantitative and qualitative criteria were again considered in the ArcGIS environment. The places
complying with the set criteria were consulted with experts in the branch of archaeology. We chose the most
likely locations for a medieval village and divided them into two categories: the first category, marked by a
blue ellipse, contains the most likely areas; the second, marked by a yellow ellipse, contains the less likely
areas. A graphical representation is shown in Fig. 4. All specified areas require personal assessment on the
ground against the statements of archaeologists in various literary sources.
The survey of the localized settlement was carried out with a hand-held GPS apparatus (red line in Fig. 1).
The apparatus used, a Trimble Recon GPS XC, is a rugged field computer with an integrated Compact Flash
GPS Pathfinder XC receiver. The device is designed for mobile data collection and updating of GIS.
THE FINAL PHASE
The goal was a comprehensive analysis making it possible to verify the targeting of the vanished village of
Kocanov technically. In agreement with the site supervisor, we resolved and decrypted the individual verbal
descriptions and statements of archaeologists against the actual state of the country. The surveying process
was preceded by a comprehensive field assessment of the conclusions of the analysis. The results of the
complex analysis could be verified on 3 sites of interest (see Fig. 4, blue ovals with serial numbers 1, 2, 3).
Fig. 4. The result of spatial analysis
The interaction between the general archaeological assumptions about a medieval village and its actual
incidence is very important. In conclusion, the connections between the results of the spatial analysis of
landscape elements and the factors that follow from historical, ethnographic, archaeological and environmental
research were discussed. The result of the preceding sequence of analyses rests on certain assumptions and
need not be 100% reliable, so it cannot be declared that the resulting position estimate is certainly right.
Therefore, in conclusion, we tried to assess the mutual ties between the general archaeological assumptions
about the medieval village and the result of the spatial analysis. This assessment covered: evaluation of the
analysis of topographic and hydrological features, evaluation of the analysis in terms of the natural conditions
of the territory, evaluation of the visibility analysis with signs of housing development, and evaluation of the
overall position of the defunct medieval village.
BENEFITS AND FURTHER USE OF THE PROJECT
The work presents new possibilities in classical archaeology, moving the science forward into other dimensions.
The Archaeological Institute of the ASCR cooperated on the project, together with further institutions dealing
with the history of the Brno-Country district, in short, organisations devoted to archaeology. The project
demonstrates the technical diversity of today, which can also be used in "non-technical" sciences. We propose
a roadmap for the implementation of spatial analysis, and we have tried to link the different results to each
other using literary sources and historical documents. For the spatial analysis we took advantage of
hydrological and topographic modelling, which resulted in the analysis of the relief. In the spatial analysis we
considered quantitative and qualitative criteria, which were set according to historical documents and
archaeological sites. Using historical documents we came close to the location of the village through a
retrospective reconstruction of the Kocanov municipality [3]. The retrospective reconstruction of the
development of settlements and the formation of the cultural landscape also took place in the ArcGIS
environment. To sum up, this project evaluates the results of spatial analysis against factors that issue from
historical, ethnographic, archaeological and environmental research. The method used demonstrated new
possibilities of data collection and analysis for verifying archaeological objects, whether found or not. The use
of ArcGIS extensions can present the complex connection between the past and the present world. The project
results will further serve the archival video documentation of the Archaeological Institute of the ASCR for
Brno-Country and the historical archives of the municipal offices at Jinačovice and Moravské Knínice, to be
used for presenting the history of the communities.
CONCLUSIONS
Finally, attention should also be drawn to the dangers of prediction methods, and users should be aware of
them. First, the results of prediction are of a probabilistic nature: they warn of an increased probability (risk)
of archaeological sites, but do not guarantee their presence. Second, areas with a high archaeological potential
must not automatically be seen as the only interesting part of the archaeological landscape; outside them there
may be extremely interesting archaeological information, sometimes all the more significant for occurring in
unexpected places. Forecast maps therefore cannot replace the expert's opinion, but they can be useful, both
for the expert and for other users. We are aware that archaeological prediction leads to a narrowing of the
potential space of suspected archaeological sites, not to their precise delimitation. Even this narrowing can in
practice be significant enough to compensate for the costs embedded in the prediction model. We believe that
the proposed project could provide valuable assistance in various spheres of public life, government and business.
ACKNOWLEDGEMENTS
The authors thank Mgr. O. Šedo, PhD. from Archaeological Institute of ASCR in Brno for his cooperation and
assistance in interpreting the results of spatial analysis. They would also like to thank professor J. Unger from
Masaryk University in Brno for consultation in the development of prediction model.
REFERENCES
[1] GOLÁŇ, J. Archeologické predikatívní modelování pomocí geografických informačných systémů
(Archaeological predictive modeling using geographic information systems). Faculty of Arts of Masaryk
University : Brno, 2003.
[2] KLIMÁNEK, M. et al. Geoinformační systémy: návody ke cvičením v systému ArcGIS (Geoinformation
systems: guides to the exercises in ArcGIS). Mendel University : Brno, 2008.
[3] DRESLEROVÁ, D. Modelování prírodných podmínek mikroregionu na základě archeologických dat
(Modelling the natural conditions of a microregion on the basis of archaeological data). In: Archeologické
rozhledy, No. 48. Archaeological Institute of the Academy of Sciences of the Czech Republic : Prague, 1996.
[4] HOSÁK, L.; ŠRÁMEK, R. Místní jména na Moravě a ve Slezsku (Oikonyms in Moravia and Silesia).
Academia : Prague, 1970.
[5] EICHLER, K. Paměti panství veverského (Memoirs of the Veveří estate). Published at own expense, 1891.
[6] BELCREDI, L.; ČIŽMÁŘ, M.; KOŠTUŘÍK, P.; OLIVA, M.; SALAŠ, M. Archeologické lokality a nálezy
okresu Brno-venkov (Archaeological sites and findings of the Brno-Country district). Brno, 1989.
[7] MACHÁČEK, J. Prostorová archeologie a GIS (Spatial archaeology and GIS). In: Počítačová podpora
archeologie (Computer Aided Archaeology). Masarykova univerzita : Brno, 391 p., 2008.
[8] NEUSTUPNÝ, E. Předvídání minulosti (Predicting the past). In: Technický magazín, No. 11, p. 58-60, 1994.
[9] ALLEN, G.; GREEN, S.; ZUBROW, E. (eds.) Interpreting Space: GIS and Archaeology. Taylor and
Francis : London, 1990.
[10] Archeologický potenciál Čech: riziko archeologického výzkumu (The archaeological potential of Bohemia:
the risk of archaeological research). Projekt GAČR, Archeologický ústav AV ČR : Praha; katedra Archeologie
ZČU v Plzni; katedra Geoinformatiky PřF UK v Praze, 2000.
[11] VENCLOVÁ, N.; KUNA, M.; HAIŠMANOVÁ, L.; KŘIVÁNEK, R.; ČIŠECKÝ, Č. Predikce
archeologických areálů (Prediction of archaeological sites). GA ČR č. 404/95/0523, 1995.
ADDRESS
Ing. Stanislava Dermeková
Institute of Geodesy
Faculty of Civil Engineering
Brno University of Technology
Veveří 331/95
602 00 Brno
Czech Republic
[email protected]
Ing. Dalibor Bartoněk
Institute of Geodesy
Faculty of Civil Engineering
Brno University of Technology
Veveří 331/95
602 00 Brno
Czech Republic
[email protected]
POSSIBILITIES OF USING THE OPEN PLATFORM „ANDROID“ IN THE INFORMATION
SYSTEM OF THE EUROPEAN POLYTECHNIC INSTITUTE, LTD.
Juraj Ďuďák
European Polytechnic Institute, Ltd. Kunovice
Abstract: Nowadays, there is a boom in mobile technologies. A mobile device is no longer only a
cellphone; it is a personal communicator, or smartphone: a device the size of a common cellphone
with the functionality of a personal computer. The most important players in the smartphone
domain are Apple with their iOS operating system and Google with their Android operating system.
This paper describes the abilities of these operating systems and proposes a utilization of the
possibilities offered by the open-source Android operating system.
Keywords: Android, information system, open source
INTRODUCTION
Today's web portals and services have to adapt to the possibilities and needs of the clients which use them.
The most straightforward form of using these services is a web application running in a web browser. The
precondition is that the user accesses the service on a personal computer or laptop with a sufficiently large
display (13” or more). The next precondition is support for the JavaScript scripting language or the Flash
format. When attempting to display the content of a web application in the browser of a smartphone, the
result can differ from what we expect. There are several factors behind this: the most important one is the
screen size; the next is the way a smartphone is controlled. The majority of smartphones have a touchscreen
controlled by fingers, and on a reduced display of a web page it is problematic to hit the control elements.
Modern smartphones such as the iPhone or devices running the Android operating system offer much more:
they can run mini-applications, or widgets, which display user-defined information obtained from some
information system. Among classic web applications we can compare these widgets with the personal Google
web page 1, which offers various gadgets, for example a calendar, foreign exchange reference rates, news
channels, a notepad and others. Such mini-applications are also known in the Windows Vista and Windows 7
operating systems as sidebar gadgets, and in the Gnome graphical desktop environment (Linux operating
system) as gDesklets 2 or Screenlets 3. Information presented this way is always visible and up to date: there
is no repeated process of logging in to the information system and looking for the desired information, because
the mini-application handles the login process and displays only the important information.
Figure 1 shows an example of a widget on a smartphone with the Android operating system. This widget
displays the events of a scheduling calendar. The content of such a widget can be stored locally in the
smartphone or on some web server. This calendar widget is an interesting example because it uses Google
Calendar as its data source. Google Calendar is also accessible through a direct web interface, and by using
the Google Calendar API 4 it is possible to use the scheduling calendar data in any application.
1 http://www.google.com/ig
2 http://gdesklets.de/
3 http://www.screenlets.org
4 http://code.google.com/apis/calendar/
Figure 1 Calendar widget 5
The Google Calendar API is available for various programming languages: .NET, Java, PHP, Python,
JavaScript and C++. Developers can use the programming language they normally use for deploying their
applications. An advantage of using the Google API is its direct support for the Android operating system.
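A widget consuming such a calendar service ultimately just parses a feed and extracts event fields. As a language-neutral sketch, the snippet below parses a minimal, hypothetical Atom fragment of the kind the calendar data feeds of that era returned; the feed content and function name are invented for illustration, and real feeds carry many more fields:

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal Atom fragment standing in for a calendar feed
FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>ICSC 2011 session 1</title></entry>
  <entry><title>Exam: SEJ</title></entry>
</feed>"""

def event_titles(atom_xml):
    """Extract the entry titles from an Atom calendar feed."""
    ns = {"a": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(atom_xml)
    return [e.findtext("a:title", namespaces=ns)
            for e in root.findall("a:entry", ns)]
```

On Android the same extraction would typically be done with the platform's XML parser or the Google client libraries rather than by hand.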
THE ANDROID OPERATING SYSTEM
Android is a software stack for mobile devices that includes an operating system, middleware and key
applications. The Android SDK provides the tools and APIs necessary to begin developing applications on the
Android platform using the Java programming language.
Features of OS Android:
- Application framework enabling reuse and replacement of components.
- Dalvik virtual machine optimized for mobile devices.
- Integrated browser based on the open source WebKit engine.
- Optimized graphics powered by a custom 2D graphics library; 3D graphics based on the OpenGL ES 1.0 specification (hardware acceleration optional).
- SQLite for structured data storage.
- Media support for common audio, video, and still image formats (MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, GIF).
- GSM telephony (hardware dependent).
- Bluetooth, EDGE, 3G, and WiFi (hardware dependent).
- Camera, GPS, compass, and accelerometer (hardware dependent).
- Rich development environment including a device emulator, tools for debugging, memory and performance profiling, and a plugin for the Eclipse IDE.
The Android platform consists of 4 layers:
1. Linux kernel – includes the low-level drivers for the display, camera, flash memory, keypad, WiFi, audio and power management.
2. Libraries – consist of built-in libraries: surface manager, OpenGL ES, SGL, media framework, SSL, SQLite, WebKit and libc.
3. Application framework – offers various managers for running applications: Window manager, Content providers, Package manager, Resource manager, Location manager and Notification manager.
4. Applications – OS Android includes several base applications: Home, Contacts, Phone, Browser, etc.
For running applications the Android platform includes its own virtual machine (VM): the Dalvik VM, which is
built on the Java platform.
The Android platform exists in various versions. The versions are labelled by an integer describing the API
version. Table 1 shows the most used versions of the Android operating system.
5 http://androidforums.com/android-applications/51919-pure-calendar-widget-official-topic.html
Platform      API Level
Android 2.3   9
Android 2.2   8
Android 2.1   7
Android 1.6   4
Android 1.5   3
Table 1 Android platforms overview
API Level is an integer value that uniquely identifies the framework API revision offered by a version of the
Android platform.
The Android platform provides a framework API that applications can use to interact with the underlying
Android system. The framework API consists of:
- A core set of packages and classes
- A set of XML elements and attributes for declaring a manifest file
- A set of XML elements and attributes for declaring and accessing resources
- A set of Intents
- A set of permissions that applications can request, as well as permission enforcements included in the system
The Android API is available for free from the website developer.android.com. Versions exist for Windows,
Linux and Mac OS.
APPLICATION DEPLOYMENT FOR ANDROID PLATFORM
The Android platform is built on the Java platform; therefore new applications are developed exclusively in the
Java programming language. There is one big difference: applications are not compiled for and run on the
standard Java Virtual Machine (JVM), but on the Dalvik Virtual Machine (DVM). The official development
tool for deploying Android applications is the Eclipse IDE. A plugin also exists for the NetBeans IDE, but the
abilities of the Eclipse plugin are on a higher level than the NetBeans solution. The next part describes the
procedure for creating an application for the Android operating system.
Before installing the Android SDK, you have to install the Java Development Kit (JDK). The next step is to
install the Android SDK, available on the developer website: http://developer.android.com/sdk/index.html. In
the Eclipse IDE it is necessary to install the ADT (Android Development Tools) plugin. The last step in setting
up your SDK is using the Android SDK and AVD Manager (a tool included in the SDK starter package) to
download essential SDK components into your development environment. The SDK uses a modular structure
that separates the major parts of the SDK (Android platform versions, add-ons, tools, samples, and
documentation) into a set of separately installable components. The SDK starter package, which you've already
downloaded, includes only a single component: the latest version of the SDK Tools. To develop an Android
application, you also need to download at least one Android platform and the SDK Platform-tools. However,
downloading additional components is highly recommended.
To download components, use the graphical UI of the Android SDK and AVD Manager, shown in Figure 2, to
browse the SDK repository and select new or updated components. The Android SDK and AVD Manager will
install the selected components in your SDK environment.
Figure 2 Android SDK and AVD Manager
To run a created application it is necessary to create an Android virtual device, or to deploy the application to
a real smartphone. We create a virtual device. When creating the virtual device, parameters such as the size of
the SD card, the display size and the API version must be specified. After creating this virtual device, the
deployed applications can be launched; launching a new application is handled by the Eclipse IDE. The created
application runs in the environment of the Android operating system. Since the Android OS can be installed on
varied devices (devices with a touchpad, with a keyboard, with a numeric keyboard, …), all of these abilities
are available in the Android virtual device (Figure 3).
Figure 3 Android virtual device
THE USE OF THE OPEN PLATFORM ANDROID IN THE INFORMATION SYSTEM OF THE EUROPEAN
POLYTECHNIC INSTITUTE
The information system of EPI is at the web address sis.edukomplex.cz. This modern web portal includes the
following parts: timetable (Rozvrh hodin), online record of teaching (Online zápis z výuky), student's book
(Elektronický index), my exams (Moje zkoušky), list of bachelor works (Seznam všech prací), library catalog
(Internetová knihovna), absence excuse (Omluva absence) and study department (Studijní oddelení).
For further modernization and wider usability, it is appropriate to create some parts of the EPI IS as
applications or widgets for mobile devices with the Android operating system.
APPLICATION PROGRAMMING INTERFACE OF EPI INFORMATION SYSTEM
The prerequisite for the proper functionality of the new applications is the creation of an application
programming interface (API) for the EPI information system. This EPI API should offer remote applications the
requested data (data for the timetable, bachelor works, library catalog, …). With this API there is no problem
extending the EPI IS to any platform, such as Android, iPhone, BlackBerry, a desktop application in Windows
or Linux, or an independent web application in which the “EPI IS module” would be embedded.
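A client of such an EPI API would receive structured data and filter it locally. The sketch below assumes a JSON payload; since the API is only proposed in this paper, the field names, the sample data and the function are entirely hypothetical:

```python
import json

# Hypothetical response of a proposed "EPI IS" timetable endpoint;
# field names are illustrative, not a real API contract.
SAMPLE = '''{
  "day": "Monday",
  "lessons": [
    {"time": "08:00", "subject": "SEJ", "room": "P1", "teacher": "DD"},
    {"time": "09:40", "subject": "DAS", "room": "H7", "teacher": "PT"}
  ]
}'''

def lessons_for_room(payload, room):
    """Filter one day's lessons down to a single room - the kind of
    query a timetable widget would issue against the proposed EPI API."""
    data = json.loads(payload)
    return [l["subject"] for l in data["lessons"] if l["room"] == room]
```

Serving plain JSON keeps the API consumable from Android, iPhone, BlackBerry and desktop clients alike.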
ROZVRH DROID
The first proposed application is rozvrhDroid: a view of the actual schedule of school classes with all
information (created as an application), or a simpler view (as a widget). When designing a new application for
mobile devices with a small display, it is necessary to change the conception of displaying the data. In a
schedule application it is not possible to display the schedule of a single day in one table row as in the web
application, because the table row has 16 cells.
The proposed layout of the rozvrhDroid application is:
- The main screen provides the selection of the week and day for schedule display (Figure 4.a):
  - buttons “Monday” to “Sunday” for the day selection,
  - “>>” and “<<” buttons for the week selection.
- The settings of the application are accessible after pressing the “Menu” key (Figure 4.b).
- The settings page (Figure 4.c) is divided into “Login” and “Information selection”.
- Login information: the login, context and password are required (Figure 4.d).
- Information selection:
  - There are 4 possibilities (Figure 4.e): Teacher, Class, Room and Object.
  - The value for the information search. These values are the same as in the web portal sis.edukomplex.cz
    (e.g. DD, PT, SX, ... for Teacher; 1EP, 2EI, … for Class; P1, H7, … for Room; or SEJ, DAS, ELE,
    … for Object).
- The schedule is displayed in a “linear” view (Figure 4.f), and only for one day, because there is no place
  to display all days on one screen.
Figure 4 Application rozvrhDroid
The following source code in XML is the definition of the application layout of the main screen (Figure 4.a).
On the Android platform, the design can be defined declaratively (statically) in XML, or dynamically using the
Java programming language.
SOURCE CODE 1 DECLARATION OF “ROZVRHDROID” SCREEN.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent" android:layout_height="fill_parent"
    android:orientation="vertical" android:layout_gravity="center">
  <TableLayout android:id="@+id/mailTable"
      android:layout_width="fill_parent" android:layout_height="wrap_content">
    <TableRow android:id="@+id/main1r">
      <!-- "<" and ">" must be escaped in XML attribute values -->
      <Button android:id="@+id/mainSpat" android:text="&lt;&lt;" />
      <TextView android:id="@+id/mainInfo" android:text="X. tyzden" />
      <Button android:id="@+id/mainVpred" android:text="&gt;&gt;" />
    </TableRow>
  </TableLayout>
  <LinearLayout android:orientation="vertical"
      android:layout_width="fill_parent" android:layout_height="wrap_content">
    <TextView android:id="@+id/TextView01" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:text="@string/hello" />
    <Button android:id="@+id/b_pon" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:text="@string/mon" />
    <Button android:id="@+id/b_uto" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:text="@string/tue" />
    <Button android:id="@+id/b_str" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:text="@string/wen" />
    <Button android:id="@+id/b_stv" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:text="@string/thu" />
    <Button android:id="@+id/b_pia" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:text="@string/fri" />
    <Button android:id="@+id/b_sob" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:text="@string/sat" />
    <Button android:id="@+id/b_ned" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:text="@string/sun" />
  </LinearLayout>
</LinearLayout>
CONCLUSION
In this paper, the possibilities of the Android platform were described, as well as the possibilities of the EPI
information system. The actual web portal of EPI is modern and well arranged. The next step in development
can be a “Mobile EPI portal” designed for small-screen devices. The proposed rozvrhDroid application was
designed to display the actual school schedule and was created for the Android platform.
LITERATURE:
[1] BRUNETTE, E. Hello, Android. Introducing Google's Mobile Development Platform. Pragmatic
Programmers, 2010. ISBN 1-934356-56-5.
[2] MEIER, R. Professional Android 2 Application Development. Wiley Publishing Inc., 2010.
ISBN 978-0-470-56552-0.
[3] ECKEL, B. Thinking in Java, Third edition. Prentice Hall, 2003. ISBN 0-13-100287-2.
[4] Android SDK. Android developers. [online] Available from: http://developer.android.com/sdk/index.html.
[5] Android reference. Android developers. [online] Available from:
http://developer.android.com/reference/packages.html.
ADDRESS
Ing. Juraj Ďuďák
European Polytechnic Institute, Ltd.
Osvobození 699
686 04 Kunovice
Czech Republic
E-mail: [email protected]
AUTOMATED COLLECTION OF TEMPERATURE DATA
Gabriel Gašpar 1, Juraj Ďuďák 2
1 Materiálovotechnologická fakulta STU BA v Trnave
2 Evropský Polytechnický Institut, s.r.o.
Abstract: This paper deals with the automated collection of temperature data. The requirement for
long-term temperature measurements is urgent in various areas of industry. Using the proposed
system it is possible to determine the thermal transmittance, or insulation properties, of various
building materials. Multiple temperature sensors allow us to create a dynamic temperature map of
the monitored area, i.e. the change of temperature over time. Based on this information,
optimization solutions may subsequently be proposed to remedy unintended conditions (heat
losses, accumulation of heat, ...).
Keywords: temperature sensor, temperature data collection, web application, data visualization.
INTRODUCTION
Today we are experiencing an unprecedented boom in requirements for distributed measurement of physical quantities in many areas, from industry, construction and energy to building management. Requirements for such systems include scalability, extensibility, ease of use for the end user, maximum line length, a certain level of autonomy, and simple processing and visualization of data on the customer’s side. This article presents the measurement of temperatures in the range from -55 °C to +85 °C using the DS18B20 digital thermometer made by Maxim¹. The advantage of this solution over an analog thermometer is that communication with the host computer uses only a two-wire line, with a maximum length of up to 500 meters with parallel connection of sensors. To communicate with the 1-wire network, an interface based on the CY8C29466-PXI microprocessor by Cypress Semiconductors² was created, together with a software layer written in the Python³ programming language for the GNU⁴/Linux Ubuntu 9.10⁵ operating system.
HARDWARE
To communicate with the 1-wire network, a simple interface was created that communicates with a host computer through a serial UART interface. The main part of the device is a CY8C29466-PXI microprocessor by Cypress Semiconductors. The architecture of this microprocessor allows peripheral devices such as UART and OneWireSW⁶ to be mapped directly to the pins of the microprocessor. For the voltage-level conversion from RS-232 to TTL, a MAX232N-based converter was connected (Fig. 1).
Fig. 1 UART interface – 1-wire network (PC connected over RX/TX to the MAX232, which drives the microprocessor; the OneWireSW peripheral drives the DQ line of the 1-wire network)
DS18B20 sensors are supplied in a TO-92 package. For enhanced protection in aggressive environments, the sensors were inserted into additional aluminum enclosures filled with epoxy resin with a thermal conductivity coefficient of 1.2 W/m·K (Fig. 2). The sensors were connected in the parasite-power configuration, in which only a two-wire line is required for the network connection and the energy necessary for the operation of the sensor is taken from the data line. Each sensor carries a unique 64-bit address that is used for addressing individual sensors on the network.

¹ http://www.maxim-ic.com/
² http://www.cypress.com/
³ http://www.python.org/
⁴ http://www.gnu.org/
⁵ http://www.ubuntu.com/
⁶ http://www.maxim-ic.com/products/1-wire/
Fig. 2 TO-92 package and the sensor in additional aluminum enclosure
The microprocessor firmware was written in the C language. The host computer controls the interface by commands transmitted to the serial port:
• DNR – returns the number of sensors connected to the 1-wire network
• ADR – returns the addresses of the sensors connected to the 1-wire network
• TMP <64-bit address> – temperature measurement of the sensor given by the address parameter
A temperature measurement takes 0.75 seconds. This is due to the phasing of the operations that occur when the parasite-power connection of the sensors is used. In the first phase, while the line is held high, the internal capacitor of the sensor is charged; in the second phase the analog temperature value is converted to a digital one; and finally, in the third phase, the digital value is sent to the host computer.
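The command protocol above can be exercised from the host side with a short script. This is a hedged sketch: the frame termination (CR LF), the port name and the reply format are assumptions for illustration, since the firmware details are not fully specified in the text.

```python
def build_frame(command):
    """Frame one interface command (DNR, ADR or TMP <address>) as ASCII plus CR LF."""
    return (command + '\r\n').encode('ascii')

def read_all_sensors(port_name='/dev/ttyUSB0'):
    """Example session against the real interface (needs pyserial and the hardware)."""
    import serial  # imported here so the pure helper above works without pyserial

    ser = serial.Serial(port_name, 19200, timeout=1)

    def query(cmd):
        ser.flushInput()
        ser.write(build_frame(cmd))
        return ser.readline().decode('ascii').strip()

    count = query('DNR')                                    # number of connected sensors
    readings = {a: query('TMP ' + a) for a in query('ADR').split()}
    return count, readings
```

The helper separates framing (testable without hardware) from the serial session, which requires the physical interface.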
DATA COLLECTION
GNU/Linux Ubuntu 9.10 was chosen as the host computer operating system for data collection, together with the Python programming language and the MySQL database server. Among the many libraries available for Python we used pyserial⁷ and MySQLdb⁸.
Pyserial is a library for communication over serial ports. It works under the GNU/Linux operating system as well as under MS Windows. The following code snippet shows the basic operations.
ser = serial.Serial('/dev/ttyUSB0', 19200, timeout=1)  # communication settings
ser.flushInput()                                       # input buffer flush
ser.flushOutput()                                      # output buffer flush
ser.write('TMP 281E72C7010000AC\r\n')                  # command - temperature measurement
str = ser.readline()                                   # read the reply
Since the measured value is returned as a hexadecimal number, it is necessary to convert it according to the selected bit resolution, as given in the sensor’s datasheet.
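As a sketch of that conversion: at the DS18B20’s default 12-bit resolution one least significant bit corresponds to 0.0625 °C, and negative temperatures are encoded in two’s complement (per the DS18B20 datasheet), so the reply can be decoded as follows.

```python
def ds18b20_to_celsius(raw_hex):
    """Convert a raw 16-bit DS18B20 reading (hex string) to degrees Celsius.

    One LSB is 0.0625 degC at the default 12-bit resolution; negative
    temperatures arrive in two's complement form.
    """
    raw = int(raw_hex, 16)
    if raw & 0x8000:        # sign bit set, so the value is negative
        raw -= 0x10000
    return raw * 0.0625

# datasheet examples: 0x0191 -> +25.0625 degC, 0xFF5E -> -10.125 degC
```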
Data storage in the database is provided by the MySQLdb library; a simple example of its use is in the following code snippet.
try:
    # attempt to connect to the given database with the given user name and password
    conn = MySQLdb.connect(host="dbserver.com", user="root", passwd="passwd", db="temperature")
except MySQLdb.Error, e:
    print "Error %d: %s" % (e.args[0], e.args[1])
    sys.exit(1)
cursor = conn.cursor()  # obtain a cursor for executing statements
cursor.execute("INSERT INTO meranie (teplomer, datum, teplota) VALUES (%s,%s,%s)", (tep, cas, teplota))  # write values to the table
⁷ http://pyserial.sourceforge.net/
⁸ http://sourceforge.net/projects/mysql-python/
Running the script at periodic intervals is ensured by the system task scheduler, cron. We chose a sampling period of 5 minutes for 5 sensors. The data is stored in a MySQL database; each row records the name of the thermometer, the date and time of the measurement, and the measured value (Fig. 3).
Fig.3 – Entry of temperature data into database table
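The 5-minute schedule corresponds to a crontab entry of the following shape; the script path and log file are illustrative, not taken from the deployed system.

```shell
# min hour day month weekday  command: log all sensors every 5 minutes
*/5 * * * * /usr/bin/python /home/user/temp_logger.py >> /var/log/temp_logger.log 2>&1
```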
VISUALIZATION OF MEASURED DATA
The presentation layer plays an important role in the whole system. The measured data can be presented in various forms: as simple tabular information, as static graphs, or as dynamic graphs that change over time. A tabular representation of data is usually less transparent and less efficient, but it is the representation from which the most accurate values can be read.
A suitable form of visualization is a graph of temperature as a function of time. There are several ways to create such a graph.
• Graphs generated on the server side.
A library creates the graph, or rather a bitmap containing the graph, on the web server. Such libraries include matplotlib⁹, gnuplot¹⁰ and SciPy¹¹ (for Python), or jpgraph¹² and GraPHPite¹³ (for the PHP language), among others. Fig. 4 shows a graph of the measured temperature data created using the matplotlib library.
Fig. 4 Graph generated by the matplotlib library
Advantages: ease of use, wide options for adjusting the graph.
Disadvantages: a larger volume of data is transferred to the user (an image file with the graph).
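As a minimal server-side sketch in the spirit of Fig. 4, the following renders a temperature-versus-time graph to a PNG with matplotlib; the sensor name and readings are made-up sample data, not measurements from the paper.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, as needed on a web server
import matplotlib.pyplot as plt

minutes = [0, 5, 10, 15, 20]             # sampling times, 5-minute period
temps = [21.4, 21.6, 21.9, 22.1, 22.0]   # degC readings from one sensor

plt.plot(minutes, temps, marker='o', label='sensor 1')
plt.xlabel('time [min]')
plt.ylabel('temperature [degC]')
plt.legend()
plt.savefig('temperature.png')  # the bitmap that is transferred to the user
```

The whole image file is what travels to the browser, which is exactly the transfer-volume disadvantage of this approach.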
• Graphs generated on the client side.
This includes technologies where the rendering of the graph is done by the client that displays it; in our case the client is a web browser. Several technologies currently make this possible. One of them is the SVG format, with which graphics can be transmitted in text form. Another option is to use the new (not yet official) version of HTML, HTML 5, which offers a new “canvas” element with which a dynamic bitmap graphic can be created directly.
When generating graphs on the client side, all the information we want to visualize has to be sent to the client. For high-quality graphs, the size of the data from which the graph is drawn is usually smaller than the file size of a generated image of the same graph. Fig. 5 shows a graph of the measured temperature data created using the RGraph¹⁴ library.

⁹ http://matplotlib.sourceforge.net
¹⁰ http://www.gnuplot.info/
¹¹ http://www.scipy.org
¹² http://jpgraph.net
¹³ http://graphpite.sourceforge.net/
Fig. 5 Graph generated by the RGraph¹⁴ library
Advantages: fast graph generation; only the data to be visualized are transmitted.
Disadvantages: the <canvas> element is not yet implemented in all web browsers.
• Graphs generated by a third party.
There are services which provide graph generation; such technologies include Google Chart Tools¹⁵. The quality of the resulting graph is very high, and the possible settings of graph parameters are excellent. The disadvantage of this solution is that the data to be visualized are sent to a third-party server. Linked to this is another factor, speed: in comparison with the previous types, this category has the largest response time. Fig. 6 shows a graph of the measured temperature data created using Google Chart Tools.
Fig. 6 Graph generated by Google Chart Tools
Advantages: rich setting of graph parameters; added interactivity in graphs (cf. Fig. 6).
Disadvantages: slow response for larger volumes of data.
CONCLUSION
The article has presented an automatic system for collecting temperature data. All parts of the measuring system are provided under the GNU GPL, which allows free use of the proposed system. The system has worked in trial operation for more than a year without intervention or repair. The measured data are available through the web application. This serves as a pilot project for a more comprehensive system for the automatic collection of physical data, such as pressure, tension, humidity, seismological activity, voltage, stress and others.
Using open-source technology has the great advantage that a large developer community is involved in the development of a particular project. The life cycle, or rather the update cycle, of such a project is also shorter compared to commercial products.

¹⁴ http://www.rgraph.net/index.html
¹⁵ http://code.google.com/intl/sk/apis/visualization
REFERENCES:
[1] Analog, linear, and mixed-signal devices from Maxim/Dallas Semiconductor [online]. 2011 [cit. 2011-01-15]. DS18B20 Programmable Resolution 1-Wire Digital Thermometer – Overview. Available from: http://www.maxim-ic.com/datasheet/index.mvp/id/2812.
[2] FABO, P.; GAŠPAR, G.; PAVLÍKOVÁ, S.; ŠIROKÝ, P. Methods of statistical evaluation of data from long-term measurement of soil temperature profile using open-source tools. Trenčianske Teplice : ISC Mechatronika 2010. ISBN 978-1-4244-7962-7.
[3] RGraph. www.rgraph.net [online]. 2010 [cit. 2011-01-15]. RGraph: HTML5 canvas graph library based on the HTML5 canvas tag. Available from: http://www.rgraph.net.
ADDRESS:
Gabriel Gašpar
Slovenská technická univerzita v Bratislave
Materiálovotechnologická fakulta v Trnave
Paulínska 16
917 24 Trnava
Slovenská republika
email: [email protected]
Juraj Ďuďák
Evropský Polytechnický Institut, s.r.o.
Osvobození 699
686 04 Kunovice
Česká republika
email: [email protected]
INVENTORY AND INVENTORY MANAGEMENT SYSTEMS
Lukáš Richter, Jaroslav Král
University of Žilina
Abstract: Although a zero level of inventory is the ideal, every company needs some inventory for its smooth running. This paper discusses the issues and complexity of inventory management and control. Because different types of inventory exist for different purposes, their specifics must be considered, and they have to influence the choice of the proper inventory management policy. Inventory management systems (MRP, MRP II and ERP) are discussed in the second part of this paper, because these systems are (or were) broadly used in practice to improve the quality of decisions about inventory.
Key words: Inventory, purposes of inventory, inventory management, MRP, MRP II, ERP.
INTRODUCTION
Every manufacturer, service provider or not-for-profit organization needs some inventory to sustain its smooth running. Inventory management attempts to improve efficiency throughout the system. It is not only about a simple reduction of inventories; managers must first understand the requirements of the processes and ensure that the inventory serves the needs of those processes. The importance of the systems approach and of supply chain management has grown in recent years.
Inventory management is a complex issue and cannot be grasped by a single employee; coordination of many different operations across the whole organization is necessary. To improve inventory management it is crucial to collect, process, transfer, analyze and visualize a huge amount of data and information in real time. This is where IT finds its use, and we are convinced that its potential is not yet fully exploited.
WHAT IS INVENTORY AND WHY IS IT IMPORTANT?
Many different definitions of inventory can be found. Inventory can be defined as “any idle resource held for future use”. The meaning of “future use” is very relative and depends on the concrete situation and conditions.
Companies hold inventory for a variety of reasons. They can accumulate and deplete finished-goods inventory to help level the production schedule when demand is not uniform. Inventories of finished goods or finished subassemblies may be held so that the company can respond to customer demand in less than the lead time required to obtain the inputs and produce the products. Finished-goods inventories also protect a company from the error of under-forecasting demand.
Inventories of inputs protect a company against interruptions of supply due to strikes, weather, or other natural disasters. Companies now try to deal with reliable suppliers who are nearby to reduce this risk, instead of just looking for the supplier with the lowest price. Accumulating in-progress inventories between stages of production allows some processes to run at rates that differ from those of the processes that feed them or use their output. Many companies gear their processes to the same pace so that these inventories do not accumulate.
Companies often accumulate inventories as a result of buying large quantities to spread order costs over more units, or of producing large quantities on each production run to spread the cost of setting up the equipment over more units. It is even better, however, to reduce the costs associated with ordering or with setting up the equipment: the company then saves not only these costs but also the costs of holding inventory.
A CLOSER LOOK AT INVENTORY
From the business-process point of view, inventory can be divided into the following categories:
• primary inventory:
  • raw materials (items in an unprocessed state awaiting conversion into a product, and components and sub-assemblies to be incorporated into an end product; all these items were produced outside the company),
  • work-in-progress goods (partially finished goods),
  • finished goods (products manufactured for resale which are ready for dispatch);
• and support inventories:
  • production (detergents),
  • maintenance (lubricant oil, spare parts),
  • office (stationery),
  • welfare (first-aid supplies).
The purpose of inventory differs among all these categories, so it is necessary to understand the specifics and purposes of its existence, because they strongly determine the inventory management policy.
Every company has to deal with some raw material and cannot operate without any. Hopp and Spearman [1] describe three main factors that influence the size of these stocks:
• Batching – it has several reasons: quantity discounts from suppliers, physical and technological limitations of the company, and economies of scale;
• Variability – between supplier and company, variability exists in two main forms: time (lateness of delivery) and quality problems;
• Obsolescence – some raw material is no longer needed by production processes because of changes in design or demand.
Factor | Type of stock | Action
Batching | Cycle stock | Utilization of the economic order quantity model (EOQ)
Variability | Safety stock | Assess the level of safety stock
Obsolescence | Obsolete inventory | Needs to be written off
Table 2: Factors of stock growth and required actions
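The EOQ action named in the table can be sketched as the classical Wilson formula, Q* = sqrt(2DS/H). The numbers in the example are invented for illustration.

```python
import math

def eoq(annual_demand, order_cost, unit_holding_cost):
    """Economic order quantity: the batch size minimizing ordering plus holding cost.

    Q* = sqrt(2 * D * S / H) with D = annual demand, S = cost per order,
    H = holding cost per unit per year.
    """
    return math.sqrt(2 * annual_demand * order_cost / unit_holding_cost)

# e.g. D = 1200 units/year, S = 50 per order, H = 2 per unit per year
# gives an order quantity of roughly 245 units
```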
These kinds of raw material stocks are mutually interrelated. For example, Hopp and Spearman [1] show that safety stock and cycle stock protect against variability: big order batches reduce the frequency “with which inventory levels fall to the point where a stock-out is possible”.
In the case of work-in-progress goods (WIP), zero inventory is again the ideal, but real production systems require some minimal level of inventory to ensure the full throughput of the system. According to Hopp and Spearman [1], the amount of WIP can be attributed to the following states:
• Queuing – WIP waiting for a resource (person, machine, transport device); it is determined by high utilization and by the variability of flow and process;
• Processing – goods currently being processed;
• Waiting for batch – waiting for a batch to be formed (batching is usually determined by the technology of operations or by logistics operations); in other words, this WIP is determined by waiting for a transport operation;
• Moving – goods in transport between two places;
• Waiting to match – waiting for an assembly operation that cannot be carried out because some required component is missing; this is caused by the synchronization of parts arriving at the assembly operation.
The last category of primary inventory is finished goods. Ideally, finished goods are delivered to the customer immediately. This cannot always be achieved, and companies must carry finished goods for the following reasons:
• Customer responsiveness – if delivery lead times are shorter than manufacturing cycle times, a make-to-stock¹ policy must be applied (the competitive advantage of many basic commodity products rests only on delivery, therefore they must be produced to stock);
• Batch production – if production must be realized in pre-specified batches because of technology etc., some finished unsold goods may stay in stock;
• Forecast errors – they can cause the growth of finished goods;
• Production variability – variability in production timing or quantity can result in overproduction, which can cause the growth of inventory;
• Seasonality – if demand is strongly seasonal because of the product’s character, the company may decide to produce inventory to stock to meet peak demand during the season.
¹ Make-to-Order is the philosophically contrary approach to Make-to-Stock. There is also Assemble-to-Order, which combines the advantages of these two approaches (the effectiveness of Make-to-Stock with Make-to-Order procedures) and is based on assembling components from stock.
Hopp and Spearman say that finished goods inventory must be seen holistically, because the mentioned factors interact: “Whenever we build FGI [finished goods inventory] to provide short lead times or to cover seasonal demand we increase exposure of the system to forecasting errors.” [1] These ideas raise many inspiring questions which can help to solve many issues in practice; because of the range and purpose of this paper they cannot be discussed more closely.
A specific group of inventory is support inventory. This inventory is not a direct input to the product, but it is used to support production processes. Although this inventory does not tie up very high costs, the absence of some critical part may stop the production function of the company, so attention must also be paid to managing support inventory. It must be considered that this category of inventory management is usually based on the experience and knowledge of the concrete process owners, because it is a very broad, complex and specific field closely related to the concrete processes and technologies used.
Generally speaking, the main reasons for stocking support inventory are:
• Service – any waiting for required parts by maintenance and repair processes will have an impact on the time to complete repair tasks and can cause an increase of costs;
• Purchasing/production lead times – if the lead times of spare parts are long, the company must carry spare-parts inventory;
• Batch replenishment – if economies of scale are a significant factor, the company replenishes in batches, which results in increased inventory levels.
ISSUES OF INVENTORY MANAGEMENT
Most people think of inventory as a final product waiting to be sold to a retail customer. This is certainly one of its most important uses, but, especially in a manufacturing enterprise, inventory can take forms other than finished goods. Inventory management is therefore a very complex task, or more exactly a group of tasks. The purpose of inventory management is to determine the amount of inventory to keep in stock; its objective is to keep enough inventory to meet customer demand while remaining cost-effective. Companies establish systems for managing inventory.
At each point in the inventory system, the company needs to manage the day-to-day tasks of running the system. Orders will be received from internal or external customers; these will be dispatched, and demand will gradually deplete the inventory. Orders will need to be placed for replenishment of the stock; deliveries will arrive and require storing. In managing the system, a company is involved in three major types of decision:
• How much to order – every time a replenishment order is placed, how big should it be (sometimes called the volume decision)?
• When to order – at what point in time, or at what level of stock, should the replenishment order be placed (sometimes called the timing decision)?
• How to control the system – what procedures and routines should be installed to help make these decisions? Should different priorities be allocated to different stock items? How should stock information be stored?
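The first two decisions are often answered together by a reorder-point rule: order a fixed quantity whenever the stock position falls to the expected lead-time demand plus safety stock. A minimal sketch with invented numbers, not a policy taken from the paper:

```python
def reorder_point(daily_demand, lead_time_days, safety_stock=0.0):
    """'When to order': the stock level at which a replenishment order is placed."""
    return daily_demand * lead_time_days + safety_stock

# e.g. 40 units/day, 5-day lead time, 60 units safety stock -> reorder at 260
```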
This shows the complexity of the inventory management issue, which can be demonstrated by the following facts. A company must usually deal with hundreds of thousands of stock keeping units (SKUs), located in tens or hundreds of places. Inventories are purchased from hundreds of suppliers. Transactions (operations) closely related to inventories take place continuously; they are distributed in space and time, and they are the source of the most important data and information for inventory management (both decision-making and problem-solving). Moreover, because of this complexity, inventory management cannot be grasped by a single person. It is therefore necessary to coordinate the decisions of many employees (from different departments across the whole organization), to take other stakeholders of the supply chain into consideration, and to fully utilize the available IT technologies.
INVENTORY MANAGEMENT SYSTEMS – FROM MRP TO THE PRESENT
Every organization has dealt and deals with complexity, inventory management and planning issues. Those issues usually consist of many routine accounting functions with huge amounts of data, so it is natural to try to apply computers to these specific functions. The first successful company in this area was IBM, with Joseph Orlicky’s concept and software called material requirements planning (MRP). The next important step in sophistication was the American Production and Inventory Control Society’s “MRP Crusade”. This successful product helped to create the production control paradigm [1]. MRP was a computerized inventory control system that would calculate the demand for component items (and sub-assemblies), keep track of when they are needed, and generate work orders and purchase orders that take into account the lead time required to make the items in-house or buy them from a supplier. Basically an information system, MRP was quite revolutionary in its early days, because it brought computers and systematic planning to the manufacturing function.
The main objective of any inventory system is to ensure that material is available when needed, which can easily lead to a tremendous investment of funds in unnecessary inventory. One objective of MRP is to maintain the lowest possible level of inventory. MRP does this by determining when component items are needed and scheduling them to be ready at that time, no earlier and no later. A key feature of any MRP system is the product structure file, which contains a bill of material (BOM) for every item produced. The bill of material for a product lists the items that go into the product, includes a brief description of each item, and specifies when and in what quantity each item is needed in the assembly process. MRP can be considered a push system, because it tries “to schedule what should be started into production based on demand” [1]. Philosophically different is the pull system, which starts the next production when inventory is consumed.
Fig. 5: MRP and its inputs and outputs
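The gross-to-net explosion through a bill of material described above can be sketched as follows. The two-level BOM, item names, quantities and stock levels are invented for illustration, and lead-time offsetting (which a real MRP run also performs) is omitted.

```python
# parent item -> list of (component, quantity per unit of parent)
bom = {
    'chair': [('leg', 4), ('seat', 1)],
    'seat':  [('board', 1), ('screw', 4)],
}
on_hand = {'chair': 10, 'leg': 50, 'seat': 5, 'board': 0, 'screw': 100}

def net_requirements(item, gross, on_hand, bom, plan=None):
    """Net gross demand against on-hand stock, then explode the remainder
    through the BOM to compute what must be made or bought at each level."""
    if plan is None:
        plan = {}
    net = max(gross - on_hand.get(item, 0), 0)
    plan[item] = plan.get(item, 0) + net
    for component, qty_per in bom.get(item, []):
        net_requirements(component, net * qty_per, on_hand, bom, plan)
    return plan

plan = net_requirements('chair', 100, on_hand, bom)
# demand for 100 chairs with 10 on hand -> build 90 chairs,
# which nets to 310 legs, 85 seats, 85 boards and 240 screws
```

Real MRP additionally time-phases each net requirement by the item’s lead time to produce dated work and purchase orders.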
The relative simplicity of MRP systems brings many issues [1], such as capacity infeasibility, long planned lead times and system nervousness, that undermined the effectiveness of MRP. This was the main impulse to construct more complex manufacturing resource planning systems, called MRP II. MRP II arose from the MRP concept but contained many new functions. Hopp and Spearman [1] enumerate them: demand management, forecasting, capacity planning, master production scheduling, rough-cut capacity planning, capacity requirements planning, dispatching, and input/output control. The main advantage of MRP II is the integration of many of a company’s information systems; without MRP II, separate databases are held by different functions. However, despite its dependence on the information technologies which allow such integration, MRP II still depends on people-based decision-making to close the loop.
Fig. 6: MRP II hierarchy [1]
The next stage of evolution is the enterprise resource planning (ERP) concept. The main idea is to focus on all of a company’s operations, not only manufacturing. Many authors note that ERP can help to manage the increasingly complex processes created by the globalization of markets, the growth of competition, the need for better customization, the growth of complexity in organizations, and the requirements of SCM (supply chain management). The ERP integrated approach contains the following components: (i) integrated functionality, (ii) consistent user interface, (iii) integrated database, (iv) single vendor and contract, (v) unified architecture and tool set, and (vi) unified product support.
Fig. 7: Common structure of ERP
SAP’s² R/3 is usually mentioned in this context. According to [5], the R/3 system contains an integrated suite of programs for finance and accounting, production and materials management, quality management and plant maintenance, sales and distribution, human resources management, and project management. It is necessary to mention that the successful implementation of any ERP system is usually an expensive, complex, and time-consuming process. Standardized modules usually have to be customized because of the specific requirements of the concrete company, and it can happen that a positive contribution for the organization emerges only after a relatively long time. Hopp and Spearman [1] also mention these disadvantages:
• incompatibility with existing systems,
• long and expensive implementation,
• incompatibility with existing management practices,
• loss of flexibility to use technical point systems,
• long product development and implementation cycles,
• long payback period,
• lack of technology innovation.
Generally speaking, the choice of a proper IS and its implementation has a significant impact on company culture and processes, so it is necessary to pay attention to all managerial decisions related to this issue.
CONCLUSION
In today’s competitive business environment, many companies understand the importance of inventory management. They usually turn to ERP systems, because these help them manage the complex and interrelated processes in the supply chain. Despite the integration tendencies, there are many problems and disadvantages in utilizing ERP systems. The main disadvantages (long and expensive implementation, long payback period, problems with incompatibility, et cetera) must be taken into consideration when deciding about implementing a new ERP system, because it strongly influences processes, workflow and also company culture.
REFERENCES
[1] HOPP, W. J.; SPEARMAN, M. L. Factory Physics. 3/E New York : McGraw-Hill, 2008. 720 p. ISBN 978-007-123246-3.
[2] JOHNSON, J. C. et al. Contemporary Logistics. 7/E New Jersey : Prentice-Hall, 1999. 586 p. ISBN 0-13-798548-7.
[3] KRÁL, J. Logistics – Creation of the Excellent Customer Service. Dublin : ESB, 2001. CD-ROM for Long-Life Learning.
[4] KRÁL, J. Podniková logistika – Riadenie dodávateľského reťazca. Žilina : EDIS, 2001. 214 p. ISBN 80-7100-864-8.
[5] A systems view of business. [online] [2011-01-04] Available from: http://media.wiley.com/product_data/excerpt/70/EHEP0008/EHEP000870.pdf

² SAP is an acronym for Systems, Applications, and Products.
ADDRESS:
Lukáš Richter
University of Žilina, Faculty of Management Science and Informatics
Department of Management Theories
Univerzitná 1
010 26 Žilina
Slovenská republika
Tel.: +421 41 513 4454
E-mail: [email protected]
Jaroslav Král
University of Žilina, Faculty of Management Science and Informatics
Department of Management Theories
Univerzitná 1
010 26 Žilina
Slovenská republika
Tel.: +421 41 513 4454
E-mail: [email protected]
ECONOMIC ORDER QUANTITY MODEL AND ITS UTILIZATION
Lukáš Richter, Jaroslav Král
University of Žilina
Abstract: This article introduces the differences between the Eastern and Western approaches to inventory management. Its subject is the classical economic order quantity (EOQ) model, its strengths, weaknesses and limitations. Studying the EOQ model is important because it is still broadly used in contemporary information systems, despite being one of the oldest traditional inventory scheduling models. The main idea of this model is the minimization of total inventory investment costs and ordering costs. We consider a deep understanding of EOQ-type models an ideal introduction to inventory management issues.
Key words: inventory management, holism, reductionism, economic order quantity model, ordering costs, holding costs.
INTRODUCTION
Manufacturing can be characterized as a complex, multidisciplinary, ever-changing activity realized in an open living system: in a company, or in its operations. When we want to deal with this enormous complexity, we need to utilize simplified models or theories. Albert Einstein said in this context: “A theory should be as simple as possible, but no simpler.” And it is true.
The focus of this paper is inventory management issue, and description and ability to utilization of basic
economic quantity model in real conditions of company. Why is it important to engage to this issue? Because
this model or its extensions are usually implemented in ERP systems and they have significant impact to
inventory management politics. The inventory management politic significantly influenced many company’s
processes, performance and financial health of the whole company. It is striking fact that the inventory managers
of a company usually have not idea about the models that are implemented in their ERP system. These ideas
imply many open questions:

Which models are they?

On which presumptions are they based?

Are these presumptions correct, or do they really match with real conditions in praxis?

Which contributions and disadvantages can brings the utilization of these models?
DIFFERENCES BETWEEN WESTERN AND EASTERN APPROACHES TO INVENTORY
CONTROL
Significant for the American approach to operations management is the utilization of many mathematical models. Many ideas are rooted in Taylor's Scientific Management 1, which is characterized by the effort to construct mathematical models to foster decision making at all levels of a company. The idea of "managing by numbers" was deeply rooted in American culture, and even nowadays it is apparent not only in American management theory and practice. Scientific management is based on the rational, reductionist and analytical approach of science. The reductionist approach can be described as a method based on the following steps:
1. decomposition of the system into a set of separate parts, and
2. studying each part separately.
According to Hopp and Spearman [1], this paradigm brings many contributions (for example, the ability to study relatively complex systems), but it also has some disadvantages (a deep gap between academic research and the actual situation in industry; the focus on separate parts can bring a loss of the broader perspective of the whole system).
The completely opposite point of view to this reductionist approach is the Eastern approach. It has more holistic or systemic attributes. This approach much more strongly highlights the importance of connections and relations between the "separated" parts (or subsystems) in the context of achieving the goals of the whole system. This philosophy and way of viewing the world significantly influenced the Just-in-Time philosophy, and it is still strongly noticeable in all implementations of these ideas (for example, in the Toyota Production System).
1 According to [1], Taylor preferred the terms "task management" or "Taylor system".
Many examples can be found that show the differences between these two main approaches to management. They are well recognizable in the following example from inventory control. Hopp and Spearman [1] demonstrate it in the case of the problem of the setup time 2 in manufacturing operations. In the Western management literature, the setup time was for many years regarded as a constraint, and this led to the development of many sophisticated mathematical models to find the "optimal" lot size to minimize the costs of a company. Hopp and Spearman comment on this approach with the following words: "This view made perfect sense from a reductionist perspective, in which the setups were a given for the subsystem under consideration." The Japanese followed another way of dealing with the problem of the setup time. They recognized that the setups are not given and thus can be reduced. The holistic view allowed them to see the problem in a broader context. Shigeo Shingo introduced his SMED (Single Minute Exchange of Die) method in 1985. This set of tools and methods allowed reducing the setup time to a minimum.
This article deals with exact models of inventory control and therefore follows the management science approach. This stance has the following reasons:
1. inventory models are still widely used these days, and
2. many approaches to inventory control (MRP, MRP II, ERP, Just-in-Time, Lean manufacturing) are based on the foundation of these inventory models.
The inventory control model gives us two answers: how much to order and when to order. It assumes that demand for an item is either independent 3 of or dependent 4 on the demand for other items. The following ideas focus on managing inventory where demand is independent.
ECONOMIC ORDER QUANTITY MODEL
The economic order quantity model (EOQ) is the oldest and a very simple concept, and it is broadly used in inventory control systems. Hopp and Spearman [1] highlight the fact that the EOQ model was the work of F. W. Harris (1913), but for many years the discovery of this model was incorrectly attributed.
Because some costs increase as inventory increases while others decrease, the decision as to the best size of an order is seldom obvious. The best lot size will result in adequate inventory to reduce some costs, yet will not be so large that it results in needless expenses for holding inventory. A compromise must be made between the conflicting costs. To develop the EOQ model, it is important to distinguish between holding costs and ordering costs. Holding costs are associated with holding or "carrying" inventory over time. Holding costs therefore include obsolescence and costs related to storage, such as insurance, extra staffing, and interest payments. Many firms fail to include all the inventory holding costs. Consequently, inventory holding costs are often understated.
Table 1 shows the kind of costs that need to be evaluated to determine holding costs.
Category                                                                     | Cost as a percent of inventory value
Housing costs: such as building rent, depreciation, operating cost,          |
taxes, insurance                                                             | 3-10%
Material handling costs: including equipment, lease or depreciation,         |
power, operating cost                                                        | 1-3.5%
Labour cost from extra handling                                              | 3-5%
Investment costs: such as borrowing costs, taxes, and insurance on inventory | 6-25%
Pilferage, scrap, and obsolescence                                           | 2-5%
Overall carrying cost                                                        | 25-35%
Tab. 3: Holding costs of inventory 5
The second category of cost is ordering costs. Ordering costs include the costs of supplies, forms, order processing, clerical support, and so forth. When orders are being manufactured, ordering costs also exist, but then they are part of what is called set-up costs. Set-up cost is the cost of preparing a machine or process for manufacturing an order. It includes the time and labour needed to clean the machine and change tools or holders. We can lower ordering costs by reducing set-up costs and by using such efficient procedures as electronic ordering and payment.
2 Setup time can be defined as "a period required to prepare a device, machine, process, or system for it to be ready to function or accept a job. It is a subset of cycle time" [7].
3 For example, the demand for sport dresses is independent of the demand for bicycles.
4 The demand for bicycle components is dependent on the requirements for bicycles.
5 Note: All numbers are approximate, as they vary substantially depending on the nature of the business, location, and current interest rates. Any inventory holding cost of less than 15% is suspect, but annual inventory holding costs often approach 40% of the value of inventory.
As can be seen in Figure 1, the optimal order quantity occurs at the point where the ordering-cost curve and the carrying-cost curve intersect, i.e. at the point where the total ordering cost is equal to the total holding cost. This fact considerably simplifies the development of the economic order quantity model: our task can be reduced to balancing these two costs.
Fig. 8: Costs in the EOQ model
Harris formulated the following assumptions for the validity of the EOQ model [1]:
- production is instantaneous,
- delivery is immediate (lead-time 6 is known and constant),
- demand is deterministic,
- demand is constant over time and independent,
- a production run incurs a fixed setup cost,
- products can be analyzed individually,
- receipt of inventory is instantaneous and complete 7,
- quantity discounts are not possible,
- stock-outs (shortages) can be completely avoided if orders are placed at the right time.
With these assumptions, the graph of inventory usage over time has a saw-tooth shape, as in Figure 2.
Fig. 9: Inventory versus time in EOQ model
Using the following variables, the ordering and holding costs can be determined:
Q … number of pieces (or other quantity units) per order,
Q*… optimum number of pieces per order (EOQ),
D … annual demand in units for the inventory item,
co … ordering or set-up cost for each order,
ch … holding or carrying cost per unit per year.
6 Lead-time is the time between placement and receipt of the order.
7 In other words, the inventory from an order arrives in one batch at one time.
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
45
For the model, the significant costs are the ordering (or set-up) cost and the holding (or carrying) cost, and the objective of most inventory models is to minimise total cost. Thus, if we minimise the sum of ordering and holding costs, we will also be minimising total cost. The calculation has the following steps:
1. Develop an expression for the ordering or set-up cost.
2. Develop an expression for the holding (or carrying) cost.
3. Set the ordering cost equal to the holding cost.
4. Solve the equation for the optimal order quantity.

1. Annual ordering cost = (number of orders placed per year) x (set-up or order cost per order)
   = (annual demand / number of units in each order) x (set-up or order cost per order)
   = (D / Q) · co

2. Annual holding cost = (average inventory level) x (holding cost per unit per year)
   = (order quantity / 2) x (holding cost per unit per year)
   = (Q / 2) · ch

3. The optimal order quantity is found when the annual ordering cost equals the annual holding cost, namely
   (D / Q) · co = (Q / 2) · ch

4. To solve for Q*, simply cross-multiply terms and isolate Q on the left of the equal sign:
   2 D co = Q² ch
   Q² = 2 D co / ch
   Q* = √(2 D co / ch)

We have now derived the equation for the optimal order quantity Q*.
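The four steps above translate directly into a few lines of code. The numbers used here (D = 1000 units/year, co = 10 per order, ch = 0.5 per unit per year) are purely illustrative and do not come from the article:

```python
import math

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Optimal order quantity Q* = sqrt(2 * D * co / ch)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Illustrative inputs (not from the article).
D, co, ch = 1000.0, 10.0, 0.5
q_star = eoq(D, co, ch)              # 200 units per order
annual_ordering = (D / q_star) * co  # (D/Q*) * co
annual_holding = (q_star / 2) * ch   # (Q*/2) * ch
# At Q* the two cost components are equal, as required by step 3.
```

Note that at the optimum the annual ordering cost and the annual holding cost coincide, which is exactly the balancing condition of step 3.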
THE KEY INSIGHT OF EOQ
The optimal order quantity increases with the square root of the set-up cost per order or of the annual demand rate, and decreases with the square root of the holding (or carrying) cost per unit per year. Hopp and Spearman [1] postulate that a more fundamental insight of the EOQ model is the following fact: "There is a tradeoff between lot size and inventory." What does it imply? Increasing the lot size increases the level of inventory, but it reduces the ordering frequency.
The meaningful utilization of this model depends on proper input data. This raises the main question: how can the setup costs be properly estimated? From the system-approach perspective, it is necessary to say that setups have many effects in a production system:
- they may cause product-quality problems to grow,
- they may influence variability, and
- they strongly influence the utilization of capacity.
In other words, it is very hard to reduce all these factors into one single number.
EOQ EXTENSIONS
Since the time of Harris's invention of the EOQ model, it has been extended into many new formulas and models. Some models include backordering costs, or are customized for multiple items. Another version of the model is the so-called Baumol-Tobin model [2][6], which determines the money demand function. The Baumol-Tobin model is based on the idea that a person's holdings of money balances can be seen in a way parallel to a company's holdings of inventory. Other models consider backorders, major or minor setups, or quantity discounts.
Some models take into consideration the non-instantaneousness of replenishment. One of them is the economic production lot (EPL) model [1]. It is based on the presumption that the production rate is finite, constant and deterministic. Assuming that the production rate P is higher than the demand D (so the production system has enough free capacity), the model determines the lot size that minimizes the sum of setup and holding costs:
Q* = √(2 D co / (ch (1 − D/P)))
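A minimal sketch of the EPL formula, under the stated assumption P > D (the function name and all inputs are ours, not taken from [1]):

```python
import math

def epl(annual_demand: float, setup_cost: float,
        holding_cost: float, production_rate: float) -> float:
    """Economic production lot: EOQ corrected for a finite production rate P."""
    if production_rate <= annual_demand:
        raise ValueError("the production rate P must exceed the demand D")
    correction = 1.0 - annual_demand / production_rate
    return math.sqrt(2 * annual_demand * setup_cost / (holding_cost * correction))

# As P grows large the correction term approaches 1,
# and EPL reduces to the plain EOQ formula.
```

The design choice worth noticing: the only difference from EOQ is the factor (1 − D/P) in the denominator, so with infinite production capacity both formulas coincide.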
CONCLUSION
If we want to deal with the high complexity and dynamic character of inventory, we need some simplification of reality. Technically, there are two fundamentally different approaches – the Western (reductionist) and the Eastern (holistic). This article proceeds from the Western approach, which is based on management science.
For dealing with inventory management issues, this Western approach offers many models, which are usually implemented in management (or enterprise) information systems. The key precondition of meaningful inventory control is to fully understand the strengths, weaknesses and limitations of the particular model utilized.
The most common and most widely utilized model is EOQ. A benefit of the EOQ model is that it is robust. By robust we mean that it gives satisfactory answers even with substantial variation in its parameters. As we have observed, determining accurate ordering costs and holding costs for inventory is often difficult. Consequently, a robust model is advantageous. The total cost of the EOQ changes little in the neighbourhood of the minimum. Variations in set-up costs, holding costs, demand, or even the EOQ itself make relatively modest differences in total cost. The main weakness of the EOQ model is the problem of proper setup cost estimation.
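The robustness claim can be checked numerically. In the following sketch (with purely illustrative cost values, not taken from the article), ordering a lot 50% larger than Q* raises the total annual cost by only about 8%:

```python
def total_cost(q: float, demand: float, order_cost: float, holding_cost: float) -> float:
    """Total annual cost: ordering cost D/Q * co plus holding cost Q/2 * ch."""
    return demand / q * order_cost + q / 2 * holding_cost

# Illustrative values, not from the article.
D, co, ch = 1000.0, 10.0, 0.5
q_star = (2 * D * co / ch) ** 0.5      # the EOQ optimum
relative_increase = total_cost(1.5 * q_star, D, co, ch) / total_cost(q_star, D, co, ch) - 1
# relative_increase works out to exactly 1/12, i.e. roughly 8.3%,
# illustrating how flat the total-cost curve is around the minimum.
```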
REFERENCES
[1] HOPP, W. J.; SPEARMAN, M. L. Factory Physics. 3rd ed. New York : McGraw-Hill, 2008. 720 p. ISBN 978-007-123246-3.
[2] CAPLIN, A.; LEAHY, J. Economic Theory and the World of Practice: A Celebration of the (S,s) Model. Journal of Economic Perspectives. Winter 2010, Vol. 24, No. 1.
[3] KRÁL, J. Logistics - Creation of the Excellent Customer Service. Dublin : ESB, 2001. CD-ROM for Long-Life Learning.
[4] KRÁL, J. Podniková logistika - Riadenie dodávateľského reťazca. Žilina : EDIS, 2001. 214 p. ISBN 80-7100-864-8.
[5] Economic Order Quantity Model (EOQ Model). National Programme on Technology Enhanced Learning. [Online] [2011-01-12]. Available from: http://nptel.iitm.ac.in/courses/Webcourse-contents/IITROORKEE/INDUSTRIAL-ENGINERRING/part3/inventory/fig2.htm.
[6] Economic order quantity. Wikipedia.org. [Online] [2011-01-14]. Available from: http://en.wikipedia.org/wiki/Economic_order_quantity.
[7] Setup time definition. Business Dictionary. [Online] [2011-01-01]. Available from: http://www.businessdictionary.com/definition/setup-time.html.
ADDRESS:
Lukáš Richter
University of Žilina
Faculty of Management Science and Informatics
Department of Management Theories
Univerzitná 1
010 26 Žilina,
Tel.: +421 41 513 4454,
E-mail: [email protected]
Jaroslav Král
University of Žilina
Faculty of Management Science and Informatics
Department of Management Theories
Univerzitná 1
010 26 Žilina,
Tel.: +421 41 513 4454
E-mail: [email protected]
REGRESSION TREES IN SEA-SURFACE TEMPERATURE MEASUREMENTS
Sareewan Dendamrongvit, Miroslav Kubat, Peter Minnett
University of Miami
Abstract: Accurate evaluations of sea surface temperature are vital for timely identification of
climate changes. Satellite-based infrared radiometry offers the advantage of providing relatively
frequent and dense global measurements; however, its accuracy is limited due to various
imperfections such as those caused by the intervening atmosphere. In our research, we explored the
possibility of compensating for these inaccuracies with "atmospheric correction formulas" induced from
historical data. Two major contributions deserve to be mentioned. First, the errors achieved in our
experiments compare favorably with earlier attempts. Second, as expected, we found out that it
makes sense to use somewhat different formulas for diverse geographic regions, which themselves
can be identified by data analysis.
1. INTRODUCTION
For early identification of climate change, we need a way to determine global sea surface temperature (SST).
Dense in situ measurements (e.g., from drifting buoys) being expensive, researchers have investigated the
possibilities offered by satellite-based radiometry. Early experience gives reason for some optimism, yet the
results are still far from satisfactory: the accuracy of the temperatures thus obtained is negatively affected by
such factors as the intervening atmosphere, measurement angle, or the changes in the measurement conditions
throughout the day. On the positive side, the incurred errors appear to be systematic both in size and sign, and
this leads us to assume that they might be reduced by appropriate correction formulas. This, indeed, is what some
researchers have been trying to accomplish over the last decade or so.
As of now, perhaps the most widely used algorithm for daytime SST calculation is NLSST [1]. Let us denote by T_n the brightness temperature measured at the n μm wavelength (e.g., T_12 is obtained at 12 μm), and let θ denote the satellite zenith angle (in radians). Further on, let T_ref denote the reference SST (or "first-guess temperature"). NLSST calculates the SST by the following formula:
SST = a_0 + a_1 T_11 + a_2 (T_11 − T_12) T_ref + a_3 (T_11 − T_12) (sec θ − 1)    (1)
The frequently criticized shortcoming of this equation is its reliance on the rather subjective term T_ref. A formula achieving comparable prediction accuracy without resorting to the first guess would be deemed "cleaner", and thus more scientific.
For night-time estimates, another algorithm, SST4 [2, 3], is used. Relying on two bands in the 4 μm atmospheric window, it takes advantage of the greater temperature sensitivity of the infrared emission at short wavelengths, something that cannot be exploited in daytime measurements because the shorter wavelengths are contaminated by the sunlight reflected from both land and sea surfaces. Using the same notation as before, the temperature estimate is accomplished by the following formula:
SST4 = b_0 + b_1 T_3.9 + b_2 (T_3.9 − T_4.0) + b_3 (sec θ − 1)    (2)
Our experiments indicate that the accuracies achieved by both of these formulas still leave some room for improvement; therefore, we decided to explore the possibility of inducing better correction formulas (i.e., formulas that are more accurate and dispose of the inconvenient "first guess" term) by the use of regression analysis.
To be more specific, we worked with data obtained from two satellites, TERRA and AQUA, over a period of several years. In these resources, each record consists of such variables as satellite-based radiometry at diverse wavelengths, inclination angles, geographic locations, or time of day—these are treated as independent input variables. As for the dependent output variable, each record also contains the result of the corresponding in situ SST measurement. As usual in multivariate regression, the task is to predict the value of the dependent variable based on the values of the independent variables. Apart from classical statistical methods, one can consider here
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
49
such diverse techniques as artificial neural networks, support vector machines, instance-based regression, and
regression trees.
From these, we chose the last-mentioned. The reason is that the errors of satellite-based radiometry are known
to depend on atmospheric features and processes that exhibit diverse behavior in different geographic locations.
This suggests the use of a different correction formula for different regions—assuming, of course, that we know
how to identify such regions. This, as we will show in the next section, can be accomplished by the method of
regression trees without major difficulties.
The results of our experiments are most encouraging. Our technique yielded more accurate SST estimates than
previously used formulas, and we were also able to do away with the awkward first-guess term. Importantly, the
induced correction formulas appear to be sufficiently robust to be employed on future data. Finally, we believe
that the regions we have identified can themselves be subjected to further analysis by climatologists and
meteorologists.
In the following section, we explain briefly the methodology used to establish the individual regions, to apply
regression to them, and to evaluate the results. Section 3 summarizes our data and reports the results of our
experiments. Section 4 offers concluding remarks.
2. PROBLEM STATEMENT AND METHODOLOGY
Due to various reasons, the atmosphere has different parameters at various locations (for instance, the aerosols
spreading from Sahara over the Mediterranean are absent over the Pacific). Nevertheless, we suspect that the
nature of this interference, and its impact on the accuracy of satellite-based radiometry, are relatively stable over
specific geographical regions. Our intention was to identify these regions by the regression-trees technique
invented by [4] and available in the MATLAB Statistics Toolbox.
For the induction of the regression tree's internal nodes, we used only two input variables (latitude and longitude), the output variable being the in situ SST measurements. The regression tree is created by recursively splitting the geographic regions in a manner that seeks to minimize the variance within each individual area. In the next step, the tree is pruned, and, in the final stage, linear regression is applied separately to each resulting region, inducing the error-minimizing function from all the relevant variables (such as the brightness temperatures or the inclination angle), this time ignoring the geographical coordinates.
An example of the regions thus obtained is depicted in Figure 1 (note that most of them are “horizontal”). Each
is associated with a different linear function that calculates SST from a set of input variables.
Once the regression tree has been induced, it is used to estimate SST in the following manner. First, the latitude
and longitude are used to “propagate” the given vector of satellite-obtained data from the regression tree's root
all the way down to the terminal node corresponding to the given measurement's geographic coordinates. Then,
the linear function associated with this node is used to calculate the SST value.
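The two-stage procedure described above can be sketched as follows. This is only an illustration of the idea on synthetic data, using scikit-learn's tree and linear models in place of the MATLAB Statistics Toolbox; every variable name and number here is an assumption of ours, not taken from the paper:

```python
# Sketch of the two-stage scheme: a regression tree on (latitude, longitude)
# defines regions; a separate linear model is then fitted per region on the
# radiometric variables. Data are synthetic and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
lat, lon = rng.uniform(-60, 60, n), rng.uniform(-180, 180, n)
X_sat = rng.normal(size=(n, 3))    # stand-ins for brightness temps, sec(theta)-1, ...
sst = 0.1 * lat + X_sat @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, n)

# Stage 1: partition the globe by fitting a small (effectively pruned) tree
# on the coordinates only.
tree = DecisionTreeRegressor(max_leaf_nodes=8, min_samples_leaf=50)
tree.fit(np.column_stack([lat, lon]), sst)
region = tree.apply(np.column_stack([lat, lon]))   # leaf id per measurement

# Stage 2: one linear correction formula per region, coordinates ignored.
models = {r: LinearRegression().fit(X_sat[region == r], sst[region == r])
          for r in np.unique(region)}

def predict(lat_i, lon_i, x_sat):
    """Propagate the coordinates to a leaf, then apply that leaf's formula."""
    r = tree.apply(np.array([[lat_i, lon_i]]))[0]
    return models[r].predict(x_sat.reshape(1, -1))[0]
```

The `predict` function mirrors the description in the text: coordinates select the terminal node, and the node's linear function produces the SST estimate.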
Figure 1: Boundaries of the regions discovered in the course of the regression tree induction. Note that most of
the regions are “horizontal.”
The accuracy of these predictions is in the literature usually evaluated by the root mean square error (RMSE). To be more specific, let us denote by y_i the real in situ value, whereas ŷ_i is the estimate based on the satellite data. Let n be the total number of measurements. The RMSE is then defined by the following formula:
RMSE = √( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² )    (3)
In our research, we wanted to make sure that the RMSE of the SST estimates is on average lower than that achieved by the formulas from Equations 1 and 2. Moreover, we wanted to establish whether a regression tree induced from historical data can be used on future data.
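For completeness, Equation 3 translates directly into code (a trivial sketch; the temperature values below are made up):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, as defined in Equation 3."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Made-up values: in situ SSTs vs. satellite-based estimates.
error = rmse([15.0, 16.0, 17.0], [15.1, 15.8, 17.3])
```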
3. EXPERIMENTS
3.1 DATA AND EXPERIMENTAL SETTING
We relied on data obtained from two satellites, TERRA and AQUA, during the years 2004 through 2007. Wishing to decide whether equally reliable results can be achieved during daytime and night-time, we divided the files accordingly. The numbers of examples available for each category (Table 1) indicate that statistical reliability is not likely to be an issue. Still, whenever comparisons were needed, we relied on 5-fold cross-validation.
Let T_n be the brightness temperature measured at the wavelength of n μm. Let θ denote the satellite zenith angle (measured in radians), and let φ denote the angle of incidence (again, measured in radians). During the daytime, the underlying physics is reflected by Equation 4; for the night-time, we prefer Equation 5 (because of the effects of sun glitter). The reader can see that these equations are somewhat similar to NLSST (Equation 1). Note, further, that the "unpleasant" parameter T_ref is no longer needed in these new equations.
             Terra                           Aqua
Data         2004    2005    2006    2007    2004    2005    2006    2007
Daytime      24600   39650   32331   37966   26166   40828   34612   47161
Night-time   16387   28150   22833   26077   14991   24800   22061   28896
Table 1: Number of examples in each category.
(4)
(5)
The values of all variables appearing in these equations are available in the corresponding databases. Regression is used to establish the coefficients that maximize the accuracy of the SST estimates.
3.2 RESULTS
Are the SST estimates sufficiently accurate?
Using 5-fold cross-validation, we obtained the estimate of the root mean square error (RMSE, see Equation 3) of the SST predictions made by the regression trees. For daytime values, the errors are summarized in Table 2; for night-time values, in Table 3. To provide some perspective, the tables also give the errors of the equations that have been deemed best so far (NLSST for day and SST4 at night).
The reader can see that our new correction algorithms result in a much lower RMSE than the previously used formulas, and that the improvement is equally convincing in the daytime and night-time estimates. The night-time 4 μm SST retrievals exhibit smaller errors than the 11-12 μm SST. This reflects the inherent superiority of the shorter-wavelength atmospheric window. The corrections in the AQUA retrievals are better than those in the TERRA retrievals, which can perhaps be explained by the improvements in the instrument construction and pre-launch calibration of the more recent AQUA satellite, incorporating some of the lessons learned in the course of early experience with the TERRA satellite. The circumstance that the uncertainty characteristics in later years are better than those in earlier years may be explained by the later years having more data, as well as by a consequence of more, smaller working regions.
Can the induced regression trees be used on future data?
We needed some evidence that a regression tree induced from historical data can be applied to future data. Put
another way, we wanted to know whether a regression tree induced in one year is capable of high-precision
estimates in another year.
This question motivated another experiment whose results are summarized in Table 4 for the daytime data, and
Table 5 for the night-time data, always separately for the TERRA and AQUA satellites. Note that the row always
represents the year on whose data the regression tree was induced, and the column represents the year to which
the tree was applied. For instance, the reader can see that a regression tree induced on the TERRA data in 2004
achieved RMSE = 0.578 when applied to the TERRA data in 2006.
The reader can see that the results are fairly stable across the three years' period. Comparison with the two tables
from the previous subsection indicates that the use of a given regression tree on future data does not seem to
indicate any major increase in the RMSE error.
             Terra                               Aqua
             2004     2005     2006     2007     2004     2005     2006     2007
NLSST        0.581    0.559    0.584    0.571    0.578    0.561    0.567    0.542
             ±0.016   ±0.008   ±0.012   ±0.012   ±0.011   ±0.004   ±0.007   ±0.008
Reg. tree    0.556    0.513    0.548    0.528    0.549    0.516    0.548    0.500
             ±0.013   ±0.008   ±0.010   ±0.007   ±0.014   ±0.002   ±0.008   ±0.008
Table 2: The prediction error (RMSE) of the induced regression trees as compared to the NLSST algorithm.

             Terra                               Aqua
             2004     2005     2006     2007     2004     2005     2006     2007
SST4         0.489    0.457    0.468    0.457    0.438    0.423    0.410    0.419
             ±0.018   ±0.014   ±0.024   ±0.015   ±0.010   ±0.009   ±0.015   ±0.014
Reg. tree    0.427    0.390    0.406    0.395    0.368    0.346    0.343    0.339
             ±0.016   ±0.011   ±0.028   ±0.012   ±0.011   ±0.007   ±0.018   ±0.018
Table 3: The prediction error (RMSE) of the induced regression trees as compared to the SST4 algorithm.

             Terra (test year)                   Aqua (test year)
Train year   2004     2005     2006     2007     2004     2005     2006     2007
2004         –        0.546    0.578    0.567    –        0.545    0.578    0.545
2005         0.565    –        0.574    0.542    0.564    –        0.572    0.520
2006         0.571    0.541    –        0.551    0.571    0.554    –        0.528
2007         0.566    0.529    0.573    –        0.564    0.530    0.562    –
Table 4: The prediction error (RMSE) of the daytime estimates when training on one year and testing on another year according to the corresponding regions.

             Terra (test year)                   Aqua (test year)
Train year   2004     2005     2006     2007     2004     2005     2006     2007
2004         –        0.407    0.425    0.420    –        0.354    0.351    0.351
2005         0.433    –        0.420    0.407    0.375    –        0.350    0.348
2006         0.435    0.405    –        0.405    0.374    0.353    –        0.347
2007         0.431    0.400    0.414    –        0.374    0.352    0.350    –
Table 5: The prediction error (RMSE) of the night-time estimates when training on one year and testing on another year according to the corresponding regions.
4. CONCLUSION
The paper briefly summarized our recent experience with the use of the technique of linear regression trees for
the compensation of the errors incurred by SST measurements by satellite-based radiometry. The results indicate
that the technique considerably reduced the root mean square error of these temperature estimates, which have
now approached the accuracy of the measurements obtained in situ by the use of drifting buoys. The importance
of these results is hard to overestimate: they strongly indicate that, in the near future, climatologists and
meteorologists will be able to obtain reliable estimates of the sea surface temperatures over the globe, and thus
gain a tool for timely identification of climate change.
In our future research, we want to analyze the shape and location of the individual regions and to provide explanations for their slightly varied behavior. We are also thinking of looking for ways to further reduce the residual errors by fine-tuning the various parameters of the employed regression technique.
ACKNOWLEDGEMENT
The research was partly supported by NASA grants NNX08AE58G and NNX08AE65G.
REFERENCES
[1] WALTON, C. C.; PICHEL, W. G.; SAPPER, J. F.; MAY, D. A. The development and operational application of nonlinear algorithms for the measurement of sea surface temperatures with the NOAA polar-orbiting environmental satellites. J. Geophys. Res., vol. 103, no. C12, pp. 27999-28012, 1998.
[2] ESAIAS, W. E.; ABBOTT, M. R.; BARTON, I. et al. An Overview of MODIS Capabilities for Ocean Science Observations. IEEE Trans. Geosci. Remote Sensing, vol. 36, pp. 1250-1265, 1998.
[3] JUSTICE, C. O.; TOWNSHEND, J. R. G.; VERMOTE, E. F. et al. An Overview of MODIS Capabilities for Ocean Science Observations. Remote Sensing Environ., vol. 83, pp. 3-15, 2002.
[4] BREIMAN, L.; FRIEDMAN, J. H.; OLSHEN, R. A.; STONE, C. J. Classification and Regression Trees. Statistics/Probability Series. Wadsworth Publishing Company, Belmont, California, 1984.
ADDRESS:
Sareewan Dendamrongvit
Department of Electrical & Computer Engineering
University of Miami
Coral Gables, FL 33146
U.S.A.
[email protected]
Miroslav Kubat
Department of Electrical & Computer Engineering
University of Miami
Coral Gables, FL 33146
U.S.A.
[email protected]
Peter Minnett
Department of Meteorology & Physical Oceanography
Rosenstiel School of Marine & Atmospheric Science
University of Miami
Miami, FL 33149
U.S.A.
[email protected]
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
53
USE OF GENETIC ALGORITHMS IN ECONOMIC DECISION-MAKING PROCESSES
Jan Luhan, Veronika Novotná
Brno University of Technology
Abstract: As information systems and technologies develop, so do their applications in economics
and management. An inseparable part of this development is the decision-making process, which uses Soft
Computing methods. These methods include genetic algorithms, which serve as a tool for optimization and
unconventional search. This article deals with the application of genetic algorithms in various
economic processes, ranging from optimization tasks and prediction to market speculation.
Key words: Soft Computing, genetic algorithms, artificial intelligence (AI)
ARTIFICIAL INTELLIGENCE
„Decision-making is an inseparable part of a sequential managerial role. The importance of decision-making is
shown in particular in that the quality and outcome of these processes (notably strategic decision-making
processes) fundamentally influence the efficiency of an organization’s functioning and future development.
Low-quality decision-making may substantially contribute to a company’s failure. The responsibility for
decision-making must be emphasized. The decision-maker may use various tools and aids (programming tools,
consultants etc.) that might help take a decision. But it is always the decision-maker (manager) who is ultimately
responsible for the decision, not, for example, the author of the programming tool supporting decision-making, or
the consultant.“ [19].
Managerial decision-making often takes place under conditions of uncertainty and vagueness, where the existing
problems cannot be fully and precisely defined, the initial information is incomplete or inaccurate, and the terms
are vague. As modern information technologies are developing and booming, new methods and possibilities of
optimizing and management appear in managerial decision-making support. Methods that can be widely used in
practice are called Soft Computing [9].
According to L. Zadeh, Soft Computing is a combination of computational methodologies such as, above all, fuzzy
logic, neural networks, genetic algorithms and probabilistic reasoning. The methodologies comprising Soft Computing
are mostly complementary and synergistic rather than competitive. The leading principle of Soft Computing is to
exploit tolerance towards inaccuracy, uncertainty, partial truth and approximation in order to achieve tractability,
robustness, a lower price of a solution, and better compliance with reality. One of the main goals of Soft
Computing is to establish foundations for designing, creating and applying intelligent systems which use their
components symbiotically rather than separately.
Practice very often relies on findings resulting from an expert’s decision-making. The quality of the expert’s
findings depends on the ability to process available information efficiently, while its uncertainty (vagueness)
is of an entirely different character than stochastic uncertainty. It is the efficient use of terminological,
lexical uncertainty (vagueness), which is highly developed in people, together with simple but highly effective
non-numeric algorithms, that enables a human expert to interconnect profound knowledge with shallow knowledge
and thus achieve higher-quality findings in problem solving, decision-making and management. [13]
While classical mathematical statistics is based on principles of empirical probability which use the knowledge
of probability density distribution of random phenomena (variables), methods for working with vagueness are
based on laws of so called possibility distribution.
It is interesting that most of these technologies originate from biological phenomena or from animal or human
behaviour, and many of them are analogous to systems found in the human or animal environment.
Artificial intelligence, which partly deals with decision-making methods and solving difficult problems,
has provided terms to describe these complex systems, which are characterized by the ability to work with
non-numerical (linguistic) input.
Artificial intelligence has not yet been defined unanimously. M. Minsky (1967), for example, defines artificial
intelligence as follows: „…artificial intelligence is a science of making machines (devices) or systems
that will use methods to deal with a specific problem which, if performed by a human being, would be seen
as a demonstration of human intelligence.“ E. Rich (1991) believes that „…artificial intelligence tries to find out
how to ensure computerized handling of tasks which are still solved better by people.“
GENETIC ALGORITHMS
Genetic algorithms are unconventional search algorithms or optimizing algorithms inspired by processes
observed in natural evolution. In other words, the algorithm works with certain individuals (population of
individuals), whose attributes are represented in a certain structure comparable to the chromosome of the
organism. The aim of the algorithm is to create, in the population of individuals, increasingly better individuals
by evaluating its “quality” which must be represented by a function, usually called fitness function. This quality
makes the algorithm a perfect tool to deal with optimizing problems, i.e. problems where the best of all possible
solutions to the problem is being sought.
In other words – most evolutionary optimizing algorithms are inspired by Darwin’s concept of evolution, i.e.
better adapted individuals stand a better chance to survive and reproduce. This principle is ensured by means of
evaluating (also special-purpose or fitness) function which can be found, among others, in genetic algorithms
[15], [18], [19], or systems based on evolutionary programming [13].
[Figure: flowchart of the genetic algorithm – create the first generation of chromosomes; evaluate each chromosome in the population; if the final condition is not met, select good chromosomes and create a new generation of chromosomes, then repeat the evaluation; if it is met, stop.]
An advantage of the algorithm is that, when dealing with a problem, it works only with strings of the digits 0
and 1 and with the quality of those strings. The quality is revealed when the string is decoded. The algorithm
aims to produce the best possible strings. Only three operators are used in the algorithm: reproduction,
transposition (crossover) and mutation. Reproduction means copying a string from the old generation into the new
generation, transposition exchanges information between strings, and mutation causes random changes in a string.
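As an illustration, the loop described above (create a generation, evaluate, select good chromosomes, recombine and mutate) can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the fitness function (counting 1-bits, the classic "OneMax" toy problem), the string length, population size and all other parameters are assumptions chosen for brevity.

```python
import random

# Illustrative sketch of a binary-string genetic algorithm (assumed
# parameters; the fitness function is the toy "OneMax" problem).
STRING_LEN, POP_SIZE, GENERATIONS = 20, 30, 60
MUTATION_RATE = 0.01

def fitness(chromosome):
    # Quality of a string, revealed when the string is decoded.
    return sum(chromosome)

def select(population):
    # Tournament selection: better-adapted individuals reproduce more often.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Transposition/crossover: exchange information between two strings.
    point = random.randrange(1, STRING_LEN)
    return p1[:point] + p2[point:]

def mutate(chromosome):
    # Mutation: random changes in a string.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in chromosome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(STRING_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):  # loop until the final condition is met
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))  # approaches STRING_LEN as the population improves
```

Real applications replace the toy fitness function with a decoded quality measure of the economic problem at hand.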
Plenty of various implementations of the basic design of genetic algorithms are employed in practice.
USE OF GENETIC ALGORITHMS
Genetic algorithms can be successfully used in a range of areas, such as robotics, engineering optimization,
automated design, optimization of electric circuits, and also in economics. In economics, genetic algorithms can
be used, for example, to optimize asset allocation or to assess the outcome of neural networks. We shall consider
some applications more closely.
One potential application in economics is verifying hypotheses by means of empirical data. Suppose we have
a hypothesis which represents a functional relationship. Using symbolic regression, we try to identify a function
which corresponds to the empirical training data, and the hypothesis is verified or disproved by an experiment
repeated a sufficient number of times. The article [14] describes an experiment in which a macroeconomic equation
of the quantity theory of money is verified. The source of the data was macroeconomic data of the
US economy over a period of 30 years. There is also a range of experiments with artificial economies applied
in multi-agent systems. Simulation of corporate behaviour on the market is carried out in [10]. Chen in [7] and
[8] demonstrates an application of genetic programming with a view to verifying the efficient market
hypothesis.
Genetic algorithms have also been used to simulate economic processes, e.g. market speculations [9].
Genetic algorithms contribute to minimizing risks when trading in shares in [6]. To that purpose a methodology
has been defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted
to learning series of stock market values. The GAP technique consists in a fusion of GP and GA. Applying
the proposed methodology, rules have been obtained for an eight-year period of the S&P 500 index. The
achieved adjustment of the return-risk relation has generated rules with returns in the testing period far
superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold.
Dostál in [20] shows an example of optimizing a portfolio of American shares as well as a portfolio of Czech
shares. In [4], classification and prediction of success rates on the Chinese market are dealt with. Case-based
reasoning (CBR) is a machine learning technique with high performance in classification problems, and it is also
a chief method in predicting business failure. The authors provide evidence on the performance of CBR in
Chinese business failure prediction from the views of sensitivity, specificity, and positive and negative
predictive values, using data collected from the Shanghai Stock Exchange and the Shenzhen Stock Exchange in China.
The paper [1] describes a new direct search method for solving non-standard constrained optimization problems
for which standard methodologies do not work properly. A method for finding the optimal amount to be
ordered is described in [2]. In this research, an economic order quantity (EOQ) model is developed for
a two-level supply chain system consisting of several products, one supplier and one retailer, in which shortages
are backordered, the supplier's warehouse has limited capacity, and there is an upper bound on the number of
orders. At the end, a numerical example demonstrates the applicability of the proposed methodology
and compares its performance with that of a penalty-policy approach used to evaluate the fitness function of the
genetic algorithm.
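For orientation, the classic single-product, unconstrained EOQ formula that such models generalize is Q* = sqrt(2DS/H), where D is annual demand, S the fixed cost per order and H the holding cost per unit per year. A minimal sketch with hypothetical numbers (the cited multi-product, multi-constraint model is far richer and needs a genetic algorithm precisely because this closed form no longer applies):

```python
from math import sqrt

# Classic EOQ formula with hypothetical inputs (not from the cited paper).
annual_demand = 1200.0  # D: units per year
order_cost = 50.0       # S: fixed cost per order
holding_cost = 6.0      # H: holding cost per unit per year

eoq = sqrt(2 * annual_demand * order_cost / holding_cost)
print(round(eoq, 1))  # optimal order quantity in units
```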
Production planning in civil engineering is dealt with in [21]. This research studied the relationships between
a pre-caster's production operations and site construction activities in order to develop a pre-cast production
planning model. The algorithms employed for solving the model were genetic algorithms and the
branch-and-bound method. The results show that the model and the solving algorithm provide better-quality
solutions than the pre-caster's own production plan and improve the actual production results in the illustrative case.
Genetic algorithms can be used for time series forecasting. Some papers also look into applications aiming to
forecast macroeconomic quantities in time: e.g. [12] shows prediction of the GNP deflator, and Kaboudan [11]
employed genetic algorithms to forecast demand for natural gas in the USA.
Another area where genetic algorithms can be applied is data mining. The discipline started to develop in the
early 1990s, aiming to improve risk management in banks specializing in credit cards. In the literature, data
mining techniques have been applied to stock market prediction. A combination of methods for stock market
prediction is described in [5]. Feature selection, a pre-processing step of data mining, aims at filtering
unrepresentative variables out of a given dataset for effective prediction. As different feature selection
methods lead to different selected features and thus affect prediction performance, the purpose of that paper is
to combine multiple feature selection methods to identify more representative variables for better prediction.
In particular, three well-known feature selection methods are used: Principal Component Analysis (PCA), Genetic
Algorithms (GA) and decision trees (CART).
CONCLUSION
Artificial intelligence is a vast area with increasing potential. Its methods should not be used if 100%
reliability is required or if a deterministic method for solving the task is available. A great advantage of these
techniques, however, is that the outcome does not depend on the initial conditions. Their applicability and the
conditions for their use vary according to the tools employed, the process, the complexity of the system in
question and the problem defined. Outcomes of advanced methods lead to a higher-quality decision-making process,
in particular in multi-criteria tasks and tasks which are difficult to algorithmize. A forward-looking company
striving to succeed on the market has to invest adequate funds in the development of these methods.
BIBLIOGRAPHY
[1] SANCHEZ, A. J.; MARTINEZ, D. Optimization in Non-Standard Problems. An Application to the Provision of Public Inputs. Computational Economics, Volume 37, Issue 1, January 2011, p. 13-38. Kluwer Academic Publishers, Hingham, MA, USA. ISSN 0927-7099.
[2] PASANDIDEH, S. H. R.; NIAKI, S. T. A.; ROOZBEH NIA, A. A genetic algorithm for vendor managed inventory control system of multi-product multi-constraint economic order quantity model. Expert Syst. Appl. 38, 3. 2011, p. 2708-2716. DOI=10.1016/j.eswa.2010.08.060. Available from: http://dx.doi.org/10.1016/j.eswa.2010.08.060.
[3] KAMPOLIS, I. C.; GIANNAKOGLOU, K. C. Synergetic use of different evaluation, parameterization and search tools within a multilevel optimization platform. Appl. Soft Comput. 11, 1. January 2011, p. 645-651. DOI=10.1016/j.asoc.2009.12.024. Available from: http://dx.doi.org/10.1016/j.asoc.2009.12.024.
[4] LI, H.; SUN, J. On performance of case-based reasoning in Chinese business failure prediction from sensitivity, specificity, positive and negative values. Appl. Soft Comput. 11, 1. January 2011, p. 460-467. DOI=10.1016/j.asoc.2009.12.005. Available from: http://dx.doi.org/10.1016/j.asoc.2009.12.005.
[5] TSAI, CH. F.; HSIAO, Y. CH. Combining multiple feature selection methods for stock prediction: Union, intersection, and multi-intersection approaches. Decis. Support Syst. 50, 1. December 2010, p. 258-269. DOI=10.1016/j.dss.2010.08.028. Available from: http://dx.doi.org/10.1016/j.dss.2010.08.028.
[6] GARCIA, M. E. F.; CAL MARIN, E. A.; GARCIA, R. Q. Improving return using risk-return adjustment and incremental training in technical trading rules with GAPs. Applied Intelligence 33, 2. October 2010, p. 93-106. DOI=10.1007/s10489-008-0151-x. Available from: http://dx.doi.org/10.1007/s10489-008-0151-x.
[7] CHEN, S. H.; YEH, C. H. Genetic programming and the efficient market hypothesis. In: KOZA, J.; GOLDBERG, D.; FOGEL, D.; RIOLO, R. Genetic programming 1996: proceedings of the first annual conference. MIT Press, Cambridge, 1996. p. 45-53.
[8] CHEN, S. H.; YEH, C. H. Towards a computable approach to the efficient market hypothesis: An application of genetic programming. Journal of Economic Dynamics and Control 21, 1997. p. 1043-1064.
[9] MAŘÍK, V.; ŠTĚPÁNKOVÁ, O. Umělá inteligence II. Praha: ACADEMIA, 1997. ISBN 80-200-0504-8.
[10] CHEN, S. H.; YEH, C. H. Simulating economic transition processes by genetic programming. Annals of Operations Research, 2000. p. 265-286.
[11] KABOUDAN, M.; LIU, Q. Forecasting quarterly US demand for natural gas. ITEM (Inf. Technol. Econ. Manag.), Vol. 2, 2004.
[12] KOZA, J. R. A genetic approach to econometric modeling. Sixth World Congress of the Econometric Society, 1990.
[13] POKORNÝ, M. Umělá inteligence v modelování a řízení. Praha: Nakladatelství BEN, 1996.
[14] KOZA, J. R.; ANDRE, D. Evolution of iteration in genetic programming. In: FOGEL, L. J.; ANGELINE, P. J.; BAECK, T. (eds.) Proceedings of the Fifth Annual Conference on Evolutionary Programming. Cambridge, MA: The MIT Press, 1996. p. 469-478.
[15] MAŘÍK, V. a kol. Umělá inteligence III. Praha: Academia, 2002.
[16] POSPÍCHAL, J.; KVASNIČKA, V. Evoluční algoritmy. Bratislava: STU, 2000.
[17] VOLNÁ, E. Neuronové sítě a genetické algoritmy. Ostrava: Ostravská univerzita, 1998.
[18] VONDRAČEK, I. Umělá inteligence a neuronové sítě. Ostrava: Vysoká škola báňská - Technická univerzita, 1994.
[19] DOSTÁL, P. Pokročilé metody analýz a modelování v podnikatelství a veřejné správě. Brno: CERM Akademické nakladatelství, 2008. 340 p. ISBN 978-80-7204-605-8.
[20] LI, S. H. A.; TSERNG, H. P.; YIN, S. Y. L.; HSU, CH. W. A production modeling with genetic algorithms for a stationary pre-cast supply chain. Expert Syst. Appl. 37, 12. December 2010, p. 8406-8416. DOI=10.1016/j.eswa.2010.05.040. Available from: http://dx.doi.org/10.1016/j.eswa.2010.05.040.
ADDRESS
Jan Luhan
Brno University of Technology
Faculty of Business and Management
Kolejni 2906/4
612 00 Brno
Tel.: +420 541143717
E-mail: [email protected]
Veronika Novotná
Brno University of Technology
Faculty of Business and Management
Kolejni 2906/4
612 00 Brno
Tel.: +420 541143718
E-mail: [email protected]
BUSINESS WAR GAME AS A KIND OF BUSINESS SIMULATION
Karolína Mužíková
Žilinská univerzita v Žiline
Abstract: This article is devoted to the business war game, a kind of business simulation which improves
the whole process of strategic planning in an organization. The concept of the business war game is based
on effective teamwork and on free information sharing within an organization. It requires neither expensive
computer software nor the participation of external consultants. It is suitable and very useful for every
organization at all of its organizational levels. Some reasons and prerequisites for playing an effective
business war game, as well as the whole game procedure, are described in this article.
Keywords: business simulation, business war game, game theory, strategic planning, competition,
competitive intelligence.
INTRODUCTION
A business war game is a uniquely structured battle of minds between teams representing various rivals in an
industry [7]. Business war games rank among the managerial methods which support decision making.
The application of the business war game framework is especially suitable during the demanding process of
strategy or plan formulation, because the success of any strategy or plan depends highly on the reaction of the
company’s external environment. It is exactly this reaction that is analysed during the business war game. Playing
business war games does not substitute for the process of strategic planning; rather, it is a technique which
should be used along with traditional strategic planning processes [2, p. 28].
A business war game is based on a specific teamwork role-playing activity which fulfils several prerequisites and
proceeds in a recommended succession of steps. These necessary conditions of business war games are dealt with
further in this article.
The best-known author and reputable consultant in the area of business war games is Benjamin Gilad.
His main professional field of interest is the theory of competitive intelligence. He has published multiple books
on the subject, such as The Business Intelligence System in 1988, Business Blind Spots in 1994, Early Warning in
2003 and Business War Games – How large, small, and new companies can vastly improve their strategies and
outmaneuver the competition in 2009 [2; 8].
1. BUSINESS SIMULATIONS
Business war games belong to the large group of business simulations. Business simulations are considered forms
of experiential learning in which the participants learn from experience gained during the game.
The participants in the game can develop strategies, make decisions, and gain decision-making experience in
a risk-free virtual environment. Due to this fictitious environment, the players are able to learn from their
mistakes without causing any real harm [5, p. 2]. There are two leading methodologies used for business
simulations: computer-based and human-based simulations [3].
Computer-based simulations, which are generally founded on mathematical algorithms, are very sophisticated.
Mathematical models form the essence of these games, which may cause many difficulties during the data-gathering
phase. The model is usually fixed and inadaptable, which may cause a lack of realism in the overall simulation:
its procedure and results may not correspond to complex reality. There are two varieties of computer-based
simulations: game-theoretic and mathematical-programming simulations.
Game-theoretic simulations are pre-packaged games which offer quick back-and-forth simulations of hypothetical
moves and a mathematical algorithm which searches for an equilibrium solution based on game theory principles
or a set of equations [3].
The game-theoretic approach is subject to considerable criticism, mainly because of its demanding
assumptions. Games based on game theory assume the full rationality of all players participating in the game,
and the assumption of common knowledge of rationality is very optimistic. It is further assumed that players are
able to determine, at least probabilistically, the outcomes of their actions and to rank them in order of
preference [4, p. 175]. In order to find an optimal solution, players have to be able to predict the outcomes of
all players in the game, not just their own. Another major criticism of the game-theoretic framework is that it
sometimes fails to deliver unique solutions, usually because the game has more than one equilibrium [4, p. 177].
In these cases the final solution has to be based on random experiments or on predictions about others’
motivations and moves. The applicability of game theory in strategic planning is therefore considerably limited
[2, p. 21]. Kelly states that game theory clearly fails to describe the reality of decision making in some
circumstances, although, in its defense, it primarily seeks to provide a prescriptive analysis that better equips
players to make good strategic decisions [4, p. 180]. This is in accordance with the popular view that game
theory mainly enables strategic thinking, or facilitates thinking about strategic interactions [6].
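The non-uniqueness problem is easy to see in a toy example: a 2x2 coordination game with two pure-strategy Nash equilibria, enumerated by brute force. The payoff matrix below is hypothetical and chosen only to exhibit multiple equilibria.

```python
# Payoffs of a 2x2 coordination game: entry (r, c) gives the
# (row player, column player) payoffs for strategies r and c.
payoffs = {
    (0, 0): (2, 2), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}

def is_equilibrium(r, c):
    # Nash condition: neither player can gain by deviating unilaterally.
    row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in (0, 1))
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in (0, 1))
    return row_best and col_best

equilibria = [(r, c) for r in (0, 1) for c in (0, 1) if is_equilibrium(r, c)]
print(equilibria)  # two equilibria, so the game has no unique solution
```

With both (0, 0) and (1, 1) in equilibrium, the theory alone cannot say which outcome will occur, which is exactly the criticism raised above.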
Mathematical-programming simulations are large-scale, customized, integrated systems which offer
a mathematical model with random generation of unexpected events as well as econometric modelling of the
whole system [3]. These games require expensive resources during their research and development phase, which
increases their price. Based on a high-level mathematical background, and possibly even employing some
artificial intelligence concepts, these simulations may be quite incomprehensible to end users.
Human-based simulations are intelligence-driven, analytically and behaviorally modeled role-playing exercises
[3]. Games based on role-playing yield better results because human and organizational behaviour goes far
beyond the scope of contemporary mathematical models [2, p. 21].
Business war games are classified in this group of business simulations. The term business war game has already
become quite popular. In spite of its name, a business war game has almost nothing significant in common with
the features of a war or a game.
In light of the war analogy, it can be remotely compared to war games. In general, war games are regarded
as the predecessor from which business simulations evolved. These original war games are, in most
cases, played at a table equipped with small models and maquettes of the actual landscape, armaments and
figures of the warring troops; random events are introduced through the throw of dice [1, p. 15]. Business war
games do not employ any weapons, soldiers, means of destruction or any other approaches whose aim is to harm or
destroy others. The only important similarity is that some strategies are virtually and iteratively played out in
advance in order to recognize the starting position, possible moves, countermoves and the consequences of some
decisions.
From the game analogy point of view, a business war game is an intellectual activity like many other board games.
But playing a business war game does not ultimately produce a winner and a loser, which is typical of many board
games and also of some games based on game theory (zero-sum games). A business war game’s most valuable
contribution is the early detection of weaknesses in the formulated strategy. From a wider point of view, the
winner of a business war game is anybody who plays it at an appropriate time.
2. BUSINESS WAR GAME CONCEPTION
To make the business war game effective and successful, several prerequisites have to be met in a company
which is going to play the game. Gilad suggests that a business war game has to be:
- realistic,
- empowering,
- accessible,
- entertaining,
- inexpensive,
- simple,
- transparent [2, p. 25].
Each of these phenomena represents a key requirement for playing a business war game. Gilad calls them the Seven
attributes of an effective business war game [2, p. 24].
The game has to give a realistic view of the competition and of the whole industry; it should realistically
predict competitors’ reactions to the company’s strategy. The game is considered
empowering when it enables managers to back out of a wrong strategy and to make the changes in it suggested by
the game participants; this ability is influenced by the structure of internal policies and by the corporate
culture. The accessibility of the game means its availability to all managers: it is not an instrument for top
management only, and may be used at all managerial and organizational levels in a company. An effective game also
brings entertainment and fun instead of a stressful atmosphere; Gilad states that many of the game’s participants
maintain an enthusiastic external focus for years after the end of the game [3]. An effective business war game
is inexpensive because it requires neither costly simulation programs nor high-priced cooperation with consulting
firms. A business war game has to be simply arranged, with no elusive playing rules. The last feature,
transparency, means that no incomprehensible computer algorithms are used and that top management permits a free
exchange and use of information during the game.
An effective business war game makes a manager a realist. Effective managers are neither optimists nor
pessimists; they are well prepared to meet the market reality [2, p. 25].
A business war game can help either in the process of formulating a new strategy or during a revision phase
before the kick-off of a selected strategy. Accordingly, business war games can be classified into two basic
groups: landscape and test war games [2, p. 98].
Landscape war games are based on a deep survey of the competitive environment and thus facilitate the
formulation of a new strategy or plan.
Test war games investigate possible reactions to a selected strategy and are thus instruments for testing and
verifying a strategy already planned.
3. BUSINESS WAR GAME PROCEDURE
The whole process of business war game application in a company can be divided into five key parts.
A. Strategic crossroad identification
The timing of the game is very important for its success. It is recommended to organize the game at a critical
point, at a strategic crossroad [2, p. 201]. The chief reasons for playing the game are:
- managers are in a situation where they need to make a decision or formulate a plan,
- there are external subjects whose reactions have a significant impact on the future success of this decision,
and managers do not have firsthand knowledge of those subjects’ intended actions,
- if the decision is wrong, it will cause serious difficulties and costs [2, p. 26].
Besides these, a business war game should be played in organizations undergoing an acquisition and in companies
which are about to enter a new market or launch a new product.
B. Decision to play a business war game and choice of its type
If a company decides to play a business war game, it has to determine the purpose of the game in the next step.
If it wants to create a new strategy, it selects a landscape war game; for testing an existing strategy, a test
war game is more suitable.
It is possible to play both game types in connection with one plan or strategy. At the beginning, a landscape
war game is played in order to formulate a new strategy; as soon as the strategy is elaborated, it is tested for
the market reaction via a test war game.
C. Team creation and role division
Several participants are involved in a business war game. These participants usually come from various
departments of the company which is examining the strategy or plan, but some external experts, advisers or
stakeholder representatives may also be included in the game in order to get a better insight into the
environment under examination. In contrast to games based on game theory, real competitors are not players in
the game; the competition’s point of view is examined through a role-playing exercise. Competitors and other
important external subjects, such as customers, suppliers, regulatory authorities and distributors, are all
treated as characters. First it is necessary to determine how many characters will be played. Each character is
usually represented by one team. Naturally, the company which organizes the game is regarded as the domestic
team. Other teams may represent the most important competitors; if there are many significant competitors, they
can be consolidated according to their features. One team may represent the customers’ point of view, other
subjects or an interest group.
Gilad suggests that the number of players involved in the game should range from 12 to 48 persons, with 3 to 8
players in one team and at most 6 teams in total [2, p. 117-118]. There are several principles for dividing the
participants into teams according to their professional status and their character.
D. Game
The schematic plan for playing a business test war game is shown in Figure 10.
The role of intelligence in the business war game is crucial. Team members have to learn how to role-play
competitors with real market intelligence [3]. This depends on the overall standard of competitive intelligence
in the company. Successful organizations have to collect useful data and information and develop knowledge about
the other subjects in their environment. It is necessary to build a database and keep it up to date. All
information and materials created during the game have to be recorded and integrated into the database. All game
participants also have to be aware of the high confidentiality of the whole game.
First round:
- Introduction
- Domestic team plan presentation
- Analytical scope and methods identification
- Individual teamwork I
- Presentations by Teams 1-3, each playing the role of competitor 1, 2 or 3
- Domestic team presentation about blind spots of the plan

Second round:
- Reallocation of the domestic team's members
- Individual teamwork II
- Presentations by Teams 1-3, each playing the role of the domestic team
- Determination of the domestic team's strategic possibilities
- Successive opposition by the teams

Figure 1: Business test war game schematic procedure
Source: own elaboration based on [2, p. 188]
The keystones of the whole game are the individual teamwork role-playing activity, the subsequent information sharing and the final brainstorming.
At the beginning of the individual team activity each team has to quickly analyze the whole industry and its market disequilibria. Porter's five competitive forces model is suitable for this task [2, p. 52-53].
The second important task of each team is to analyze its competitor's behaviour (each team deals with its assigned competitor; the domestic team deals with the domestic company). Porter's four corners model should be employed in this phase [2, p. 56].
Teams have to put themselves in the competitor's place, see reality from his point of view and understand his character and attitude. The idea of seeing the situation from one's own point of view, based on the question "What would I do if I were my competitor?", is not correct [2, p. 64]. If managers are able to think as the competitor whose role they are playing, they will be able to make accurate predictions about the actual moves of their competitor [2, p. 23].
Players should recognize their own and their competitors' so-called hot buttons and blind spots. Hot buttons indicate actions from which the company should refrain, because an attack on a hot button may provoke an uncomfortable reaction or retaliatory measures from the competitor. Blind spots, in contrast, denote an unused idea or element whose utilization may bring success and profit to a company that is aware of it; on the other hand, disregarding some blind spots may pose a serious threat [2, p. 70-71, 78].
Besides Porter's five forces and four corners models, many other strategic tools can be used to advantage, for example influence and activity maps, scenario planning, decision trees, PEST analysis, core competence analysis or stakeholder analysis. Teams can derive benefits from reading competitors' annual reports and published interviews, considering acquisitions and new product development, looking at the history of their managers and employees, and reviewing past problems and recent moves. The point is not to collect and study all accessible data, but to pick out the useful ones.
E. Closing of the game
After the end of the second round the best strategic possibilities are selected, together with a summary of the whole game and a specification of subsequent activities. It is also appropriate to determine rewards for the game participants and to emphasize the importance of keeping all the information secret. The company's information database should be completed with the new information.
CONCLUSION
The effectiveness of a business game is influenced by two aspects, namely the quality of the game design and human behaviour [5, p. 4]. Playing business war games does not guarantee success, but it increases the probability of success [2, p. 19].
If business war games are played properly, they:
- produce practical ideas, which help in making better decisions and formulating improved plans,
- provide participants with better insights into market dynamics and competitive threats,
- formulate specific action recommendations,
- suitably allocate limited valuable resources with regard to the market and competitors,
- thoroughly evaluate the company's strategic global and regional position,
- protect the launch of a new product,
- gain the support and engagement of other members of the company through their identification with the project,
- formulate a tenable entrepreneurial project,
- reveal the company's internal culture,
- detect the company's level of intelligence (data, news, information, knowledge),
- identify and deepen the understanding of industrial trends and anticipate their changes,
- find the company's and competitors' blind spots and hot buttons,
- make the most accurate predictions of third parties' moves,
- place competitors' moves in perspective,
- develop a system of early warning [2, p. 29-32; 3; 7].
Several case studies and retrospective games have proven the usefulness of the business war game tool [2]. The process of strategic planning is very important for a company's future. Business war games certainly help in this process because they emphasize recognizing possible and probable future reactions to the strategy in advance. Through business war games some corrective measures can be taken in time, which may prevent serious real problems or future damage. It can be said that the business war game method is cheap, simple, safe, structured and constructive, and gives realistic, useful results.
BIBLIOGRAPHY:
[1] FOTR, J.; GRIGAR, F.; HÁJEK, S. Simulační hry. Praha : Institut řízení, 1981. ISBN 57-01-81.
[2] GILAD, B. Strategické válečné hry v podnikání. Praha : Management Press, 2010. ISBN 978-80-7261-216-1.
[3] GILAD, B. Business war games. [online] [2011-01-11]. Available from: http://www.bengilad.com/
[4] KELLY, A. Decision making using game theory. Cambridge : Cambridge University Press, 2003. ISBN 0-521-81462-6.
[5] LIJUAN, L. Overview of business games. In: Wireless Communications, Networking and Mobile Computing, 4th International Conference 2008. [online] [2011-01-11]. Available from: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4681098&isnumber=4677909.
[6] SHOR, M. Game theory and business strategy. [online] [2011-01-12]. Available from: http://www2.owen.vanderbilt.edu/mike.shor/courses/game-theory/lecture01.html.
[7] War Gaming: Theory & Practice. [online] [2011-01-11]. Available from: http://www.academyci.com/Seminars/c-war_gaming.html.
[8] Benjamin Gilad. Wikipedia.org. [online] [2011-01-11]. Available from: http://en.wikipedia.org/wiki/Benjamin_Gilad.
ADDRESS:
Ing. Karolína MUŽÍKOVÁ
Department of Management Theories
Faculty of Management Science and Informatics
University of Žilina
Univerzitná 8215/1
010 26 Žilina
Slovak Republic
E-mail: [email protected]
GIS IN MUNICIPALITY ADMINISTRATION
Irena Opatřilová, Dalibor Bartoněk
Brno University of Technology
Abstract: This paper deals with GIS for small municipalities. The project is based on the information needs of the self-government of small municipalities or city districts. It resolves the compromise between the lack of funds to build a special-purpose GIS and the need for computer support in accounting and decision-making processes. The article describes the construction of the project. The first phase is the design of the system, including the determination of user requirements and data collection; the layers integrated into the system (national map series, ortho-photomaps, cadastre of real estates, town-planning documentation, environment, etc.) and the data providers are described in this part. The second phase is the implementation of the designed system, comprising steps such as choosing appropriate software (primarily ArcGIS), data input, data transformation and data analyses. Two types of analyses were made: surface analyses based on the created digital terrain model (terrain maps of exposure, slope, shadow, visibility, etc.) and thematic spatial analyses (finding the optimal place to live, selecting localities for the construction of wind or solar power plants, a good place for a playground, etc.). The final phase covers the creation of the required outputs and the installation of the system at the local authority. The project was exported into the *.pmf format for the freeware browser ArcReader because the users do not have ArcGIS software. Some problems that appeared in the individual phases are also described. To date, GIS technology has been deployed using this method in the small municipalities of Křetín, Vranová, Jinačovice, Moravice, Želešice and Silůvky and in the Districts of Brno-Jundrov, Brno-Líšeň and Brno-Jih (Komárov, Horní and Dolní Heršpice, Přízřenice).
Keywords: GIS, municipality, administration, spatial analysis.
1. INTRODUCTION
In recent years information technology has been among the fastest growing scientific disciplines and cuts across all branches of knowledge. The area of state administration is no exception: the demand is growing for each authority to have its own geographic information system (GIS) over the territory of its autonomy. GIS arise at the regional level as well as at the level of small municipalities and smaller city districts.
The deployment of GIS technology in small municipalities or city districts has its own specifics. On the one hand, there is a modern technology that is certainly a very useful tool in the practice of state administration and in many cases may be required by legislation; on the other hand, the means for acquiring the hardware and software components of GIS, including the training of operators, are very restricted for financial reasons.
Both of these opposing factors have led to the idea of carrying out the implementation of GIS for small municipalities (or city districts) in the form of master's theses. The advantages are the negligible cost of the system design and low, user-controlled running costs. To date, GIS technology has been deployed using this method in the small municipalities of Křetín, Vranová, Jinačovice, Moravice, Želešice and Silůvky and in the Districts of Brno-Jundrov, Brno-Líšeň and Brno-Jih (Komárov, Horní and Dolní Heršpice, Přízřenice) [1].
2. DESIGN OF THE SYSTEM
2.1. REQUIREMENTS FOR THE SYSTEM
A uniform structure of a GIS for the needs of small municipalities cannot be defined easily, because each municipality has different requirements for the content of the system depending on its position and its historical and industrial specifics. Therefore it was first necessary to arrange a personal meeting with the mayor (or mayoress) of the municipality or city district. The interview determined what requirements the representatives of the office have and what idea of the overall structure of the system they have as future users of the GIS. At the same time it was checked whether the office holds any materials that could be utilized in the subsequent creation of the GIS. At the conclusion of the meeting the creator of the system arranged possible future cooperation with the representatives of the office.
THE MOST COMMON USER REQUIREMENTS
- drawing of all underground services
- display of objects registered in the cadastre of real estates, including the database of the file of descriptive information relating to these objects
- valid town-planning documentation
- ortho-photos of the interest area
- planimetry and altimetry of the map for a given location
- technical characteristics and master plans (transport, greenery, waste management, culture, etc.)
- register of population, register of payers of dog and waste fees
- price map of plots
- information about the environment
- thematic spatial analyses.
2.2. THE COLLECTION OF NEEDED DATA
Data collection is a crucial phase of the design of a GIS because, simply put, no GIS can be created without data. Generally, data are located in different places, under different administrators or heads of departments. The result of the system design depends not only on the availability of the materials, but also on the willingness and helpfulness of the administrators of these data.
The following is a list of the categories of data layers that were included in the system on the basis of user requirements and the discretion of the system creator, together with the main data providers.
THE TOPOGRAPHIC LAYERS
- ortho-photomaps of the interest area (source: Geodis Brno, The Czech Office for Surveying, Mapping and Cadastre (COSMC))
- planimetry and altimetry of the map, digital terrain model (source: ZABAGED - COSMC)
- cadastral maps (digital cadastral map, thematic cadastral map, raster of the cadastre of lands, orientation map of parcels; source: the cadastre office, the local authority (LA))
- tourist and bike tourist maps, nature trails (source: SHOCart, LA)

THE THEMATIC LAYERS
- town-planning documentation (source: Brno City Municipality (BCM), LA)
- utilities (source: BCM, network administrators - E.ON, JMP Net, Jihomoravské vodovody a kanalizace, Telefonica O2 and others)
- infrastructure - maps of roads, railways, public transport networks etc. (source: BCM, LA, ZABAGED - Fundamental Base of Geographic Data)
- price map of building or agricultural plots (source: BCM)
- environment - waste management, significant green spaces, maps of protected areas, air quality maps, map of the main wind directions, noise maps, floodplains, soil maps, forest cover maps etc. (source: BCM, The Agency for Nature Conservation and Landscape Protection of the Czech Republic, T. G. Masaryk Water Research Institute, Lesy ČR)
- geological and geophysical maps - radon index, gamma-spectrometry, radiometric and geomagnetic maps (source: Czech Geological Survey, Geofyzika Brno)
- historical maps, maps of immovable cultural monuments (source: server mapy.cz, The South Moravian Regional Authority)
- interactive maps with hyperlinks created on the basis of own photographs of objects in a given locality
The biggest problem in collecting the data was the reluctance of some data administrators to provide data free of charge, so it was not possible to integrate those data into the system. Another problem was the time-consuming communication between data providers and the applicant, because it was not always easy to explain what one was asking for, for what purposes the data were needed, in what form, etc. A further complication was the incompleteness of the data, so in some cases the necessary data were supplemented by direct field measurements. In one case the problem was a complete lack of data for the requested data layer.
It is worth mentioning that a large amount of information and data can also be found on the Internet. The map services that may be used are in particular the so-called IMS (Internet Map Service) and WMS (Web Map Service). An important provider is primarily the Czech Environmental Information Agency, which provides services through the web geoportal of the public administration. There you can find a large amount of useful data not only on the environment, but also information on the administrative division, noise maps, population, transportation, etc. Other WMS providers are, for example, COSMC and the Czech Geological Survey.
It is rewarding to use the data from these services when creating a GIS. Either we can vectorize the maps from the service and then enter the object information into the attribute table (this is useful when we know that the municipality has a problem with connecting to the Internet), or it suffices to connect these services to our GIS using their URL addresses.
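Connecting a WMS service by URL boils down to issuing standard GetMap requests. The following Python sketch builds such a request; the endpoint and layer name are placeholders (real values come from the provider's GetCapabilities document), and EPSG:5514 is assumed as the common code for S-JTSK:

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=800, height=600):
    """Build a WMS 1.1.1 GetMap request URL for a layer in S-JTSK
    (assumed here to be EPSG:5514).  `endpoint` and `layer` are
    placeholders, not real service values."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:5514",      # S-JTSK / Krovak East North
        "BBOX": ",".join(str(c) for c in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical example; the endpoint is a placeholder, not a real service
url = wms_getmap_url("https://example.org/wms", "cadastral_map",
                     (-600000, -1160000, -590000, -1150000))
print(url)
```

A desktop GIS does essentially this internally when a WMS layer is added by URL, so the data stay on the provider's server and remain current without local updates.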
3. IMPLEMENTATION OF THE DESIGNED SYSTEM
3.1. CHOOSING APPROPRIATE SOFTWARE
The design of the system was followed by a phase of processing and displaying the data in appropriate software. The implementation itself can be divided into several steps.
For creating the GIS, the ArcGIS software from ESRI was chosen, specifically the ArcGIS Desktop product at the highest service level, ArcInfo. The Geomedia software from Intergraph was also used in several cases.
3.2. THE INPUT AND THE DATA TRANSFORMATION
This step generally consists of integrating the materials into layers in a single coordinate system, in this case the Czech system S-JTSK. The obtained data came in two forms, digital (vector or raster) or analog. The materials that were digital and already had the S-JTSK coordinate system defined required the least work. The data that were not delivered in this system first had to be transformed. Problems arose, for example, in the transformation of historical maps, because it was difficult to find the identical points needed for the transformation. In this part some errors in the planimetry of individual layers also had to be resolved.
Most of the work involved the analog materials. The maps among these documents first had to be converted to raster form by scanning. These data were incorporated into the system either in the newly created bitmap format or further converted to vector form using editing tools.
Other materials obtained in analog form were various databases that the municipalities keep on paper (e.g. the population register). In this case there is nothing to do but transcribe the database manually into digital form and then connect it with the graphic part of the system.
After the integration of the various GIS layers, the last step remained: to classify these layers properly, i.e. to define the graphic attributes of each class of elements and then to fill the attribute tables of the layers with the additional needed information. If we do not create metadata, every object should at least contain in its attribute table basic information about the type of object, the data source, the update date, the date of creation of the layer and the author, and possibly hyperlinks to related documents, photos or web pages.
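The minimal attribute record described above can be sketched as a simple data structure. The field names here are illustrative only, not the schema actually used in the project:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LayerAttributes:
    """Minimal per-object attribute record suggested in the text.
    Field names and the sample values are illustrative."""
    object_type: str
    data_source: str
    last_update: date
    created: date
    author: str
    hyperlinks: list = field(default_factory=list)

# Hypothetical example record for one utility object
rec = LayerAttributes("water main", "JMP Net", date(2010, 6, 1),
                      date(2010, 3, 15), "J. Novak", ["photo_0123.jpg"])
print(rec.object_type)   # water main
```

Keeping even this minimal set consistent across layers substitutes for missing metadata when the system is later updated or handed over.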
3.3. THE ANALYSES OF DATA
Once the GIS has been created we can proceed to the next phase, the analyses of its data, through which we obtain more data and new information. The analyses can be divided into two groups: surface analyses and thematic spatial analyses.
THE SURFACE ANALYSES
The digital terrain model (DTM) was created from the 3D contours of ZABAGED. The following rasters were derived from the DTM on the basis of surface analyses:
- exposure terrain map - gives information about the orientation to the cardinal points
- slope terrain map - informs about the slope conditions in the area
- shadow terrain map - shows the rate of reflected light for each surface of the terrain towards a defined light source
- visibility terrain map - specifies the area visible from a predefined position; the results of the analysis are the places that are visible from this station and those that are not visible and lie in terrain shadow
- grid terrain model - the interpretation of elevation using colored hypsometry contours.
These layers were then used for the following analyses.
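The slope and exposure (aspect) rasters are derived from the DTM by differentiating the elevation surface. A simplified pure-Python sketch of what GIS surface tools compute, using central differences on a small grid (the convention that aspect is the downslope direction, measured clockwise from north, is assumed here):

```python
import math

def slope_aspect(dtm, cell):
    """Slope (degrees) and aspect (downslope direction, degrees clockwise
    from north) for interior cells of a 2-D elevation grid, via central
    differences.  Row 0 is taken as the northern edge; border cells are
    left at 0 for simplicity."""
    rows, cols = len(dtm), len(dtm[0])
    slope = [[0.0] * cols for _ in range(rows)]
    aspect = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dz_dx = (dtm[i][j + 1] - dtm[i][j - 1]) / (2 * cell)   # eastward gradient
            dz_dy = (dtm[i - 1][j] - dtm[i + 1][j]) / (2 * cell)   # northward gradient
            slope[i][j] = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
            aspect[i][j] = math.degrees(math.atan2(-dz_dx, -dz_dy)) % 360
    return slope, aspect

# A plane rising 1 m per 10 m cell toward the east: gentle slope, west-facing
dtm = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
slope, aspect = slope_aspect(dtm, cell=10.0)
print(round(slope[1][1], 1), round(aspect[1][1], 1))   # 5.7 270.0
```

Production tools additionally weight the eight neighbours and handle edges and flat cells, but the derivatives above are the core of both the slope and the exposure maps.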
THE THEMATIC SPATIAL ANALYSES
The following analyses were processed for the given locations using ArcGIS tools, the data of the created GIS and the layers derived from the DTM:
- Finding the optimal place to live - the specific parameters of this place were identified on the basis of a questionnaire. The layers used for the analysis were the town-planning documentation (areas of housing, woodland, gardens and public transport stops), the noise maps of road transport, container spaces for waste separation, the price map of building land, and the slope and exposure maps of the terrain [2].
- Site selection for the construction of houses - the data included in the analysis were the noise maps of road transport, the thematic cadastral map (land use: arable land, gardens and permanent grassland), bus stops and the boundaries of the Q100 floodplains [3].
- Assessment of stress factors in housing areas from the standpoint of environmental studies - the analysis was based on ten stress factors. The data used came from dispersion studies, floodplain areas, noise maps of road transport, areas of groundwater contamination and areas of former landfills [4].
- Finding a location for a solar power plant - the parameters were given by companies engaged in the construction of these plants. The data used for the analysis came in particular from the exposure and slope terrain maps, from a solar study and from data of the Czech Hydrometeorological Institute (CHMI) [5].
- Finding sites for a wind power plant - the parameters were chosen for a specific type of wind turbine. Data from the CHMI and the design of the wiring were used [6].
- Appropriate location of a playground - the parameters of the plot for the location of a playground were given by the processor. Data from the cadastral map and the slope terrain map were used [5].
- Location of a groundwater purification plant - the parameters were chosen according to the specific type of plant. The data of the cadastre of real estates were used [6].
- Choosing a location for an observation tower - data from the visibility terrain maps were used [6].
- Values of erosion for the interest area on farmed land and the proposal of anti-erosion measures - cadastre data, the DTM and BPEJ were used [6].
- Selection of parcels in the noise band - using the cadastral map [7].
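Analyses such as "finding the optimal place to live" are multi-criteria overlays: each input layer is rescaled to a suitability score and the scores are combined with weights. A minimal sketch of the weighted-sum core of such an overlay (the grids, scores and weights below are hypothetical, not the project's actual parameters):

```python
def weighted_overlay(layers, weights):
    """Combine equally sized suitability grids (e.g. scores 0-10) into one
    suitability surface by a weighted sum.  Weights are normalized so they
    sum to 1, as in typical GIS weighted-overlay tools."""
    total = sum(weights)
    rows, cols = len(layers[0]), len(layers[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for layer, w in zip(layers, weights):
        for i in range(rows):
            for j in range(cols):
                out[i][j] += layer[i][j] * w / total
    return out

# Two hypothetical 2x2 criterion grids: noise suitability and slope suitability
noise_suit = [[10, 4], [6, 2]]
slope_suit = [[8, 8], [2, 6]]
result = weighted_overlay([noise_suit, slope_suit], [2, 1])  # noise counts double
print(result)
```

The cell with the highest combined score is the analysis result; in ArcGIS the same computation is built from reclassified rasters and map algebra rather than Python loops.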
4. PRESENTATION OF DATA AND THE OUTPUTS
The last phase is the creation of so-called layouts, the outputs from the GIS, for the presentation of the project results. Next, an overview of all data layers located in the created GIS was made. Finally, an animation of a flight over the given territory and a 3D visualization were created, with the ortho-photos and the DTM as source materials.
This article contains two figures as examples of the outputs. They demonstrate the outputs of the GIS of the District of Brno-Jundrov. The first figure shows the structure of this GIS with 13 thematic layers and an illustration of 16 data layers. The second figure concerns the spatial analysis of finding the optimal place to live: seven parameters of this place were identified using a questionnaire and two parameters were given by legislation. The figure shows the processing model of this analysis in the ArcGIS interface and the results of the questionnaire.
5. CONCLUSIONS
The created GIS, including all of its outputs, was handed over to the local authority (or the municipal authority) and the users were familiarized with the content of the system. Because the ArcGIS software is unaffordable for small municipalities, the individual data sets were converted to the *.pmf (Publish Map File) format. This file can be viewed in the ArcReader application, which is freely available on the ESRI website. This application has a similar interface to ArcInfo, but its editing functions are suppressed. This browser was installed and briefly introduced to the users.
Fig. 1. The Structure of GIS of the District of Brno-Jundrov
Fig. 2. The Model of Spatial Analysis and Results of Questionnaire
Among the most useful layers for the representatives of the municipality and its citizens are undoubtedly the plots of utilities, the cadastral map, the town-planning documentation and the information about the environment. The information system serves as support for decision making and related processes of the self-government of municipalities. It can be used as suitable material for town planning or as a basis for construction activities. The results of the work may also serve to inform local citizens or visitors to the municipality.

Problems can occur in all phases of the creation. In data collection the problems are especially the reluctance to provide data and the deficiency or absence of data. In data processing the aggravating factor can be the laborious conversion from analog to digital form; the analog form of the materials occurs mainly in the offices of smaller municipalities, whereas for the GIS of city districts the data were obtained mostly from the BCM in digital form. Further complications may occur during data transformation. In the final phase of the project the training of a representative of the users was always necessary, because unfortunately most workers in municipal offices have no experience with such an information system, as opposed to regional offices, where there are separate departments with GIS staff.

The biggest remaining problem is updating the data. Because it is assumed that updates will not be frequent (at most once a year), this can be solved by a one-off contract or again by assigning a master's thesis. From the standpoint of updates, some data layers can be handled at least through the IMS or WMS services (cadastral map, ortho-photomaps, ZABAGED, etc.). The most common reason why some smaller municipalities or city districts still do not use GIS is financial unavailability. Other reasons are a lack of awareness of GIS, a supply of GIS functions exceeding the needs of the office, or the possibility that the municipality is so far only thinking about introducing a GIS [8]. So the main target of this project was and still is the successful creation of a GIS for the needs of small communities free of charge, as well as familiarizing them with GIS issues and convincing them that such a system is a benefit to its users.
REFERENCES
[1] BARTONĚK, D.; POSPÍŠIL, L. GIS pro potřeby malých obcí. In: Geodetický a kartografický obzor, 55 (97), 2009. p. 97-99.
[2] OPATŘILOVÁ, I. GIS Městské části Brno-Jundrov. (Master's thesis), 73 p. 2010. Brno : Brno University of Technology, Faculty of Civil Engineering, Institute of Geodesy.
[3] ČEPERA, D. Enviromentální zhodnocení Městské části Brno-Jih na bázi GIS. (Master's thesis), 48 p. 2009. Brno : Brno University of Technology, Faculty of Civil Engineering, Institute of Geodesy.
[4] ČERNÝ, M. Enviromentální zhodnocení MČ-ti Brno-Jih (Komárov) na bázi GIS. (Master's thesis), 54 p. 2009. Brno : Brno University of Technology, Faculty of Civil Engineering, Institute of Geodesy.
[5] ANDIEL, J. GIS obce Jinačovice. (Master's thesis), 59 p. 2010. Brno : Brno University of Technology, Faculty of Civil Engineering, Institute of Geodesy.
[6] TRATINOVÁ, J. GIS malých obcí. (Master's thesis), 59 p. 2010. Brno : Brno University of Technology, Faculty of Civil Engineering, Institute of Geodesy.
[7] VYBÍRALOVÁ, A. GIS pro potřeby malých obcí. (Master's thesis), 52 p. 2008. Brno : Brno University of Technology, Faculty of Civil Engineering, Institute of Geodesy.
[8] KULÍČKOVÁ, Š. Marketingový výzkum ve vybrané firmě. (Master's thesis), 116 p. České Budějovice : University of South Bohemia, Faculty of Economics, Department of Management.
ADDRESS
Ing. Irena OPATŘILOVÁ
Institute of Geodesy
Faculty of Civil Engineering
Brno University of Technology
Veveří 95
602 00 Brno
Czech Republic
Email: [email protected]
Ing. Dalibor BARTONĚK
Institute of Geodesy
Faculty of Civil Engineering
Brno University of Technology
Veveří 95
602 00 Brno
Czech Republic
Email: [email protected]
CZECH SPACE TECHNOLOGY “KNOW-HOW” ENTERING THE INTERNATIONAL
SPACE STATION
Marek Šimčák
European Polytechnic Institute Kunovice
Abstract: Nowadays, at the beginning of the second decade of the 21st century, we are witnessing remarkable development in many technological branches. Space technologies undoubtedly belong to the most dynamic ones, having an increasing and substantive impact on the functioning of the present world. Space technologies are becoming a natural component of many sectors of human activity. This paper presents an extraordinary success of the Czech space electronics industry that opens a door to the International Space Station (ISS), often considered the most challenging technical project ever undertaken by mankind. At present, sophisticated computations and design activities for a space electronic module called the European Laser Timing instrument (ELT) are running under the responsibility of a team of Czech researchers and engineers. This instrument will form an integral part of the ACES payload to be operated in the Columbus science laboratory, Europe's key contribution to the ISS. It is my great pleasure to also present the fact that the European Polytechnic Institute has become an active player in Czech space educational activities. Within the framework of a new course called "Electronics in space devices", established in 2010, the first student team project, "Introduction to the ISS systems", was carried out.
Keywords: International Space Station (ISS), European Space Agency (ESA), Atomic Clock
Ensemble in Space (ACES), European Laser Timing instrument (ELT)
INTRODUCTION
The International Space Station (see Figure 1) is a unique orbiting research laboratory that has been designed and built in the framework of worldwide peaceful technological co-operation. The main ISS partners are the United States, Russia, the European countries associated in ESA, Japan and Canada. This international co-operation is considered one of the largest partnerships in the history of science. Since the first module of the ISS was launched in 1998, the Station has circled the globe 16 times per day at a speed of 28,000 km/h at an altitude of about 370 km, covering a distance equivalent to the Moon and back daily. Once complete, the Station will be as long as a football field and will have as much living space as a five-bedroom house.
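The quoted figures can be sanity-checked with simple arithmetic. The sketch below assumes a mean Earth radius of 6,371 km and a mean Earth-Moon distance of 384,400 km (commonly used values, not taken from the text); it confirms the roughly 16 orbits per day, while the daily distance comes out slightly under a full Moon round trip:

```python
import math

EARTH_RADIUS_KM = 6371.0     # mean Earth radius (assumed value)
MOON_DISTANCE_KM = 384400.0  # mean Earth-Moon distance (assumed value)

altitude_km = 370.0          # ISS altitude quoted in the text
speed_kmh = 28000.0          # ISS speed quoted in the text

orbit_length = 2 * math.pi * (EARTH_RADIUS_KM + altitude_km)  # circular orbit assumed
daily_distance = speed_kmh * 24
orbits_per_day = daily_distance / orbit_length

print(round(orbit_length))        # 42355  (km per orbit)
print(round(daily_distance))      # 672000 (km per day)
print(round(orbits_per_day, 1))   # 15.9, i.e. roughly 16 orbits per day
print(round(daily_distance / (2 * MOON_DISTANCE_KM), 2))   # 0.87 of a Moon round trip
```

So "16 times per day" is consistent with the stated speed and altitude, and "to the Moon and back daily" holds as an order-of-magnitude comparison.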
Europe's key contribution is the multipurpose science laboratory called Columbus. This is a place where scientists can send experiments to be carried out in weightless conditions and where major technological and scientific achievements are made. This participation is also a great stimulus for the European industry performing the development and manufacturing of cutting-edge space systems and hardware.
It is remarkable that the prototype development activities for the first real Czech space hardware intended for the ISS started no later than one year after completion of the Czech Republic's formal membership in ESA. At that time the contract between EADS Astrium, the leading ESA prime contractor, and the Czech Space Research Centre (CSRC), a Czech private engineering company, was officially signed. Entering this challenging project was possible because all the strict criteria of the evaluation process supervised by the ESA experts were satisfied.
PRESENT ACTIVITY OF THE CZECH REPUBLIC IN ESA
The Czech Republic became the 18th member state of the European Space Agency (ESA) on 14 November 2008. This moment can be considered one of the most important milestones paving the long way to new visionary achievements of Czech space technologies and industry in the near and distant future. Accession to the ESA convention was not an easy process, covering not only the PECS transition programme (2005-2008); the extensive background in space science and technology, based on substantial participation in the Intercosmos programme, and the strong interest of recent Czech governments must also be mentioned.
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
75
The country does not yet have a national space agency, but one has been proposed and recommended in the
strategic document called the "Czech Space Plan", recently approved by the Czech government. At present, several
Czech official institutions and authorities are active across the whole complex of national space activities. The main
responsibility is delegated to the Ministry of Transport, which co-operates with the Ministry of Education,
Youth and Sports, the Czech Space Office and the Czech Space Alliance. The Czech Space Office (CSO) is the
official contact point especially for academic issues, science, education and research activities, while the
Czech Space Alliance (CSA) represents the great majority of the active space industry.
The Czech Space Alliance plays a key role in the country's success in ESA. It is an industrial association of
Czech space companies with proven skills, some of which have more than a 15-year track record in
space, including work for ESA. The alliance was established in 2006 under the auspices of CzechTrade, the export
promotion agency of the Ministry of Industry and Trade. Its main goals include representing and promoting the
interests of the space industry to national decision makers and stakeholders and to the national and international
media, and co-operating with the ministries and all other official entities supporting space activities in the
formulation of space policy and the creation of suitable conditions for the growth of the Czech space industry.
Moreover, the alliance presents the skills of its members at international events, helping them to develop business
relationships with potential partners inside ESA and other space organizations.
Figure no.1: International Space Station, © ESA
EUROPEAN SPACE AGENCY
ESA is an international organization of 18 member states, including the Czech Republic, and can be called
Europe's gateway to space. More than 2 000 people work to fulfil ESA's main mission, which is to shape the
development of Europe's space capability and to ensure that investment in space continues to deliver benefits to the
citizens of Europe and the world. The founding act of ESA is the Convention for its establishment, signed on 30
May 1975 in Paris. By coordinating the financial and intellectual resources of its members, ESA can undertake
programmes and activities far beyond the scope of any single European country. ESA's budget for 2010 is almost
€4 000 million. ESA operates on the basis of geographical return, i.e. it invests in each member state, through
industrial contracts for space programmes, an amount more or less equivalent to that country's contribution.
ESA's main domains include science and robotic exploration for space discovery, Earth observation, microgravity
research, satellite navigation, telecommunications and integrated applications, innovative technology for everyday
life, the launcher industry, and human spaceflight. All these activities are performed in several ESA centres, as
follows:
- ESA Headquarters in Paris, France;
- EAC, the European Astronauts Centre, in Cologne, Germany;
- ESAC, the European Space Astronomy Centre, in Villanueva de la Canada, Madrid, Spain;
- ESOC, the European Space Operations Centre, in Darmstadt, Germany;
- ESRIN, the ESA centre for Earth Observation, in Frascati, near Rome, Italy;
- ESTEC, the European Space Research and Technology Centre, in Noordwijk, the Netherlands.
A new ESA centre has opened in the United Kingdom, at Harwell, Oxfordshire. ESA also has liaison offices in
Belgium, USA and Russia; a launch base in French Guiana and ground/tracking stations in various parts of the
world.
Within the context of European space business, ESA is in charge of managing all the procedures for attributing
and placing contracts, through the "Invitation-To-Tender" (ITT) system, up to the final approval of contracts
awarded to specific industrial entities. As for the Czech Republic, in 2009 ESA announced its historically first
call for proposals, titled AO6052. The second call, titled AO6647, has just been opened. At present, all Czech
companies with ambitions to enter the European space business are busy preparing their best proposals for ESA.
ESA CONTRIBUTION TO THE INTERNATIONAL SPACE STATION
The ISS can also be characterized as a versatile, permanently inhabited research institute in Low Earth Orbit,
providing a large observation platform in outer space for scientific research and applications. It also serves as
a test centre to facilitate the introduction of new technologies. This permanently occupied human outpost in outer
space should also serve as a stepping stone for further space exploration. Once completed, the ISS will have the
following parameters:
- Dimensions: width 108 m, length 74 m, height 45 m
- Pressurized volume: 1 200 m³
- Total mass at completion: ~450 000 kg
- Orbital altitude: 370-460 km
- Orbital velocity: 7.7-7.6 km/s (~27 500 km/h)
The first launch took place on 20 November 1998. The launch vehicles connecting the Earth with the ISS are
provided by four of the five participating partners (ESA: Ariane 5 launcher; Japan: H-IIA launcher; Russia:
Proton and Soyuz launchers; USA: Space Shuttle).
As already mentioned, the key contribution of Europe is the Columbus module (see Figure 2). This research
laboratory is permanently attached to the ISS and provides internal payload accommodation for experiments in
multidisciplinary research into material science, fluid physics and life science. In addition, an external payload
facility hosts experiments and applications in the fields of space science, Earth observation and technology. The
Columbus module is characterized by the following parameters:
- Dimensions: length 6 871 mm, largest diameter 4 477 mm
- Total internal volume: 75 m³
- Payload mass: 2 500 kg
- Launch mass: 12 775 kg
- Supported crew: 3 persons
- Cabin temperature: 16-27 °C
There is a lot of flight hardware in Columbus, for example: Biolab, the Fluid Science Laboratory, the European
Physiology Module, the European Drawer Rack, the European Transport Carrier, data and mission computers,
Command/Measurement Units, a high-rate multiplexer, a Mass Memory Unit, video cameras and many other
scientific instruments.
Figure no.2: Columbus module, © ESA
CZECH PARTICIPATION AT THE ESA’S MISSION CALLED “ACES”
The whole ISS, including the Columbus module, is under permanent development to increase its benefits for
human applications. Among the many currently running projects, development of the Atomic
Clock Ensemble in Space (ACES) is one of the exciting challenges of the present time. The ACES is an ESA
mission in fundamental physics based on the performances of a new generation of atomic clocks operated in the
microgravity environment of the ISS.
The ACES payload accommodates two atomic clocks: PHARAO (a primary frequency standard based on
samples of laser-cooled cesium atoms, developed by CNES) and SHM (an active hydrogen maser for space
applications, developed by Spectratime under ESA contract). The performances of the two clocks are combined
to generate an onboard timescale with the short-term stability of SHM and the long-term stability and accuracy
of the cesium clock PHARAO. The mission will also provide precise orbit determination of the ACES clocks.
One of the main objectives of the ACES mission consists in maintaining a stable and accurate onboard timescale,
which will be used to perform space-to-ground as well as ground-to-ground comparisons of atomic frequency
standards.
The ACES clock signal will be transferred to the ground by a time and frequency transfer link in the microwave
domain (MWL). MWL compares the ACES frequency reference to ground clocks distributed worldwide,
enabling fundamental physics tests and applications in different areas of research. ACES will test a new
generation of atomic clocks using laser-cooled atoms. Comparisons between distant clocks, both space-to-ground
and ground-to-ground, will be performed worldwide with unprecedented resolution. These comparisons will be
used to perform precision tests of the special and general theories of relativity. In addition, ACES will demonstrate
a new type of 'relativistic geodesy' which, based on a precision measurement of Einstein's gravitational
redshift, will resolve differences in the Earth's gravitational potential at the level of tens of centimetres. ACES will
also contribute to the improvement of the global navigation satellite systems (GNSS) and to their future
evolutions; it will perform time transfer and ranging experiments by laser light; and it will exploit the GNSS signal
for reflectometry measurements and contribute to the monitoring of the Earth's atmosphere through
radio-occultation experiments.
The European Laser Timing instrument (ELT) is one of the subsystems of ACES (see Figures 3 and 4). The
ELT will extend the ACES operational capabilities with an additional time and frequency link in the optical
domain. It will be an optical link between the ACES payload and established Satellite Laser Ranging (SLR)
stations, providing (1) two-way laser ranging and (2) one-way laser ranging, able to perform a space-to-ground
time transfer via ELT with a time error below 48 ps (23 ps as the target performance).
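The time-transfer arithmetic that such an optical link relies on can be illustrated with a minimal sketch. This is not the ELT flight algorithm; the function below only shows the classical two-way ranging relation, under the assumptions of a symmetric up/down path and with relativistic and atmospheric corrections ignored, and the variable names are mine:

```python
def onboard_clock_offset(t_emit, t_return, t_board):
    """Offset of the onboard clock against the ground clock.

    t_emit, t_return: emission and echo-reception times of one laser
    pulse, measured by the ground-station clock (seconds).
    t_board: arrival time of the same pulse, stamped by the onboard
    detector in onboard-clock time (seconds).
    """
    # With a symmetric path, the pulse reaches the spacecraft midway
    # between emission and echo reception (in ground time).
    t_arrival_ground = (t_emit + t_return) / 2.0
    return t_board - t_arrival_ground
```

For a 2 ms round trip with an onboard stamp 100 ns after the midpoint, the sketch reports a 100 ns clock offset; the actual ELT link is designed to resolve such offsets at the tens-of-picoseconds level quoted above.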
Figure no.3: ELT Instrument
Figure no.4: ELT Instrument
It is an exciting fact that the ELT instrument was designed at the Czech Technical University, and its prototype
is going through the development phase in cooperation with the CSRC company. Once completed, this
instrument will be installed on the Columbus External Payload Facility (CEPF, see Figure 5).
Figure no.5: ACES Payload installation at the CEPF of Columbus, © ESA
EPI AND ITS SPACE EDUCATIONAL ACTIVITIES
At present, it is obvious that the "Space Age" is also entering the Czech education area. Besides the two famous
traditional state universities, the Czech Technical University and the Brno University of Technology, significant
activities are coming from the private university sector too. The European Polytechnic Institute is most probably
the first Czech private educational centre where space technologies have been introduced to students. A new
course called "Electronics in Space Devices" has been offered since 2010. During the opening period the main
emphasis was put on various topics related to space activities, for example: basic space terminology; the Czech
Republic as a new ESA member state and its role in space technologies in the past, at present and in the near
future; an introduction to Czech space hardware; the history of the USA-Russia space competition; the main
technical requirements for hardware operating in space; a brief summary of the main rules of space physics; the
main Czech space authorities; the ESA introduction and its business calls to Czech industry; and the
International Space Station. The ISS, its history, development, internal structure and functionality was the topic
of the first student team project performed in the frame of this new space-related course.
CONCLUSIONS
The main aim of this paper was to present a new, growing phenomenon within international space activities: the
emphatic entrance of Czech space technology "know-how" into the European space business, including the most
challenging international space project ever, the ISS. To underline the positive meaning of this national activity,
the main aspects, priorities and key players of the European and international space scene were introduced. The
ACES/ELT project with Czech participation was presented in more detail. Finally, the first steps of the European
Polytechnic Institute towards the strategic and rapidly growing space technologies were described.
REFERENCES
[1] KUBALA, P. Mezinárodní vesmírná stanice ISS. Kralice na Hané : Computer Media Publishing, 2009. ISBN 978-80-7402-033-9.
[2] BŘINEK, J.; KOZÁČEK, Z. CSRC Proposal to the ACES ELT Instrument, 2009.
[3] HELM, A.; HUMMELSBERGER, B. ACES Completion Phase Technical Preparation for Kick-off of CSRC Subcontract, 2010.
[4] RAUSCH, T. ACES Supplier surveillance audit CSRC, 2010.
[5] KODET, J.; PROCHÁZKA, I.; KIRCHNER, G.; KOIDL, F. ELT - detector package tests. Czech Technical University in Prague; Institute for Space Research, Observatory Lustbuehel, Lustbuehelstr. 46, A-8042 Graz, Austria, August 2010.
[6] All about ESA - Space for Europe. European Space Agency, an ESA Communication Production, 2009. [online] Available from: http://www.esa.int/esaMI/About_ESA/SEMONSEVL2F_0.html
[7] European Space Agency. [online] Available from: http://www.esa.int/
[8] Czech Space Office. [online] Available from: http://www.czechspace.cz/
[9] Czech Space Alliance. [online] Available from: http://www.czechspace.eu/
[10] Odbor kosmických technologií a družicových systémů. Ministerstvo dopravy ČR. [online] Available from: http://www.spacedepartment.cz/
ADDRESS
Ing. Marek Šimčák, Ph.D.
European Polytechnic Institute, Ltd.
Osvobozeni 699
686 04 Kunovice
Tel. +420 572 549 018
fax +420 572 549 018
Email: [email protected]
FIRST RESULTS OF A CELLULAR LOGICAL PROCESSOR USED IN A GENETIC
PROGRAMMING PROCESS
Petr Skorkovský
Areva - NP
Abstract: This paper introduces an implementation of a genetic programming process that is highly
efficient in both time and computing resources, written completely in assembly language. The author
is currently still working on the coding of the program. A preliminary version of the application is
already finished, and the text below describes several experiences gained during the development
process. Another important part of the paper is a description of the first results obtained from the
operation of the preliminary version of the application, demonstrated with one fully working
example of an evolutionary process.
Key words: genetic programming, assembly language, efficient computing, cellular automaton.
SHORT HISTORY OF THE RESEARCH
At the beginning of my research, in autumn 2008, I was thinking about genetic programming processes and
how to increase their performance. As a first improvement, to increase the speed of evolution, I tried to define a
new, universal language for coding a wide range of algorithms which could be evaluated quickly and with high
efficiency.
The theoretical basis (supported by a mathematical background) of the new "language" for universal coding of
algorithms was introduced at the beginning of 2009. The inspiration by cellular automata is evident, and the
processing of an algorithm coded in this "language" can be compared to the processing of a cellular automaton.
An algorithm described by this "language" is represented by cells, the logic functions coded inside them, and
their connections with other cells. Each cell contains binary information which can be propagated to other cells;
this is equivalent to the flow and propagation of binary information through a simulation of a logic circuit of
arbitrary complexity, in discrete steps. The cellular representation of the algorithm, called the "Cellular processor
of logical functions", is then simply converted into a sequence of bits (a binary vector), which is further
evaluated by the genetic algorithm.
The most powerful option for handling and effectively manipulating large amounts of bits and bytes is an
algorithm coded in assembly language. In summer 2009, coding started on the assembly program for the genetic
evolution of algorithms represented by the Cellular processor of logical functions; the application is called
"GenAlg".
In summer 2010, after one year of development during evenings at home (development of this application is
not part of my profession; it is done only in my free time), the first draft of the application was running. Since
then I have been improving all of the program's subparts to gain faster convergence during the evolution, to
increase effectiveness, to implement new useful features, and to develop good examples of problem solving
based on the use of the application.
In the following parts of the paper, detailed information is given about the core of the algorithm representation,
the "Cellular processor of logical functions"; a short outline of the whole genetic programming loop is described;
and, as the main part of the paper, improvements and experiences coming from the operation of the application
are introduced.
PROGRAM REALIZATION
Development tools:
- For coding my implementation of the genetic programming application I use the assembly language of the
32-bit x86 CPU family instruction set. The resulting program is compatible with the Windows OS family, and
the user interface is programmed to use standard Win32 API functions. For compilation of the application's
source code, Microsoft's MASM32 SDK V10 must be used.
Kernel of the algorithm – Cellular processor of logical functions:
The basic construction unit of the Cellular processor of logical functions is one Cell Bn,k, which in
cooperation with other Cells forms a one-dimensional cellular automaton with the absolute Cell address
range <0, nmax>.
Each Cell Bn,k is coded in 32 bits and carries the following binary information:
- 1 bit: the last valid value yn,k ∈ {0, 1} calculated during the current step "k" (1)
- 1 bit: the previous value yn,k-1 ∈ {0, 1} valid in the previous step "k-1" (2)
- 11 bits: the relative link from the first other Cell, an = <-(nmax+1)/2, +(nmax+1)/2 - 1> (3)
- 11 bits: the relative link from the second other Cell, bn = <-(nmax+1)/2, +(nmax+1)/2 - 1> (4)
- 8 bits: the logical function Fn coded by an 8-bit table (described by one byte), Fn = [fn,0, fn,1, fn,2, fn,3, fn,4, fn,5, fn,6, fn,7], with values <0, 255> (5)
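A compact way to see the 1 + 1 + 11 + 11 + 8 bit layout is a pack/unpack pair. The sketch below is in Python rather than the paper's assembly, and the ordering of the fields inside the 32-bit word is my assumption; the paper fixes only the field widths:

```python
def pack_cell(y_k, y_prev, a_n, b_n, f_n):
    """Pack one Cell into 32 bits; assumed layout (MSB to LSB) is
    y_k | y_prev | a_n (11-bit two's complement) | b_n (11-bit) | F_n."""
    return (((y_k & 1) << 31) | ((y_prev & 1) << 30)
            | ((a_n & 0x7FF) << 19) | ((b_n & 0x7FF) << 8) | (f_n & 0xFF))

def unpack_cell(word):
    def signed11(v):            # restore the sign of an 11-bit link
        return v - 0x800 if v & 0x400 else v
    return ((word >> 31) & 1, (word >> 30) & 1,
            signed11((word >> 19) & 0x7FF),
            signed11((word >> 8) & 0x7FF),
            word & 0xFF)
```

Two's-complement storage of the links gives exactly the signed range <-(nmax+1)/2, +(nmax+1)/2 - 1> quoted above when nmax + 1 = 2048.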
During each iteration step "k", the new output value of each Cell Bn,k is calculated from the previous output
value of the Cell at address n + an (Cell A), the previous output value of the Cell at address n + bn (Cell B), and
the previous output value of the Cell Bn itself.
These three binary values are used as an address (0 ... 7) into the 8-bit table Fn to obtain the new output value
(fn,0 ... fn,7) of the Cell Bn,k.
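The update rule described above can be sketched in a few lines of Python (the real implementation is in assembly). Addresses here wrap modulo the automaton size, and the bit order used to form the 3-bit table address (the A-linked value as the most significant bit, the cell's own previous value as the least) is my assumption:

```python
def automaton_step(cells):
    """One iteration step k of the Cellular processor of logical functions.

    cells: list of tuples (y, a, b, F) where y is the current output bit,
    a and b are relative links to two other cells, and F is the 8-bit
    function table. Returns the list of cells after one step.
    """
    n_max = len(cells)
    prev = [c[0] for c in cells]                 # outputs of step k-1
    new_cells = []
    for n, (y, a, b, f_table) in enumerate(cells):
        y_a = prev[(n + a) % n_max]              # value linked via a_n
        y_b = prev[(n + b) % n_max]              # value linked via b_n
        addr = (y_a << 2) | (y_b << 1) | y       # 3-bit address 0..7
        new_cells.append(((f_table >> addr) & 1, a, b, f_table))
    return new_cells
```

For example, a cell whose table is F = 0b11110000 (240) simply copies the value of its A-linked neighbour on every step.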
User interface – screenshots:
Two screenshots are taken from the program as a demonstration:
Figure No. 1: Evolution operations.
Figure No. 2: Evolution is running.
The current progress of the program's development, separated into program steps coded with assembler routines
(inspired by [1] and [2]):
1. Definition of the genetic algorithm's parameters, processing of function definition test cases: coding finished
2. Generation of the first random population: coding finished
3. Calculation of fitness for all members of the population: coding finished
4. Sorting of members of the population according to their fitness: coding finished
5. Testing of termination conditions: coding finished
6. Selection of population members, removal of unsuccessful members: coding finished
7. Definition of pairs of surviving population members and the number of offspring: coding finished
8. Selection of crossing operators to generate children from the defined pairs as a combination of genetic information: coding finished
9. Random mutation operators to modify the genetic information of new members of the population: coding finished
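Steps 2-9 above can be condensed into a generic sketch of the loop's shape. The Python below is only an illustration under simple operator choices of mine (elitist truncation selection, one-point crossover, single-bit mutation); the actual GenAlg operators are more elaborate and coded in assembly:

```python
import random

def evolve(pop_size, genome_bits, fitness, max_fitness, seed=0):
    rng = random.Random(seed)
    full = (1 << genome_bits) - 1
    # Step 2: first random population of binary vectors
    pop = [rng.getrandbits(genome_bits) for _ in range(pop_size)]
    for generation in range(10000):
        # Steps 3-4: fitness calculation and sorting
        pop.sort(key=fitness, reverse=True)
        # Step 5: termination condition
        if fitness(pop[0]) >= max_fitness:
            return generation, pop[0]
        # Step 6: keep the better half, remove unsuccessful members
        survivors = pop[:pop_size // 2]
        children = []
        # Steps 7-8: pair survivors and cross them (one-point crossover)
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            mask = (1 << rng.randrange(1, genome_bits)) - 1
            child = (a & mask) | (b & ~mask & full)
            # Step 9: occasional single-bit mutation
            if rng.random() < 0.2:
                child ^= 1 << rng.randrange(genome_bits)
            children.append(child)
        pop = survivors + children
    return None, max(pop, key=fitness)
```

With a toy fitness such as the number of set bits, this loop quickly drives the population to the all-ones vector, which mirrors how GenAlg searches for a binary vector with maximum fitness.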
EXPERIENCES FROM APPLICATION DEVELOPMENT
Selection of population members
In the first versions of my application I used an exponential function for the selection of population members,
which can be used as a probability function.
After experimenting with the application for some time, I saw that this selection algorithm has problems with
the genetic diversity of the members inside the population. Very soon, after a small number of generations, the
population contained many copies of only one, two or three distinct members, and the progress of the evolution
was too slow.
I then experimented with different other types of selection functions, and with one of them the genetic diversity
was much better than before, when only the simple exponential function was used. As a starting point I used the
previous exponential function and then added the effect of a sine function. This type of selection leads to:
Figure Nr. 3: The selection function corresponds (normalized to 1000 candidates and pmax = 400) to
p(x) = 400 - 0.73*(400*(1 - EXP(-x/200)) + 150*SIN(2*PI*(1000 - x + 50)/300))
This function has three local maxima, at positions 215, 523 and 825. Around these points the probability
function gives members of a generation a higher chance of passing to the next generation, and this behavior
improves the genetic diversity.
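Evaluated numerically, the function from Figure Nr. 3 indeed shows this shape; a quick sketch (variable names mine) samples it over the 1000 candidate ranks and locates the interior local maxima:

```python
import math

# Selection/probability function from Figure Nr. 3, normalized to 1000
# candidates with p_max = 400: an exponential decay over the rank x plus
# a sine term that lifts three rank regions.
def p(x):
    return 400 - 0.73 * (400 * (1 - math.exp(-x / 200))
                         + 150 * math.sin(2 * math.pi * (1000 - x + 50) / 300))

values = [p(x) for x in range(1000)]
# Interior local maxima of the sampled function
maxima = [x for x in range(1, 999)
          if values[x - 1] < values[x] >= values[x + 1]]
```

The scan recovers three maxima at ranks near 215, 523 and 825, matching the positions reported in the text.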
Parallelism of several evolutions
When experimenting with the application, the progress of the evolution was not fast enough, the genetic
diversity was still not large enough, and the evolution could easily get stuck in local extremes.
The number of parallel running evolutions was therefore increased. The main strategy is to mix members
between the parallel running evolutions after a certain number of evolution steps.
Currently I use 8 parallel evolution processes, and their members are mixed after a certain number of evolution
steps have passed.
EXAMPLE OF FIRST ACHIEVED RESULTS FROM THE EVOLUTIONARY PROCESS
For tuning the application, I selected one very simple example: the 3-to-8 bit binary decoder, but without the
enable input (see Figure Nr. 4).
Figure Nr. 4: The 3-to-8 bit decoder (Enable input is not used here)
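For reference, the target behavior being evolved can be stated in two lines of Python. This is a conventional model for comparison, not the evolved vector, and the input bit order (c as the most significant bit) is my choice:

```python
def decoder_3to8(a, b, c):
    """Ideal 3-to-8 decoder: exactly one of the 8 outputs goes high,
    selected by the binary number formed by the inputs (c = MSB)."""
    idx = (c << 2) | (b << 1) | a
    return [1 if i == idx else 0 for i in range(8)]
```

Every one of the 8 input combinations maps to a distinct one-hot output pattern, which is exactly the behavior the function definition file encodes as test cases.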
The size of every binary vector in a generation was set to 64 cells, and there are 1000 vectors in one evolution
process. There are 8 evolution processes running in parallel, so 8000 vectors are processed in each evolution
step.
A function definition file was prepared, which is needed to determine the selection criteria for the calculation of
the fitness of all vectors in a generation. All possible combinations of inputs and outputs are coded here. 8 steps
are defined for the transition process of each test case (output combinations are not observed here, as the result
is not yet stable) and 10 steps are used as the criterion of a stable result. A total of 704 points can be achieved
for the fitness (see Figure Nr. 5).
Figure Nr. 5: Definition file describing behavior of the 3-to-8 bit decoder (Enable input is not used here)
After the evolution process had been running for around 5 hours and around 9233 generation (evolution) steps
had passed, the maximum fitness of 704 points was achieved. One vector from the winning set of all vectors
with the maximum fitness was isolated; see Figure Nr. 6:
Figure Nr. 6: One isolated binary vector which was generated from the evolution process
To prove the correct behavior of the resulting vector, binary outputs were generated by the cellular processor
based on the vector from Figure Nr. 6, using several test cases (Figures Nr. 7, 8, 9). The first three cells from the
left are dedicated to the three inputs of the 3-to-8 bit decoder (in reverse bit order) and the last 8 cells are
dedicated to the 8 outputs (in natural bit order). The first 8 steps of cellular processing have unstable outputs
while the transition process is going on; steps 9 to 21 already have stable outputs:
Figure Nr. 7: Results of cellular processing with test case 1
Figure Nr. 8: Results of cellular processing with test case 5
Figure Nr. 9: Results of cellular processing with test case 7
CONCLUSION
It is necessary to find more improvements in all steps of the evolution cycle, as it is believed that there is still
a possibility to gain better performance in the future. Next, some good examples for testing the application
should be found, to be able to compare the performance of the application with other already existing programs
based on more traditional concepts. The example of the 3-to-8 bit decoder is very simple, and it is necessary to
keep in mind that other examples of the evolutionary process using the cellular processor may be much more
difficult and time consuming (a very long search for a vector with the maximum fitness points).
REFERENCES:
[1] MAŘÍK, V.; ŠTĚPÁNKOVÁ, O.; LAŽANSKÝ, J. Umělá inteligence (4). Praha : Academia, 2003.
[2] ZELINKA, I.; OPLATKOVÁ, Z.; ŠEDA, M.; OŠMERA, P.; VČELAŘ, F. Evoluční výpočetní techniky, Principy a aplikace. Praha : BEN - technická literatura, 2009.
ADDRESS
Ing. Petr Skorkovský
Areva - NP organizační složka
JE Dukovany 269
675 50 Dukovany
mobile phone: +420 777084778
e-mail: [email protected]
COMPLEX CHARACTERIZATION OF FERROMAGNETIC MATERIAL'S
DEGRADATION BY MAGNETIC ADAPTIVE TESTING
Gábor Vértesy¹, Ivan Tomáš²
¹Hungarian Academy of Sciences
²Academy of Sciences of the Czech Republic
Abstract: Tensile tests of plastically deformed specimens were performed in order to investigate the
relation between variation of magnetic characteristics and residual plastic strain. Deformed steel
(CSN 12050) specimens were investigated by the method of Magnetic Adaptive Testing (MAT),
measuring and evaluating series of minor magnetic hysteresis loops. It was shown that an efficient
combination of the MAT parameters yields a reliable and unambiguous correlation with residual
strain of the specimens even though all relations between the strain and each of the individual
MAT-parameters were non-monotonous.
1. INTRODUCTION
Magnetic measurements are frequently used for characterization of changes in ferromagnetic materials, because
magnetization processes are closely related to their microstructure. This makes the magnetic approach an
obvious candidate for non-destructive testing (NDT), for detection and characterization of any defects in
ferromagnetic materials and in products made of such materials. A number of techniques have been suggested,
developed, and are currently used in industry, mostly based on detection of structural variations via the
classical parameters of major hysteresis loops [1]. As an example, magnetic hysteresis measurements were used
to monitor changes in the parameters due to low cycle fatigue in low carbon steel and AISI 4340 samples, with
the overall objective to develop an NDT tool for detecting failure [2]. It was found that the coercivity and
remanence do depend on fatigue, but they do not change monotonously as a function of the residual lifetime.
Such a behavior can cause a problem, as from a single non-monotonous functional behavior of a parameter it is
not always easy to decide whether the measured value of the parameter of an unknown sample corresponds to
the ascending or descending region of the calibration curve.
An alternative, more sensitive and experimentally friendlier approach to this topic was considered recently
[3,4], based on measurement of magnetic minor loops. The method of Magnetic Adaptive Testing (MAT)
introduces a large number of magnetic descriptors of diverse variations in non-magnetic properties of
ferromagnetic materials, from which those optimally adapted to the investigated property and material can
be picked.
The purposes of this work are to investigate plastic deformation in industrial steel specimens by this
nondestructive magnetic method and, in particular, to show that the multi-parametric character of MAT is able
to solve the above-mentioned problem: namely, to determine unambiguously the magnitude of the steel
deformation from a combination of two (or more) parameters obtained from a single measurement, even though
both parameters are non-monotonous.
2. EXPERIMENTAL
Flat samples (115×10×3 mm³) of commercial steel CSN 12050 (used for high-pressure pipelines) were loaded
by application of tensile stress in an Instron 8872 testing machine at a feed rate of v = 0.0015 mm/s. The samples
were plastically deformed up to the strain values ε = 0%, 0.1%, 0.2%, 0.9%, 1.5%, 2.3%, 3.1%, 4%, 7% and
10%. The stress-strain diagram is shown in Fig. 1.
Fig. 1 The stress-strain diagram of the CSN 12050 steel. Circles indicate the measured samples.
All samples were measured by the MAT method [3]. A specially designed Permeameter [5] with a magnetizing
yoke was applied for measurement of families of minor loops of the magnetic circuit differential permeability.
The samples were periodically magnetized with step-wise increasing amplitudes. The Permeameter worked
under full control of a PC. The experimental raw data were processed by an evaluation program, which
interpolated the data into a regular square grid of elements, μij(hai, hbj), of a μ-matrix with a pre-selected field
step. Each μij-element represents one "MAT descriptor" of the investigated material structure variation. The
consecutive series of μ-matrices, each taken for one sample with a value of the strain, εk, of the series of the
more-and-more deformed steel, describes the magnetic reflection of the material plastic deformation. The series
of matrices is processed by another program, which normalizes them by a chosen reference matrix and arranges
all the mutually corresponding elements μij of all the evaluated μ-matrices into a table of μij(ε)-degradation
functions.
The program also calculates the relative sensitivity of each degradation function with respect to ε and draws their
"sensitivity map" in the plane of the field coordinates (hai, hbj) by a scale of colors or shades of gray. By
integration and/or differentiation of permeability along the field, ha, B-matrices and/or μ'-matrices can be
obtained. The B- and μ'-matrices contain in principle the same information as the μ-matrices; however, the
corresponding B(ε)- and μ'(ε)-degradation functions are different and sometimes advantageous.
3. RESULTS AND DISCUSSION
Evaluation of all the types of matrices (μ-, B-, and μ'-) revealed a definite correlation between the MAT
descriptors and the plastic deformation. The degradation functions depend significantly on ε, and they typically
have a maximum at around 7% strain. Consider the most sensitive μ'-degradation functions. There
are two significant regions of the sensitivity map, from which different degradation functions with different
dependence on plastic deformation can be evaluated. The sensitivity map of μ'-degradation functions for the
investigated series of samples is shown in Fig. 2. The most sensitive (white in the figure) area is around ha = 40 A/m,
hb = 150 A/m. Very sensitive and stable μ'-degradation functions can be taken from this area.
[Figure: sensitivity map, magnetizing field ha (-300 to 500 A/m) versus minor loop amplitude hb (-100 to 800 A/m).]
Fig. 2 Map of relative sensitivity of the μ'ij(ε)-degradation functions.
The optimum μ'-degradation function determined from this region is shown in Fig. 3. The cross of lines in the
sensitivity map shows the center of this area. All the μ'ij values are normalized by the reference (the non-deformed
sample). Note that MAT descriptors of the most deformed samples are about 25 times larger than
those of the non-deformed one. However, based on this graph only, the plastic deformation of an unknown sample
cannot be unambiguously determined, because the dependence is not monotonous.
The μ'-degradation functions taken from another area (also marked by crossing lines in the sensitivity map)
show different characteristics; see the example in Fig. 4. This auxiliary μ'-degradation function is not as
sensitive as that of Fig. 3. However, even though it is also not monotonous in the whole region of the strain, it is
monotonous in the important 2-10% strain region, and it is sensitive enough to help to recognize whether the
high-value descriptor of Fig. 3 belongs to the ascending or the descending part of the curve.
[Figure: the optimum, most sensitive μ'-degradation function (ha ≈ 40 A/m, hb ≈ 150 A/m), values 0-25, plotted against actual strain 0-10 %.]
Fig. 3 The optimum μ'ij(ε)-degradation function from the (ha = 40 A/m, hb = 150 A/m) area.
The combination of the two non-monotonous parameters, which were found from the same single
measurement, made it possible to determine unambiguously the actual strain of any unknown sample.
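The two-descriptor disambiguation can be illustrated with a toy calibration. The curve shapes below (a main descriptor peaking near 7-8 % strain and a falling branch afterwards, plus an auxiliary descriptor monotonous on 2-10 %) are invented for illustration; they only mimic the qualitative behaviour of Figs. 3 and 4.

```python
strains   = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
main_desc = [1.0, 8.0, 15.0, 22.0, 24.0, 18.0]   # peaks near 7-8 %, then falls
aux_desc  = [1.0, 1.2, 1.5, 1.9, 2.1, 2.4]       # monotonous on 2-10 %

def estimate_strain(main_value, aux_value):
    """Pick the calibration point whose (main, aux) descriptor pair is
    closest; the auxiliary value resolves the ascending/descending branch."""
    cost = [(m - main_value) ** 2 + ((a - aux_value) * 10.0) ** 2
            for m, a in zip(main_desc, aux_desc)]
    return strains[cost.index(min(cost))]

# A main value of 18 alone is ambiguous (ascending branch near 5 % or
# descending branch at 10 %); the auxiliary value 2.4 selects the branch.
print(estimate_strain(18.0, 2.4))   # -> 10.0
```

The weighting of the auxiliary descriptor is arbitrary here; in practice it would follow from the measured sensitivities.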
[Figure: the auxiliary μ'-degradation function (ha ≈ 400 A/m, hb ≈ 600 A/m), values 0.6-2.4, plotted against actual strain 0-10 %.]
Fig. 4 The auxiliary μ'(ε)-degradation function from the (ha = 400 A/m, hb = 600 A/m) area.
4. CONCLUDING REMARKS
The above outlined result illustrates one of the capabilities of the multi-parametric method of Magnetic
Adaptive Testing. The MAT degradation functions characterize structural changes of ferromagnetic materials
easily, reliably, and without the necessity to magnetically saturate the material.
ACKNOWLEDGEMENTS
The work was supported by the Hungarian Scientific Research Fund (project A08 CK 80173), by project
No. 101/09/1323 of the Grant Agency of the Czech Republic and by the Hungarian-Czech Bilateral
Intergovernmental S&T Cooperation project.
REFERENCES
[1] JILES, D. C. Magnetic methods in nondestructive testing. In: BUSCHOW, K. H. J. et al. (eds.), Encyclopedia of Materials Science and Technology. Elsevier Press : Oxford, 2001, p. 6021.
[2] DEVINE, M. K.; KAMINSKI, D. A.; SIPAHI, L. B.; JILES, D. C. Journal of Materials Engineering and Performance, 1 (1992), p. 249.
[3] TOMÁŠ, I. J. Magn. Magn. Mat., 268 (2004), p. 178.
[4] VÉRTESY, G.; TOMÁŠ, I.; MÉSZÁROS, I. J. Magn. Magn. Mat., 310 (2007), p. 76.
[5] TOMÁŠ, I.; PEREVERTOV, O. In: TAKAGI, T.; UESAKA, M. (eds.), JSAEM Studies in Applied Electromagnetics and Mechanics 9. IOS Press : Amsterdam, 2001, p. 5.
ADDRESS
Gábor VÉRTESY
Research Institute for Technical Physics and Materials Science
Hungarian Academy of Sciences
H-1525 Budapest
P.O.B. 49
Hungary
Ivan TOMÁŠ
Institute of Physics
Academy of Sciences of the Czech Republic
Na Slovance 2
182 21 Praha
Czech Republic
SECTION 2
OSCILLATION OF SOLUTION OF A LINEAR THIRD-ORDER DISCRETE
DELAYED EQUATION
Jaromír Baštinec, Josef Diblík, Alena Baštincová
Brno University of Technology
Abstract. A linear third-order discrete delayed equation Δx(n) = -p(n)x(n-2) with a positive
coefficient p is considered for n → ∞. This equation is known to have a positive solution if p
fulfils an inequality. The goal of the paper is to show that, in the case of the opposite inequality for
p, all solutions of the equation considered are oscillating for n → ∞.
Key words and phrases. Discrete delayed equation, oscillating solution, positive solution,
asymptotic behavior.
Mathematics Subject Classification. Primary 39A10, Secondary 39A11.
1. INTRODUCTION
The existence of a positive solution of difference equations is often encountered when analysing mathematical
models describing various processes. This is a motivation for an intensive study of the conditions for the
existence of positive solutions of discrete or continuous equations. Such analysis is related to an investigation of
the case of all solutions being oscillating (for relevant investigations in both directions we refer, e.g., to [1]-[15]
and to the references therein). In this paper, sharp conditions are derived for all the solutions being oscillating for
a class of linear third-order delayed discrete equations.
We consider the delayed third-order linear discrete equation

Δx(n) = -p(n) x(n-2)    (1)

where n ∈ Z_a^∞ := {a, a+1, …}, a ∈ N is fixed, Δx(n) = x(n+1) - x(n), and p: Z_a^∞ → R⁺ := (0, ∞).

A solution x = x(n): Z_a^∞ → R of (1) is positive (negative) on Z_a^∞ if x(n) > 0 (x(n) < 0) for every n ∈ Z_a^∞.

A solution x = x(n): Z_a^∞ → R of (1) is oscillating on Z_a^∞ if it is not positive or negative on Z_{a1}^∞ for arbitrary
a1 ∈ Z_a^∞.
Definition 1.1 Let us define the expression ln_q t, q ≥ 1, by ln_q t = ln(ln_{q-1} t), ln_0 t = t, where
t > exp_{q-2} 1 and exp_s t = exp(exp_{s-1} t), s ≥ 1, exp_0 t = t and exp_{-1} t = 0 (instead of ln_0 t, ln_1 t, we
will only write t and ln t).
In [2] a delayed linear difference equation of higher order is considered, and the following result related to
equation (1) on the existence of a positive solution is proved.
Theorem 1.2 Let a ∈ N be sufficiently large and q ∈ N. If the function p: Z_a^∞ → R⁺ satisfies

p(n) ≤ 4/27 + 1/(9n²) + 1/(9(n ln n)²) + 1/(9(n ln n ln₂ n)²) + 1/(9(n ln n ln₂ n ln₃ n)²) + … + 1/(9(n ln n ln₂ n … ln_q n)²)    (2)

for every n ∈ Z_a^∞, then there exists a positive integer a1 ≥ a and a solution x = x(n), n ∈ Z_{a1}^∞, of
equation (1) such that x(n) > 0 holds for every n ∈ Z_{a1}^∞.
It is an open question whether all solutions of (1) are oscillating if inequality (2) is replaced by the opposite
inequality

p(n) ≥ 4/27 + 1/(9n²) + 1/(9(n ln n)²) + 1/(9(n ln n ln₂ n)²) + 1/(9(n ln n ln₂ n ln₃ n)²) + … + 1/(9(n ln n ln₂ n … ln_{q-1} n)²) + θ/(9(n ln n ln₂ n … ln_q n)²)    (3)

assuming θ > 1 and n is sufficiently large.
Below we give a partial answer to this open question. Namely, we prove that if the inequality

p(n) ≥ 4/27 + 1/(9n²) + 1/(9(n ln n)²) + θ/(9(n ln n ln₂ n)²)    (4)

holds, where θ > 1 and n is sufficiently large, then all solutions of (1) are oscillatory. The proof of our main
result will use a consequence of one of Y. Domshlak's results [8, Theorem 4]:
Lemma 1.3 Let s and r be fixed natural numbers such that r - s > 2. Let {φ(n)}₁^∞ be a bounded sequence
of positive numbers and ε₀ be a positive number such that there exists a number ε ∈ (0, ε₀) satisfying

0 < Σ_{n=s+1}^{i} φ(n) ≤ π - ε, i ∈ Z_{s+1}^{r},  π ≤ ε + Σ_{n=s+1}^{i} φ(n), i ∈ Z_{r+1}^{r+2},    (5)

Σ_{n=i+1}^{i+2} φ(n) ≤ π, Σ_{n=i}^{i+2} φ(n) ≤ π, i ∈ Z_s^∞.    (6)

Then, if p(n) ≥ 0 for n ∈ Z_{s+1}^{s+2} and

p(n) ≥ ( sin φ(n+2) / sin( Σ_{i=n+1}^{n+2} φ(i) ) ) · ∏_{l=n-2}^{n} ( sin( Σ_{i=l+1}^{l+2} φ(i) ) / sin( Σ_{i=l}^{l+2} φ(i) ) )    (7)

for n ∈ Z_{s+3}^{r} hold, then any solution of the equation

x(n+1) - x(n) + p(n) x(n-2) = 0

has at least one change of sign on Z_{s+2}^{r}.
Moreover, we will use an auxiliary result giving the asymptotic decomposition of the iterated logarithm [7]. The
symbols "o" and "O" used below (for n → ∞) stand for the Landau order symbols.

Lemma 1.4 For fixed r ∈ R \ {0} and fixed integer s ≥ 1 the asymptotic representation

ln_s(n+r)/ln_s n = 1 + r/(n ln n … ln_s n) - r²/(2n² ln n … ln_s n) - r²/(2(n ln n)² ln₂ n … ln_s n) - … - r²/(2(n ln n … ln_{s-1} n)² ln_s n) - r²/(2(n ln n … ln_s n)²) + r³(1 + o(1))/(3n³ ln n … ln_s n)

holds for n → ∞.
2. MAIN RESULT
In this part we give sufficient conditions for all solutions of equation (1) to be oscillatory as n → ∞.

Theorem 2.1 Let a ∈ N be sufficiently large and θ > 1. Assuming that the function p: Z_a^∞ → R⁺
satisfies inequality (4) for every n ∈ Z_a^∞, all solutions of (1) are oscillating as n → ∞.
Proof. We set

φ(n) = ε / (n ln n ln₂ n)

and consider the asymptotic decomposition of φ(n+σ) when n is sufficiently large and σ ∈ R. Applying
Lemma 1.4 (for r = σ and s = 1, 2), we get

φ(n+σ) = ε / ((n+σ) ln(n+σ) ln₂(n+σ)) = ε / (n (1 + σ/n) ln(n+σ) ln₂(n+σ))

= φ(n) · (1 - σ/n + σ²/n² + O(1/n³))
       · (1 - σ/(n ln n) + σ²/(2n² ln n) + σ²/(2(n ln n)²) + O(1/n³))
       · (1 - σ/(n ln n ln₂ n) + σ²/(2n² ln n ln₂ n) + σ²/(2(n ln n)² ln₂ n) + σ²/(2(n ln n ln₂ n)²) + O(1/n³)).
The right-hand side of inequality (7) can be rewritten in the form

( sin φ(n+2) / sin( Σ_{i=n+1}^{n+2} φ(i) ) ) · ∏_{l=n-2}^{n} ( sin( Σ_{i=l+1}^{l+2} φ(i) ) / sin( Σ_{i=l}^{l+2} φ(i) ) )

= [ sin(φ(n-1) + φ(n)) / sin(φ(n-2) + φ(n-1) + φ(n)) ] · [ sin(φ(n) + φ(n+1)) / sin(φ(n-1) + φ(n) + φ(n+1)) ]
  · [ sin(φ(n+1) + φ(n+2)) / sin(φ(n) + φ(n+1) + φ(n+2)) ] · [ sin φ(n+2) / sin(φ(n+1) + φ(n+2)) ]

= [ sin(φ(n-1) + φ(n)) · sin(φ(n) + φ(n+1)) · sin φ(n+2) ]
  / [ sin(φ(n-2) + φ(n-1) + φ(n)) · sin(φ(n-1) + φ(n) + φ(n+1)) · sin(φ(n) + φ(n+1) + φ(n+2)) ] =: (∗)
 ()
Recalling the asymptotical decomposition of sin x when x  0 :
sin x  x  O( x3 )
and utilizing
lim  (n  j )  0  (n  j )  O( (n))  (n)  O( (n  j ))
n 
we get
() 
 (n  2)  O  ( (n))3 
  (n  2)   (n  1)   (n)    O



( (n))3 
  (n)   (n  1)    O ( (n)))

  (n  1)   (n)   (n  1)    O ( (n))



3





3


  (n  1)   (n  2)    O ( (n))

  (n)   (n  1)   (n  2)    O ( (n))



3






3


 (n  2)
 (n)   (n  1)

 (n  2)   (n  1)   (n)  (n  1)   (n)   (n  1)

 (n  1)   (n  2)


 1  O  ( (n)) 2    ()
 (n)   (n  1)   (n  2)
Using the previous decompositions, we have

(∗∗) = 4/27 + 1/(9n²) + 1/(9(n ln n)²) + 1/(9(n ln n ln₂ n)²) + O(ε²/(n ln n ln₂ n)²).

Finalizing our decompositions, we see that the right-hand side R of (7) satisfies

R = 4/27 + 1/(9n²) + 1/(9(n ln n)²) + 1/(9(n ln n ln₂ n)²) + O(ε²/(n ln n ln₂ n)²).
It is easy to see that inequality (7) becomes

p(n) ≥ 4/27 + 1/(9n²) + 1/(9(n ln n)²) + 1/(9(n ln n ln₂ n)²) + O(ε²/(n ln n ln₂ n)²)    (8)

and will be valid if (see (4))

4/27 + 1/(9n²) + 1/(9(n ln n)²) + θ/(9(n ln n ln₂ n)²) ≥ 4/27 + 1/(9n²) + 1/(9(n ln n)²) + 1/(9(n ln n ln₂ n)²) + O(ε²/(n ln n ln₂ n)²)

or

θ ≥ 1 + O(ε²)    (9)

for n → ∞. If n ≥ n₀, where n₀ is sufficiently large, then (9) holds for sufficiently small ε ∈ (0, ε₀] with ε₀
fixed, because θ > 1. Consequently, (8) is satisfied and the assumption (7) of Lemma 1.3 holds for n ∈ Z_{n₀}^∞.
Let q ≥ n₀ in Lemma 1.3 (in place of s) be fixed and let r > q + 2 be so large that inequalities (5) hold. This is always
possible since the series

Σ_{n=q+1}^∞ φ(n)

is divergent. Then Lemma 1.3 holds, and any solution of equation (1)
has at least one change of sign on Z_{q+2}^{r}. Obviously, inequalities (5) can be satisfied for another couple
(q, r), say (q₁, r₁) with q₁ > r and r₁ > q₁ + 2 sufficiently large, and by Lemma 1.3 any solution of
equation (1) has at least one change of sign on Z_{q₁+2}^{r₁}. Continuing this process, we get a sequence of intervals
(q_n, r_n) with lim_{n→∞} q_n = ∞ such that any solution of equation (1) has at least one change of sign on Z_{q_n+2}^{r_n}.
This concludes the proof.
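The role of the critical constant 4/27 in the theorem can be seen in a quick numerical experiment. The constant coefficients and the initial data below are invented for illustration (the theorem itself concerns variable coefficients): a constant p slightly below 4/27 leaves the computed solution positive, while a constant p slightly above it forces sign changes.

```python
def simulate(p_const, steps=400):
    """Iterate x(n+1) = x(n) - p_const * x(n-2) from positive initial data
    and report whether the whole trajectory stayed positive."""
    x = [1.0, 2.0 / 3.0, 4.0 / 9.0]   # seed near the critical solution (2/3)**n
    for _ in range(steps):
        x.append(x[-1] - p_const * x[-3])
    return all(v > 0 for v in x)

print(simulate(4.0 / 27 - 0.01))   # below the critical value: stays positive
print(simulate(4.0 / 27 + 0.01))   # above the critical value: changes sign
```

The value 4/27 = 2²/3³ is exactly the coefficient at which the characteristic equation λ³ = λ² - p loses its positive real root.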
3. EXAMPLE
Let

Δx(n) = -(1/8) x(n-2).    (10)

In this case

p(n) = 1/8 < 4/27,

so all conditions of Theorem 1.2 hold and equation (10) has a positive solution. It is easy to verify that

x(n) = 1/2ⁿ

is such a solution.
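The verification that x(n) = 1/2ⁿ solves (10) is a one-line check per n; a quick numerical sketch (not part of the original paper):

```python
def x(n):
    """The claimed positive solution of (10)."""
    return 1.0 / 2 ** n

# Check Delta x(n) = -(1/8) x(n-2) for a range of n.
for n in range(2, 30):
    lhs = x(n + 1) - x(n)
    rhs = -(1.0 / 8.0) * x(n - 2)
    assert abs(lhs - rhs) < 1e-15
print("x(n) = 1/2**n satisfies (10)")
```

Both sides equal -1/2ⁿ⁺¹, so the identity is exact even in floating point (all quantities are powers of two).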
Let

Δx(n) = - ( 3(1 + 2(-2)ⁿ) / (16(4 + (-2)ⁿ)) ) x(n-2).    (11)

In this case

p(n) = 3(1 + 2(-2)ⁿ) / (16(4 + (-2)ⁿ)) ≥ 4/27 + 1/(9n²) + 1/(9(n ln n)²) + θ/(9(n ln n ln₂ n)²)

for a suitable θ > 1 and every n ∈ Z₉^∞. All conditions of Theorem 2.1 hold and all solutions of equation (11) are oscillating. One
of such solutions is:

x(n) = 1/4ⁿ + 1/(-2)ⁿ.
ACKNOWLEDGEMENT
The paper was supported by grants P201/11/0768 and P201/10/1032 of the Czech Grant Agency (Prague), by the
Council of the Czech Government MSM 0021630529 and MSM 0021630503, and by Grant FEKT-S-11-2-921 of the
Faculty of Electrical Engineering and Communication, BUT.
REFERENCES
[1] AGARWAL, R. P.; ZAFER, A. Oscillation criteria for second-order forced dynamic equations with mixed nonlinearities. Adv. Difference Equ., 2009, Article ID 938706, 20 p.
[2] BAŠTINEC, J.; DIBLÍK, J.; ŠMARDA, Z. Existence of positive solutions of discrete linear equations with a single delay. J. Difference Equ. Appl., 16, No 9, 2010, p. 1047 - 1056.
[3] BEREZANSKY, L.; BRAVERMAN, E. On existence of positive solutions for linear difference equations with several delays. Adv. Dyn. Syst. Appl., 1, 2006, p. 29 - 47.
[4] BEREZANSKY, L.; BRAVERMAN, E. Oscillation of a logistic difference equation with several delays. Adv. Difference Equ., 2006, Article ID 82143, 12 p.
[5] BOHNER, M.; KARPUZ, B.; ÖCALAN, Ö. Iterated oscillation criteria for delay dynamic equations of first order. Adv. Difference Equ., 2008, Article ID 458687, 12 p.
[6] CHATZARAKIS, G. E.; KOPLATADZE, R.; STAVROULAKIS, I. P. Oscillation criteria of first order linear difference equations with delay argument. Nonlinear Anal., 68, 2008, p. 994 - 1005.
[7] DIBLÍK, J.; KOKSCH, N. Positive solutions of the equation x'(t) = -c(t)x(t-τ) in the critical case. J. Math. Anal. Appl., 250, 2000, p. 635 - 659.
[8] DOMSHLAK, Y. Oscillation properties of discrete difference inequalities and equations: The new approach. Funct. Differ. Equ., 1, 1993, p. 60 - 82.
[9] DOMSHLAK, Y.; STAVROULAKIS, I. P. Oscillation of first-order delay differential equations in a critical case. Appl. Anal., 61, 1996, p. 359 - 371.
[10] DOROCIAKOVÁ, B.; OLACH, R. Existence of positive solutions of delay differential equations. Tatra Mt. Math. Publ., 43, 2009, p. 63 - 70.
[11] GYÖRI, I.; LADAS, G. Oscillation Theory of Delay Differential Equations. Clarendon Press : Oxford, 1991.
[12] HANUŠTIAKOVÁ, L.; OLACH, R. Nonoscillatory bounded solutions of neutral differential systems. Nonlinear Anal., 68, 2008, p. 1816 - 1824.
[13] KIKINA, L. K.; STAVROULAKIS, I. P. A survey on the oscillation of solutions of first order delay difference equations. Cubo, 7, No 2, 2005, p. 223 - 236.
[14] MEDINA, R.; PITUK, M. Nonoscillatory solutions of a second-order difference equation of Poincaré type. Appl. Math. Lett., 22, No 5, 2009, p. 679 - 683.
[15] STAVROULAKIS, I. P. Oscillation criteria for first order delay difference equations. Mediterr. J. Math., 1, No 2, 2004, p. 231 - 240.
ADDRESS
doc. RNDr. Jaromír BAŠTINEC, CSc.
Department of Mathematics
Faculty of Electrical Engineering and Communication
Brno University of Technology
Technická 8
616 00 Brno
Czech Republic
telephone: +420 541 143 222
email: [email protected]
prof. RNDr. Josef DIBLÍK, DrSc.
Brno University of Technology
Technická 8
616 00 Brno
Czech Republic
telephone: +420 541 143 155
email: [email protected], [email protected]
Mgr. Alena BAŠTINCOVÁ
Department of Mathematics
Faculty of Electrical Engineering and Communication
Brno University of Technology
Technická 8
616 00 Brno
Czech Republic
email: [email protected]
COUNTABLE EXTENSIONS OF THE GAUSSIAN COMPLEX PLANE DETERMINED
BY THE SIMPLEST QUADRATIC POLYNOMIAL
Jaromír Baštinec, Jan Chvalina
Brno University of Technology
Dedicated to the memory of our colleague and friend Josef Zapletal
Abstract: A certain modified problem motivated by Einstein's special relativity
theory - usually called the problem of realization of structures - is solved. In particular, it is shown that for
any topology on the Gaussian plane of all complex numbers the monoid of all continuous closed
complex functions and the centralizers of Douady-Hubbard quadratic polynomials are different. There
are also constructed various extensions of the complex plane allowing the above mentioned
realization for centralizers of the extended simplest quadratic function in the complex domain.
Key words and phrases. Gaussian plane of complex numbers, continuous closed complex functions,
Douady-Hubbard polynomials, topology on Gaussian plane.
It is a well-known fact that models based on quadratic functions belong to the most useful and simplest non-linear systems
yielding natural tools for the description of many laws of nature; let us recall, e.g., gravitation, the forces of
electrical charges, the energy of a moving body, the Einstein relationship between mass and energy (E = mc²),
electric output expressed through the intensity of a direct current and resistance, etc.
Moreover, recall that quadratic functions or quadratic polynomials belong to the basic content of mathematical
education at various types of schools. As mentioned above, these polynomials form the background of very
applicable non-linear models because they are deeply embedded in the physical principles of the real world.
On the other hand, there is an old classical problem, which occurred in a discussion between Cornelius J. Everett, John
von Neumann, Edward Teller and Stanislaw M. Ulam in the year 1948, whose core lies in Einstein's
special relativity theory. In general, the classical realization problem can be simply formulated in this way:
Given a concrete category C, a set X and a group G of permutations of the set X, does there exist an
object (X, …) ∈ C such that the automorphism group Aut(X, …) ≅ G? A certain important impulse came
from relativity theory in the form of the question whether it is possible to change locally euclidean topologies in
mathematical models of space-time by metrics, or more generally by topologies, under the supposition of
preservation of homeomorphism groups.
The above mentioned problem can be considered as a question in the sense of Felix Klein's "Erlangen Program"
and, clearly, it can be modified for any concrete category or for arbitrary pairs of concrete categories. A certain
specification of the above realization problem consists in the question under which
conditions one mathematical structure can be substituted by another one with the same carrier such that the actual
monoids of mappings carrying morphisms of the different categories coincide. More concretely, in the book [12] the
following problem is solved on pages 40-84:
f : X  X (or in another words a characterization of
a monounary algebra ( X  f ) ) such that there exists a quasi-ordering  on X with the property
C ( f )  SI ( X ) (or End( X  f )  SI ( X ) ), where C ( f )  {gX  X  g  f  f  g} is the
centralizer of f (i.e. the endomorphism monoid of the monounary algebra ( X  f ) ) within the full
transformation monoid of the set X and SI ( X ) is the monoid of all strongly isotone self maps of the quasiordered set ( X ) , i.e. such mappings f ( X )  ( X ) that for an arbitrary pair of elements x , y  X


we have f ( x)  y if and only if there is an element x  X with property x  x and f ( x ')  y . Denoting
Find a characterization of a set transformation
by [x)_ρ = {y ∈ X : x ρ y}, i.e. the principal end generated by the element x ∈ X, it can easily be shown that
f is a strongly isotone self-map of (X, ρ) if and only if f([x)_ρ) = [f(x))_ρ for any element x ∈ X. This
concept is motivated by investigations of Saul Aaron Kripke [35], and the answer to the above formulated
question is contained in the theorem presented below. It is to be noted that professor Kripke has made
fundamental contributions to a variety of areas of logic, and his name is attached to a corresponding variety of
objects and results.
Kripke semantics (also known as relational semantics or frame semantics) is a formal semantics for non-classical
logic systems created in the late 1950s and early 1960s by Saul A. Kripke. A Kripke frame or modal frame is
a pair ⟨W, R⟩, where W is a non-empty set and R is a binary relation on W. Elements of W are called
nodes or worlds, and the relation R is known as the accessibility relation. This is a binary relation between
possible worlds which has very powerful uses in the formal/theoretic aspects of modal logic. As in classical
model theory, there are methods for constructing a new Kripke model from other models. The natural
homomorphisms in Kripke semantics are called p-morphisms (or pseudo-epimorphisms, but the latter term is
rarely used). A p-morphism of Kripke frames ⟨W, R⟩ and ⟨W', R'⟩ is a mapping f: W → W' such that f
preserves the accessibility relation, i.e. xRy implies f(x)R'f(y), and whenever f(x)R'y' there is a node
y ∈ W such that xRy and f(y) = y'. Notice that p-morphisms are a special kind of so-called bisimulations
[35].
In the monograph [12], chapt. I, § 3, p-morphisms are called strongly isotone mappings or strong homomorphisms,
and such mappings can be characterized ([12], Proposition 3.3) as mappings satisfying the condition:
for any x ∈ W, there holds R(f(x)) = f(R(x)). In words: the f-image of the principal R-end generated
by the node x equals the R-end generated by the image f(x).
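The condition R(f(x)) = f(R(x)) is easy to test mechanically on finite frames. The two toy frames below (a two-element cycle collapsed onto a single reflexive world) are illustrative assumptions, chosen because such a collapse is a classic example of a p-morphism:

```python
def successors(R, x):
    """All R-successors of node x."""
    return {b for (a, b) in R if a == x}

def is_p_morphism(W1, R1, W2, R2, f):
    """f is a p-morphism iff R2(f(x)) = f(R1(x)) for every node x in W1."""
    return all(successors(R2, f[x]) == {f[y] for y in successors(R1, x)}
               for x in W1)

# A two-element cycle maps p-morphically onto a single reflexive world.
W1, R1 = {0, 1}, {(0, 1), (1, 0)}
W2, R2 = {'a'}, {('a', 'a')}
f = {0: 'a', 1: 'a'}
print(is_p_morphism(W1, R1, W2, R2, f))   # -> True
```

The set equality in the check is exactly the "image of the principal end equals the end of the image" characterization quoted from [12].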
Let us recall a realization theorem which is crucial for the main result of this contribution. By SI(A, p) we
denote the monoid of all strong endomorphisms of the quasi-ordered set (A, p), i.e. σ ∈ SI(A, p) whenever
σ(p(x)) = p(σ(x)) for any x ∈ A.
Let A ≠ ∅ be an infinite set and f: A → A be a mapping. Thus (A, f) is an infinite unar. It can be shown
easily (see [12], p. 23) that the relation ≈_f on A, defined by x ≈_f y if and only if there exists a pair of
nonnegative integers m, n ∈ N₀ such that f^m(x) = f^n(y), is an equivalence (called also the Kuratowski-Whyburn
equivalence in the literature). The classes of this equivalence are called orbits of f. The transformation
f is connected if it has only one orbit. The following notation is taken over from [12].
Let f have only one orbit. For the transformation f we introduce realization types, denoted ret(f) and defined
as follows.

The transformation f is of realization type
ret(f) = τ₁ if the set X has one element, i.e., (X, f) is a loop;
ret(f) = τ₂ if (X, f) is a two-element cyclic unar, i.e., X = {a, b}, f(a) = b, f(b) = a;
ret(f) = τ₃ if f is a constant mapping and card X ≥ 2;
ret(f) = τ₄ if (X, f) ≅ (Z, v), where v is a unary operation v: Z → Z defined as follows:
v(m) = m + 1 for each m ∈ Z;
ret(f) = τ₅ if f is an acyclic surjection which is not a bijection;
ret(f) = τ₆ if (f(X), f↾f(X)) ≅ (Z, v) and f(X) ≠ X;
ret(f) = τ₇ if (f(X), f↾f(X)) ≅ (N, v) and f⁻¹(f(x)) ⊆ X \ f(X) for any x ∈ X \ f(X).
If the transformation f is not of any of the types τ₁, …, τ₇, we put ret(f) = τ₀. Let the transformation f be not
connected and let (X, f) = Σ_{λ∈Λ} (X_λ, f_λ) be its orbital decomposition. Then we set ret(f) = Σ_{i=0}^{7} α_i τ_i, where α_i
is the cardinal number of all components (X_λ, f_λ) of the unar (X, f) for which ret(f_λ) = τ_i, or α_i = 0 if
{λ ∈ Λ : ret(f_λ) = τ_i} = ∅.
EXAMPLE 1
Let p: R → R, q: R → R be functions defined as follows: p(x) = 2x + 1, q(x) = x² for all x ∈ R. It is
easy to verify that ret(p) = τ₁ + 2^ℵ₀ τ₄ and ret(q) = τ₁ + τ₃ + 2^ℵ₀ τ₆ hold.
( A f ) we define a binary relation p f  A  A in this way: For x y  A there is [ x y ]  p f
whenever there exists a non-negative integer n  1 such that y  f ( x ) . It is easy to see that p is a reflexive
n
A , i.e., a quasi-ordering. Moreover the relation p f is antisymmetric, i.e. an ordering
on A , if and only if the transformation fA  A has at most one-element cycles.
and transitive relation on
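On a finite carrier, both the quasi-ordering p_f and the centralizer C(f) can be enumerated directly. The three-element unar below (a tail falling into a fixed point) is an invented example for illustration:

```python
from itertools import product

def quasi_order(f, A):
    """[x, y] is in p_f iff y = f^n(x) for some non-negative integer n."""
    rel = set()
    for x in A:
        y = x
        while (x, y) not in rel:
            rel.add((x, y))
            y = f[y]
    return rel

def centralizer(f, A):
    """All g: A -> A with g(f(x)) = f(g(x)) for every x (brute force)."""
    A = sorted(A)
    cent = []
    for values in product(A, repeat=len(A)):
        g = dict(zip(A, values))
        if all(g[f[x]] == f[g[x]] for x in A):
            cent.append(g)
    return cent

# A small unar on {0, 1, 2}: f(0) = 1, f(1) = 2, f(2) = 2 (tail into a loop).
f = {0: 1, 1: 2, 2: 2}
A = [0, 1, 2]
print(len(centralizer(f, A)))        # -> 3 for this particular unar
print((0, 2) in quasi_order(f, A))   # -> True, since 2 = f^2(0)
```

Here p_f is even an ordering, since the only cycle of f is the one-element loop at 2, in line with the antisymmetry criterion above.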
THEOREM 1
Let f: A → A be a mapping of the realization type ret(f) = Σ_{i=0}^{7} α_i τ_i. Let T be a tolerance relation on the
set N(8) = {1, 2, …, 7} which is a reflexive and symmetric cover of the binary relation

{[n, 1] : n ∈ N(8)} ∪ {[2, 3]} ∪ {[4, m] : m = 5, 6, 7}.

The following conditions are equivalent:
(i) The coefficient α₀ = 0 and for each pair of non-zero coefficients α_i, α_j from the combination
Σ_{i=0}^{7} α_i τ_i we have [i, j] ∈ T.
(ii) There holds End(A, f) = SI(A, p_f).
(iii) There exists a preorder p ⊆ A × A (i.e. a reflexive and transitive binary relation p on the set A)
such that End(A, f) = SI(A, p).
Following the above motivating ideas, we will consider a certain modification of the above realization problem
concerning quadratic complex polynomials. In detail, consider a quadratic polynomial ψ(z) = az² + bz + d.
Conjugating ψ by an appropriate linear function h(z) = αz + β, we get (h ∘ ψ ∘ h⁻¹)(z) = z² + c. So, from the
dynamical-systems point of view, quadratic polynomials form a one-parameter family q_c(z) = z² + c, which is
usually called the Douady-Hubbard family of quadratic polynomials, or briefly Douady-Hubbard polynomials.
Now, considering a monounary algebra, i.e. a unar, (C, q_c), then the centralizer

Z_C(q_c) = { f: C → C | f(q_c(z)) = q_c(f(z)), z ∈ C },

which is the monoid of all solutions of the functional equation

(f(z))² - f(z² + c) + c = 0, z ∈ C,
or the endomorphism monoid End(C, q_c) of the given unar (C, q_c), we are asking whether there exists
a topology τ on the Gaussian plane C such that the monoid of all continuous closed transformations
φ: (C, τ) → (C, τ) coincides with Z_C(q_c); eventually, whether there is a quasi-ordering (a reflexive and
transitive binary relation) ρ on C such that the monoid of all strongly isotone self-maps of (C, ρ) coincides
with Z_C(q_c). (In fact, we consider equality of monoids of functions carrying the corresponding morphisms.) Using
the above realization theorem (Theorem 1), we show without any effort that the answer is negative in general.
Firstly, recall some basic notions from iteration theory or - in the algebraic setting - from the theory of
monounary algebras (shortly, unars) mentioned above, which are ordered pairs (A, f), where A is a non-empty
set and f is a mapping of the set A into A. The above defined binary relation ≈_f is an equivalence on
A, and thus it determines a decomposition of A into blocks, which are called orbits of the mapping (or
function) f. Orbits endowed with the corresponding restriction of the mapping f are called components of the
monounary algebra (A, f). A subalgebra of the algebra (A, f) is a pair (B, g), where
∅ ≠ B ⊆ A, f(B) ⊆ B and g = f↾B. Instead of subalgebra we use the term subunar. A monounary
algebra - a unar (A, f) - is said to be disconnected if it is not connected; a unar (A, f) is called connected if for
any pair a, b ∈ A there exist non-negative integers m, n ∈ N₀ such that f^m(a) = f^n(b). So, a component
of a unar is its maximal connected subunar. If {(A_i, f_i), i ∈ I} is the system of all components of the unar
(A, f), we write

(A, f) = Σ_{i∈I} (A_i, f_i),

and this formula is called the orbital decomposition of the unar (A, f) or of the transformation f: A → A.
( A f ) is said to be cyclic if it contains a subset
Ci  Ai  Ci  {a1… ak } such that f (a j )  a j 1 j  1 2… k  1 f (ak )  a1 . The number k is called
A component ( Ai  f i ) or an orbit Ai of the unar
the order (or the period) of the cycle Ci and its points a j are termed as periodic points. Cycles of the order 1
are called loops and its elements are fixed points of f  A  A .
Now consider the Douady-Hubbard family of unars (C, q_c), c ∈ C, c ≠ 0, i.e. q_c(z) = z² + c for z ∈ C. Solving the quadratic equation q_c(z) = z, i.e. z² − z + c = 0, we obtain that the polynomial q_c has exactly two fixed points
z₀₁ = 0.5(1 − √(1 − 4c)), z₀₂ = 0.5(1 + √(1 − 4c)),
which evidently coincide for c = 0.25. In this case the polynomial q_c, i.e. z ↦ z² + 0.25, has just one fixed point z₀ = 0.5.
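As a quick numerical illustration (our own sketch, not part of the original text; the function names below are ours), the two fixed points of q_c can be checked directly:

```python
import cmath

def qc(z, c):
    # The Douady-Hubbard quadratic polynomial q_c(z) = z^2 + c.
    return z * z + c

def fixed_points(c):
    # Roots of z^2 - z + c = 0, i.e. the fixed points of q_c.
    s = cmath.sqrt(1 - 4 * c)
    return 0.5 * (1 - s), 0.5 * (1 + s)

c = 0.3 + 0.2j
z1, z2 = fixed_points(c)
assert abs(qc(z1, c) - z1) < 1e-12 and abs(qc(z2, c) - z2) < 1e-12

# For c = 0.25 the two fixed points coincide at z0 = 0.5.
w1, w2 = fixed_points(0.25)
assert abs(w1 - 0.5) < 1e-12 and abs(w2 - 0.5) < 1e-12
```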
Recall that a closed mapping of a topological space is a mapping under which the image of a closed set is also closed. The monoid of all continuous closed self-mappings of a topological space (X, τ) will be denoted by Ccl(X, τ).
THEOREM 2
For any topology τ on the Gaussian plane C (i.e. on the field of all complex numbers) the monoid Ccl(C, τ) is different from every monoid Z_C(q_c), c ∈ C, c ≠ 0 (i.e. from the solution monoids of the equations f(z² + c) = (f(z))² + c, c ∈ C, c ≠ 0).
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
106
PROOF
Let (C, q_c) = Σ_{i∈I} (K_i, q_i) be the orbital decomposition of the unar (C, q_c). There exists a countable subset I₁ ⊆ I such that any orbit K_i, i ∈ I₁, contains a cycle of order 2ⁿ for a suitable n ∈ N₀. That cycle is formed by periodic points which are roots (zero points) of a polynomial of degree 2ⁿ. Thus ret(K_i, q_i) = 0 for any i ∈ I₁; consequently, by Theorem 1, SI(C, ≤) ≠ Z_C(q_c) for any quasi-ordering ≤ on C. Assume
Z_C(q_c) = Ccl(C, τ) (1)
for some saturated (also called quasi-discrete, i.e. with a totally additive closure operator) topology τ on C. Since any topology τ (not only a saturated one) satisfying the equality (1) cannot be a T₁-topology (the only closed singletons are fixed points of q_c), the assignment z₁ ≤ z₂ whenever z₂ ∈ cl{z₁} can be used for every such topology τ. Then we get a quasi-ordering ≤ such that SI(C, ≤) = Z_C(q_c), a contradiction. Therefore Ccl(C, τ) ≠ Z_C(q_c) for any c ∈ C, c ≠ 0, and any saturated topology τ on C.
Now we turn our attention to the following special case of the Douady-Hubbard family, namely the concrete function q₀(z) = z². It is to be noted that professor Bodil Branner, in her talk entitled "Polynomial vector fields of one complex variable" at the conference dedicated to the memory of Adrien Douady, considered as one remarkable example just the polynomial q₀(z) = z² (with the double critical point 0). So, we consider the unar (C, q₀) with the centralizer Z_C(q₀) = {f: C → C : f(z²) = (f(z))², z ∈ C}. We ask whether it is possible to endow the complex plane C with a quasi-metric (i.e. a function d: C × C → R₀⁺ such that d(z, z) = 0 and d(z, u) ≤ d(z, v) + d(v, u) for all z, u, v ∈ C) generating a topology τ_d on C such that Z_C(q₀) = Ccl(C, τ_d).
First of all, we should explain the orbital structure of the unar (C, q₀). We show that the unar (C, q₀), i.e. its Cayley graph, does not contain cycles except the loops at the points z₀ = 0, z₁ = 1. Evidently, generating points of cycles consisting of more elements are fixed points of iterations of the function q₀ of order n ≥ 2. Suppose, on the contrary, that there exist numbers n ∈ N, n ≥ 2, z_n ∈ C with the property q₀ⁿ(z_n) = z_n, i.e. z_n^{2ⁿ} − z_n = 0. Under the supposition z_n ≠ 0 we get z_n^{2ⁿ−1} = 1, thus for a suitable integer k we have
z_n = cos(2kπ/(2ⁿ − 1)) + j sin(2kπ/(2ⁿ − 1)).
Asking whether there exists an integer m ≥ 2 such that q₀^m(z_n) = z_n, i.e. z_n^{2^m} = z_n, we obtain
2^m · 2kπ/(2ⁿ − 1) = 2kπ/(2ⁿ − 1),
hence 2^m = 1, thus m = 0. Since for m ≥ 2 this equality does not hold, we obtain that the unar (C, q₀) does not contain cycles except fixed points.
Denote by T₁ the set of all binary words over the alphabet {0, 1} of the form 11a₁a₂…aₙ, where a_k ∈ {0, 1}, k = 1, 2, …, n (e.g. 1111111001101…). Denote by Λ the empty word, also contained in T₁. Define a mapping σ₁: T₁ → T₁ in this way:
1 (11)  1 ()  1 (11a1a2 …an 0)  1 (11a1a2 …an1)  11a1a2 …an 
11a1a2 …an 0  T111a1a2 …an1  T1 which are different from the word 11. Further,
suppose m  Z is an arbitrary integer, r is an arbitrary real number from the open interval (01) , i.e.
0  r  1 . Denote
for any two words
U r  {[m r ] m  Z }
i.e.
U r  Z  {r} and define an ordering U r on the set U r in such a way that [m r ] U r [n r ] if m  n .
(U r U r )  ( Z ) .
Evidently
Within
the
terminology
of
monounary
algebras
we
have
1 ([m r ])  [m  1 r ] .
Further suppose, that for every pair
[m r ]  Z  {01} the poset (Vm r Vm r ) is a regulary branched rooted
{rm11 rm110 rm111… rm11a1a2 …an 0 rma1a2 …an1…} , hence words over the alphabet
{r m 01} .The ordering Vm r is defined in such a way that
tree of words
(Vm r Vm r )  (T1 ‚ {}1 )
for any pair
[m r ]  Z  (01) . We define
(Tm r Tm r )  (U m r U m r )  (Vm r Vm r )
for any pair
[m r ]  Z  (01) , where U m r  Vm r   , Vm r  Vn r   for m  n . Then
 m r  T  n r  whenever there exists an integer
p  N 0 with the property m  p  n , thus r  m r    m  1 r  for any pair  m r   Z   01 .
r  Tm r  Tm r
Further
is a function such that r Tm r , where
m r
r  rm11a1a2 …an 0    r  rm11a1a2 …an1  rm11a1a2 …an , r  rm11   m r  .
Putting
0  0   0 , then evidently
 C q0   T0 0   T11    Tr r  
r  01
Using Theorem 1 we get the following result similar to Theorem 2:
THEOREM 3
For any quasimetric d on the complex plane C we have
Ccl(C, τ_d) ≠ {f: C → C : f(q₀(z)) = q₀(f(z)), z ∈ C}.
REMARK 1
As in Theorem 2, the above Theorem 3 can be reformulated for a topology τ instead of a quasimetric d.
REMARK 2
Theorem 3 can be proved in a direct way, without use of Theorem 1, by the following consideration. Suppose, on the contrary, that there exists a quasimetric d on the set C with the property Ccl(C, τ_d) = Z_C(q₀). Put (C, q₀) = (K₀, σ₀) + (K₁, σ₁) + Σ_{t∈(0,1)} (K_t, σ_t), where (K_s, σ_s) ≅ (T_s, σ_s) for any index s ∈ ⟨0, 1⟩. The above equality describes the orbital structure of the unar (C, q₀) with orbits K_s ⊆ C. If z ∈ K₁ is an arbitrary number then we have
cl_d{z} ⊆ {q₀ⁿ(z) : n ∈ N₀}.
Indeed, cl_d{1} = {1} (the constant function on C with the value 1 commutes with the function q₀, thus according to the supposition it belongs to the monoid Ccl(C, τ_d)), and any subalgebra of (K₁, σ₁) of the form {…, j^{2ⁿ}, …, j², j, −1, 1} is a retract of the component (K₁, σ₁). In particular cl_d{−1} ⊆ {−1, 1}, thus the function h: C → C defined by the rule h(z) = 1 = h(1) for any z ∈ C \ K₁ and h(z) = −1 for any other z ∈ K₁ possesses this property: if X ⊆ C, 1 ∈ X ≠ ∅, then
h(cl_d X) ⊆ {1, −1} = cl_d{1, −1} = cl_d h(X),
h(cl_d{1}) = {1} = cl_d h({1}), h(cl_d ∅) = ∅ = cl_d h(∅),
thus h ∈ Ccl(C, τ_d). On the other hand
h(q₀(j)) = h(−1) = −1 ≠ 1 = q₀(h(j)),
therefore h ∉ Z_C(q₀), which contradicts the supposition.
It is to be noted that an analysis concerning the realization of the centralizer End(R, q) of the function q(x) = x², x ∈ R, is contained in [12], chapt. III, §1. Notice that the equation f(x²) = (f(x))² is a special case of the Böttcher equation f(g(x)) = (f(x))^γ, for g(x) = x² and γ = 2. Connections with other functional equations of one real variable can be found in the literature, cf. [12].
In more detail, in the mentioned monograph [12] the following theorem is proved ([12], Theorem 1.7, p. 97):
THEOREM 4
Denote p_{n,λ}(x) = xⁿ + λ, x ∈ R, where n ∈ N and λ ∈ R, λ ≠ 0. There exists a topology τ induced by a quasi-metric on the set R with the property End(R, p_{n,λ}) = Ccl(R, τ) if and only if the number n is odd.
A similar analysis for generalizations of the Douady-Hubbard polynomials of the form
p(z) = z² + c, z ∈ C, c ∈ C,
seems to be open up to now.
For obtaining a positive answer concerning the realization problem, analogous to the results contained in the papers [9], [10], several ways can be used. Two of them we present at the end of our contribution. One way consists in deleting a suitable subset of the Gaussian plane C, the other consists in a suitable extension of C by objects which, of course, are not numbers.
I. The first construction. Denote
R(1) = {1} ∪ {−1} ∪ {j^{2ⁿ} : n ∈ N₀} ∪ {−j^{2ⁿ} : n ∈ N₀}
and Ω₁ = C \ R(1). The set Ω₁ is evidently q₀-invariant, i.e. q₀(Ω₁) ⊆ Ω₁. Denoting by q₁ the restriction of the function q₀ onto Ω₁, we get that (Ω₁, q₁) is a subalgebra of the algebra (C, q₀) and the following theorem holds.
THEOREM 5
There exists a quasimetric ń on the set Ω₁ with the property
Ccl(Ω₁, ń) = {f: Ω₁ → Ω₁ : f ∘ q₁ = q₁ ∘ f}.
PROOF
Since q₁(0) = 0, q₁⁻¹(0) = {0}, and the restriction of the function q₁ onto an arbitrary other component K of the unar (Ω₁, q₁) is surjective, we have ret(q₁) ≠ 0. By Theorem 1 there exists a quasimetric (i.e. also a topology) on Ω₁ with the above described property.
II. The second construction (the extension of the field C). Denote
P = {Σ_{k=0}^{n} z^k : n ∈ N₀}
and C̃ = C ∪ P. We define a function F: C̃ → C̃ by the rules F(z) = z² = q₀(z) for any number z ∈ C \ {1},
F(1) = z + 1 and F(Σ_{k=0}^{n} z^k) = Σ_{k=0}^{n+1} z^k
for every polynomial Σ_{k=0}^{n} z^k ∈ P. Then ret(F) ≠ 0, therefore from Theorem 1 there follows immediately the following result:
THEOREM 6
There exists a quasimetric ρ on the set C̃ with the property
Ccl(C̃, ρ) = {f: C̃ → C̃ : F ∘ f = f ∘ F}.
REMARK 3
The above constructed extension, with the property (P, F|_P) ≅ (N, σ) (this is the Peano algebra with the unary operation σ, called the successor operation, σ(n) = n + 1), can be substituted by an analogous extension formed by quaternions, e.g. K = {1 + i + j + k, 2 + i + j + k, …, n + i + j + k, …}. Then the set C ∪ K with a function G: C ∪ K → C ∪ K defined by G(z) = q₀(z) for any z ∈ C \ {1}, G(1) = 1 + i + j + k, G(n + i + j + k) = n + 1 + i + j + k, n ∈ N, is the required extension.
III. The third construction (the second extension of the field C). Denote
S = {φ_n : φ_n: C → C, φ_n(z) = (z + 1)^{2ⁿ}, n = 0, 1, 2, …}, i.e.
φ_n(z) = Σ_{k=0}^{2ⁿ} (2ⁿ choose k) z^k, z ∈ C, n = 0, 1, 2, ….
Further put σ(z) = z² = q₀(z) for z ∈ C \ {1}, σ(1) = φ₀, i.e. φ₀(z) = z + 1, and σ(φ_n) = (φ_n)² = φ_{n+1}, which means
(Σ_{k=0}^{2ⁿ} (2ⁿ choose k) z^k)² = Σ_{k=0}^{2^{n+1}} (2^{n+1} choose k) z^k.
Then also ret(σ) ≠ 0.
The other monounary algebra serving for the extension of the algebra (C, q₀) is the algebra (S′, s) of polynomials. Here S′ = {p_n : n ∈ N}, where p_n(z) = zⁿ + 1 and s(p_n) = p_{n+1}. Further, denoting Ω₂ = C ∪ S′ and q₂(z) = q₀(z) for any z ∈ C \ {1}, q₂(1) = z + 1, q₂(p_n) = s(p_n), we obtain the corresponding extension of the unar (C, q₀).
Using methods described in [12], chapter III, we can prove that there exist at least countably many topologies τ_n on the set C̃ such that
Ccl(C̃, τ_n) = {f: C̃ → C̃ : F ∘ f = f ∘ F}.
In the form of a hypothesis we describe these topologies τ_n: denote by cl_n: exp(C̃) → exp(C̃) the corresponding closure operator. Then for any subset X ⊆ C̃ we define
cl_n(X) = X ∪ F^n(X), n ∈ N, n ≥ 2,
where F^n is the n-th iteration of the mapping F: C̃ → C̃.
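The hypothetical closure cl_n(X) = X ∪ F^n(X) is easy to compute for a self-map of a set; the following sketch is our own illustration, using q₀ in place of F:

```python
def cl_n(X, f, n):
    # cl_n(X) = X together with the image of X under the n-th iterate of f.
    Y = set(X)
    for x in X:
        y = x
        for _ in range(n):
            y = f(y)
        Y.add(y)
    return Y

q0 = lambda z: z * z
assert cl_n({2}, q0, 2) == {2, 16}      # q0^2(2) = (2^2)^2 = 16
assert cl_n({0, 1}, q0, 3) == {0, 1}    # sets of fixed points stay closed
```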
It is to be noted that the literature in the references devoted to iteration theory and functional equations comprises items [2]-[10], [12], [15], [18], [25], [26], [31]; the basic theory of monounary algebras, in particular problems connected with constructions of their homomorphisms, is contained in [12], [27], [28]. Titles devoted to complex polynomials, including the classical papers of A. Douady and J. H. Hubbard, are [21]-[23], [30], [31], and a modification of the realization problem can be found in [9], [10], [12], [18].
ACKNOWLEDGEMENT
This investigation was supported by the Council of Czech Government MSM 0021630529 and by Grant FEKTS-11-2-921 of Faculty of Electrical Engineering and Communication, BUT.
REFERENCES
[1] BAŠTINEC, J.; CHVALINA, J.; CHVALINOVÁ, L. On a topologization of the Gaussian complex plane with respect to centralizers of Douady-Hubbard polynomials. XXVIII International Colloquium on the Management of Educational Process. UNOB : Brno, 2010, p. 32-41. ISBN 978-80-7231-733-2.
[2] BAŠTINEC, J.; CHVALINA, J.; NOVÁK, M. Solving certain centralizer functional equations of one variable with quadratic kernels. APLIMAT 2007, 6th International Conference, Part II, STU : Bratislava, p. 71-78. ISBN 978-80-969562-5-8.
[3] BAŠTINEC, J.; CHVALINA, J.; NOVOTNÁ, J.; NOVÁK, M. On a group of normed quadratic functions and solving certain centralizer functional equations. 7th International Conference APLIMAT 2008, STU : Bratislava, p. 73-80. ISBN 978-80-89313-03-7.
[4] BERÁNEK, J.; CHVALINA, J. O iteračních odmocninách kvadratické funkce. Sborník prací pedagogické fakulty UJEP v Brně, UJEP : Brno, Svazek 104, 1989, p. 7-19. ISBN 80-210-0136-4.
[5] BERÁNEK, J. Iterativní teorie funkcí v úlohách. Sborník prací PdF MU, Řada M-F, No 4, Brno, 2000, p. 3-19.
[6] BERÁNEK, J.; CHVALINA, J. Iterativní teorie funkcí a školská matematika. Acta Fac. Ped. Univ. Tyrnaviensis, Ser. C, 2002, No 6, p. 5-10.
[7] BERÁNEK, J. Funkcionální rovnice. Brno : Masarykova univerzita, 2004, 74 p. ISBN 80-210-3422-X.
[8] BINTEROVÁ, H.; CHVALINA, J.; CHVALINOVÁ, L. Discrete quadratic dynamical systems and conjugacy of their generating functions. Proc. 5th International Conf. APLIMAT. Bratislava : STU, Faculty of Mechanical Engineering, 2004, p. 283-288.
[9] CHVALINA, J.; MATOUŠKOVÁ, K. O jedné vlastnosti nejjednodušší kvadratické funkce I. Sborník prací pedagogické fakulty UJEP v Brně, Svazek 104, 1989, p. 55-72. ISBN 80-210-0136-4.
[10] CHVALINA, J.; MATOUŠKOVÁ, K. O jedné vlastnosti nejjednodušší kvadratické funkce II. Sborník prací pedagogické fakulty MU v Brně, Svazek 121, 1991, p. 79-87. ISBN 80-210-0350-2.
[11] CHVALINA, J. Diskrétní orbitální struktura zobrazení a funkcí. Proc. 8th Brno Conference on Teaching of Math. 1992 (Discrete Mathematics). Brno, 1992, p. 23-28.
[12] CHVALINA, J. Functional Graphs, Quasiordered Sets and Commutative Hypergroups. Brno : MU Brno, 1995, 205 p. ISBN 80-210-1148-3.
[13] CHVALINA, J.; MOUČKA, J. Automata in connection with elementary functions and set mappings. XVIII. Mezinárodní kolokvium o řízení osvojovacího procesu. Vyškov : VVŠ PV, 2000, p. 111-117.
[14] CHVALINA, J.; MOUČKA, J. Kripke-type compatibility of transformations of general sets of alternatives with cognitive and incomplete preferences. Proc. XXVII International Colloquium of the Management of Educational Process. Univ. of Defence : Brno, 2009, p. 97-103. ISBN 978-80-7231-650-2.
[15] CHVALINA, J.; CHVALINOVÁ, L.; FUCHS, E. Discrete analysis of a certain parametrized family of quadratic functions based on conjugacy of those. Proc. of the 6th International Conference The Decidable and the Undecidable in Mathematics Education. Brno : Masaryk University, The Hong-Kong Institute of Education, 2003, p. 43-48.
[16] CHVALINA, J.; CHVALINOVÁ, L.; NOVÁK, M. On a certain tree of two-dimensional function spaces growing from a linear space of continuous functions. Proc. XXVII International Colloquium of the Management of Educational Process. Univ. of Defence : Brno, 2009, p. 86-96. ISBN 978-80-7231-650-2.
[17] CHVALINA, J.; CHVALINOVÁ, L. Realizability of the endomorphism monoid of a semi-cascade formed by solution spaces of linear ordinary n-th order differential equations. APLIMAT, Journ. Appl. Mathematics, Vol. III, No. 2, 2010, p. 211-223. ISSN 1337-6365.
[18] CHVALINA, J.; NOVÁK, M. On isomorphism of multiautomata determined by certain quadratic functions. 4th Conf. on Mathematics and Physics at Tech. Universities. Proc. Univ. of Defence : Brno, 2005, p. 67-76, CD-ROM.
[19] CHVALINA, J.; NOVÁK, M. Minimal countable extension of certain cascade with the ordered phase-set and strongly isotone endomorphisms. Sixth Conf. on Mathematics and Physics at Tech. Universities. Proc. Part I, Univ. of Defence : Brno, 2009, p. 133-138. ISBN 978-80-7231-667-0.
[20] CHVALINA, J.; BAŠTINEC, J. Akce monoidu nezáporných celých čísel prostřednictvím některých kvadratických funkcí. DIDZA 2006, p. 1-8. ISBN 80-8070-556-9.
[21] CHVALINA, J.; BAŠTINEC, J. O nejjednodušší kvadratické funkci v komplexním oboru. DIDZA, 2009, p. 1-11. ISBN 978-80-554-0049-5.
[22] DOUADY, A.; HUBBARD, J. H. Itération des polynômes quadratiques complexes. Comptes Rendus Acad. Sci. Paris, Vol. 294, 1982, p. 123-126.
[23] DOUADY, A.; HUBBARD, J. H. On the dynamics of polynomial-like mappings. Annales scientifiques de l'É.N.S., 4e série, tome 18, No. 2, 1985, p. 287-343.
[24] HOLMGREN, R. A. A First Course in Discrete Dynamical Systems. Springer-Verlag : New York - Berlin - Heidelberg, 1996.
[25] NEUMAN, F. Funkcionální rovnice. Matematický seminář, SNTL : Praha, 1986.
[26] NEUMAN, F. Algebraic aspects of transformations with an application to differential equations. Nonlinear Analysis, 2000, p. 505-511. ISSN 0362-546X.
[27] NOVOTNÝ, M. Mono-unary algebras in the work of Czechoslovak mathematicians. Arch. Math. : Brno 26, 1990, p. 155-164.
[28] NOVOTNÝ, M. Über Abbildungen von Mengen. Pacific J. Math. 13, 1963, p. 1359-1369.
[29] SION, M.; ZELMER, G. On quasi-metrizability. Canad. J. Math. 19, 1967, p. 1243-1249.
[30] TARGONSKI, G. Topics in Iteration Theory. Vandenhoeck et Ruprecht : Göttingen - Zürich, 1981.
[31] WILSON, W. A. On quasi-metric spaces. Amer. J. Math. 53, 1931, p. 657-684.
[32] YUNPING, J. Local connectivity of the Mandelbrot set at certain infinitely renormalizable points. Complex Dynamics and Related Topics, New Studies in Advanced Mathematics, 2004, The Internat. Press, p. 236-264.
[33] Complex quadratic polynomial. Wikipedia, the free encyclopedia. [online] Available from: http://en.wikipedia.org/wiki/Complex_quadratic_polynomial.
[34] Mandelbrot set. Wikipedia, the free encyclopedia. [online] Available from: http://en.wikipedia.org/wiki/Mandelbrot_set.
[35] Kripke semantics, Kripke structure. Wikipedia, the free encyclopedia. [online] Available from: http://en.wikipedia.org/wiki/Kripke_semantics_Kripke_structure.
[36] ZAPLETAL, J. Podpůrné metody rozhodovacích procesů. Ekonomicko-správní fakulta MU : Brno, 1998, 164 p. ISBN 80-210-1943-3.
[37] ZMEŠKAL, O.; VALA, M.; WEITER, M.; ŠTEFKOVÁ, P. Fractal-cantorian geometry of space-time. Chaos, Solitons and Fractals, 42, 2009, p. 1878-1892.
ADDRESSES:
doc. RNDr. Jaromír Baštinec, CSc.
Department of Mathematics FEEC BUT
Technická 8, 616 00 Brno, Czech Republic
E-mail: [email protected]
Telephone: + 420 541 143 222
Prof. RNDr. Jan Chvalina, DrSc.
Department of Mathematics FEEC BUT
Technická 8, 616 00 Brno, Czech Republic
E-mail: [email protected]
Telephone: + 420 541 143 151
SOLUTIONS AND CONTROLLABILITY OF SYSTEMS OF DIFFERENTIAL EQUATIONS
WITH DELAY
Jaromír Baštinec, Ganna Piddubna
Brno University of Technology
Abstract: The paper describes the history of the investigation of systems of differential equations: the existence and stability of solutions, systems with delay, and the controllability of systems with delay.
Key words: system of differential equations, solution, stability, controllability.
1. INTRODUCTION
Individual results on functional-differential equations were obtained more than 250 years ago, but systematic development of the theory of such equations began only in the last 90 years. Since then thousands of articles and several books have been devoted to the study and application of functional-differential equations. However, almost all of these studies consider separate sections of the theory and its applications (the exception is the well-known book by L. E. Èlsgolts, a full introduction to the theory, whose second edition was published in 1971 in collaboration with S. B. Norkin [22]). There were no studies with a single point of view on the numerous problems of the theory of functional-differential equations until the book by J. Hale (1977) [21].
The interpretation of solutions of functional-differential equations
ẋ(t) = f(x(t), t) (1)
as integral curves in the space R × C by N. N. Krasovskii (1959) [11] served as such a single point of view. This interpretation is now widespread and has proved useful in many parts of the theory, particularly in the sections on the asymptotic behavior and periodicity of solutions. It clarified the functional structure of functional-differential equations of delayed and neutral type, revealed the deep analogy between the theory of such equations and the theory of ordinary differential equations, and showed the reasons for the no less deep differences between these theories.
A classic work on the theory of functional, integral and integro-differential equations is the book by the Italian scientist V. Volterra [7]. His book "The Theory of Functionals and of Integral and Integro-Differential Equations" was first released in Spanish in 1927; a significantly revised version was released in English in 1929. The last edition was released in the U.S. in 1959, and the book released in 1982 is a translation.
The biggest part of the results obtained during the 150 years before the works of V. Volterra were related to special properties of very narrow classes of equations. In his studies of "predator-prey" models and of visco-elasticity, V. Volterra arrived at some fairly general differential equations which include past states of the system:
ẋ(t) = f(x(t), x(t − τ), t), τ > 0. (2)
In addition, because of the close connection between these equations and specific physical systems, V. Volterra tried to introduce the concept of an energy function for these models. He then used the behavior of the energy function to study the asymptotic behavior of the system in the distant future.
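Equations of type (2) can be integrated numerically once an initial function on [−τ, 0] is prescribed; a minimal Euler sketch in the spirit of the method of steps (our own illustration, not taken from the paper):

```python
# Euler scheme for x'(t) = f(x(t), x(t - tau), t) with history x(t) = phi(t)
# on [-tau, 0]; the delayed value is read back from the stored grid.
def delay_euler(f, phi, tau, T, h):
    m = round(tau / h)                       # delay expressed in grid steps
    steps = round(T / h)
    xs = [phi(-k * h) for k in range(m, 0, -1)] + [phi(0.0)]
    for i in range(steps):
        t = i * h
        x_now = xs[-1]
        x_del = xs[-1 - m]                   # x(t - tau)
        xs.append(x_now + h * f(x_now, x_del, t))
    return xs[m:]                            # values at t = 0, h, ..., T

# Example: x'(t) = -x(t - 1), x(t) = 1 for t <= 0.
sol = delay_euler(lambda x, xd, t: -xd, lambda t: 1.0, tau=1.0, T=1.0, h=0.01)
# On [0, 1] the delayed value is the constant history, so x(t) = 1 - t there.
assert abs(sol[-1] - 0.0) < 1e-9
```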
In the late 1930s and early 1940s N. Minorsky (1942) very clearly pointed out the importance of considering the delay in the feedback mechanism in his works on stabilizing the course of a ship and the automatic control of its movement. Several books describing the current state of the subject were released at the end of the 1940s and the beginning of the 1950s. A. D. Myshkis [15] introduced a general class of equations with retarded arguments and laid the foundation for the general theory of linear systems. In 1972 he systematized these ideas in the book "Linear differential equations with retarded argument". Richard Bellman [4] showed in his monograph the broad applicability of equations that contain information about the past in such fields as economics and biology. He also presented
a well-constructed theory of linear equations with constant coefficients and the beginnings of stability theory. The most intensive development of these ideas is presented in the book by R. Bellman and K. Cooke [5], "Differential-difference equations" (1967). The book describes the theory of linear differential-difference equations with constant and variable coefficients:
ẋ(t) = f(x(t), ẋ(t), …, x⁽ⁿ⁾(t), x(t − τ₁), x(t − τ₂), …, x(t − τ_m), t), τᵢ > 0, i = 1, …, m. (3)
Considerable attention is paid to the asymptotic behavior of the solutions, as well as to the stability theory of linear and quasi-linear equations. Most of the results in this area belong to these authors. A large number of problems and examples from probability theory, economics, nuclear physics, etc. form an essential part of the book.
The book "Ordinary differential-difference equations" (1961) by the U.S. scientist E. Pinney [17] is devoted to differential-difference equations, otherwise known as equations with deviating argument. The focus of the book is on linear equations with constant coefficients, which are most often encountered in the theory of automatic control. The book also presents a new method, found by the author, for studying equations with small nonlinearities. In particular, this method is applied to the control theory of the Minorsky equation.
N. V. Azbelev, V. P. Maksimov, L. F. Rakhmatullina, "Introduction to the theory of functional differential equations" (1991) [1], and K. B. Sabitov, "Functional, differential and integral equations. Textbook for university students majoring in Applied Mathematics and Informatics and in Applied Mathematics and Computer Science" (2005) [20], are relatively new works on the theory. In the first, the authors try to treat as generally as possible subclasses of systems of differential, integro-differential and difference equations by an operator approach. The second presents purely functional, ordinary differential and integral equations, differential equations in partial derivatives, and classical methods of solving them.
2. DYNAMICAL SYSTEMS STABILITY
N. N. Krasovskii in his book on the theory of stability (1959) [11] introduced the theory of Lyapunov functionals, noting an important fact: some problems for such systems become more transparent and easier to solve if the motion is considered in a function space, even when the state variable is a finite-dimensional vector. The book discusses some problems of the stability theory of solutions of nonlinear systems of ordinary differential equations. The justification of the Lyapunov function method is adequately addressed and the existence of such functions is clarified. The possibility of applying the method to systems described by an apparatus different from ordinary differential equations is also proved.
B. P. Demidovich worked on the systematization of stability theory in this period. In his book "Lectures on the Mathematical Theory of Stability" (1967) [10] a framework of stability theory for ordinary differential equations and some related questions are stated. An appendix introduces the basics of the theory of almost periodic functions and their applications to differential equations.
The theory of stability was also researched in this period by Belarusian scientists. In 1970 the work of E. A. Barbashin "Lyapunov functions" [2] was published. The book outlines a course of lectures on the Lyapunov function method given at the Belorussian University named after Lenin. Emphasis is placed on methods of constructing Lyapunov functions for nonlinear systems. Methods for estimating the region of attraction and the solutions, for control time, and integrated quality control criteria are presented. Sufficient criteria for asymptotic stability in the large and absolute stability criteria are recounted. A large number of Lyapunov functions for nonlinear systems of second and third order are presented, and the case when the nonlinearity depends on two coordinates of points in phase space is examined. The problem of constructing vector Lyapunov functions for complex systems is also investigated.
In a narrower direction, the stability theory of differential equations was developed in this period by Yu. L. Daletskii and M. G. Krein. In 1970 they published the monograph "Stability of solutions of differential equations in Banach space" [9], which sets out the theory of higher Lyapunov exponents and general Bohl exponents for linear non-stationary and nearly linear equations; Lyapunov's second method and its interpretation in spaces with an indefinite metric; Floquet's theorem and the localization theorem for the spectrum of the monodromy operator; the expansion of the logarithm of an operator in a series; the theory of canonical equations with a periodic Hamiltonian; the central stability zone; Lyapunov's stability criteria and their various generalizations; Fuchs-Frobenius theory; exponential splitting of the solutions of non-stationary linear equations; exponential dichotomy; the theory of integral manifolds; the researches of Bohl, Bogoliubov et al.; and generalizations of the asymptotic methods of Birkhoff, Tamarkin et al. All these questions are studied for differential equations in Banach or Hilbert spaces.
In 1980 a group of Belgian scientists consisting of N. Rouche, P. Habets and M. Laloy wrote the monograph "Stability Theory by Liapunov's Direct Method" [19]. This work investigates the stability of solutions of ordinary differential equations by the direct method of Lyapunov. Much attention is paid to applications to various mechanical systems, nonlinear electrical circuits, and problems of mathematical economics. Along with the classical results the monograph presents a series of further issues, namely: stability with respect to some of the variables; theorems on the stability of equilibria and stationary motions and their converses; instability theorems based on the concepts of sector and expeller; classification of properties of solutions of differential equations (stability, attraction, boundedness, etc.); attraction for autonomous and nonautonomous differential equations; the comparison method; vector Lyapunov functions; and one-parameter families of Lyapunov functions.
The work of G. A. Leonov, "Chaotic dynamics and the classical theory of motion stability" (2006) [13], is a relatively recent contribution to the theory of stability of motion of dynamical systems.
3. DYNAMICAL SYSTEMS WITH DELAY
The future behavior of many processes in the world around us depends not only on their present state but is also significantly determined by their entire pre-history. Such systems occur in automatic control, economics, medicine, biology and other areas (examples can be found in R. K. Miller, "Nonlinear Volterra Integral Equations"). The mathematical description of these processes can be given with the help of equations with delay and of integral and integro-differential equations. A great contribution to the development of these directions was made by R. Bellman, S. M. V. Lunel, A. D. Myshkis and J. K. Hale.
Classical works in the field of differential equations with retarded argument are A. D. Myshkis, "Linear differential equations with retarded argument" (1972), and J. K. Hale, "Theory of Functional Differential Equations" (1984).
4. DYNAMICAL SYSTEMS OF NEUTRAL TYPE.
There is also a large number of applications in which the retarded argument enters not only through the state variable but also through its derivative. These are the so-called differential-difference equations of neutral type:
ẋ(t) = f(x(t), x(t − τ), ẋ(t − τ), t), τ > 0. (4)
Problems that lead to such equations are more difficult to find, although they often appear in studies of two or more oscillatory systems with links between them. R. Bellman, K. Cooke and O. P. Germanovich raised questions regarding systems of neutral type in their works, among them "Linear periodic equations of neutral type and their applications" (1986) and others.
The work "Linear periodic equation of neutral type and their applications" (1986) by O. P. Germanovich [8] is devoted to linear periodic equations of neutral type with a finite number of concentrated delays which are rationally commensurable with the period of the coefficients. The book examines the Floquet theory for such equations. A method for formulating sufficient conditions for the existence of Floquet solutions is also proposed. Application of this method allows us to obtain an asymptotic representation for Floquet solutions and their multipliers, to define limit points of the multipliers, and to establish some properties of the system of Floquet solutions. The approach developed in this work is illustrated by a differential-difference equation that describes wave phenomena in a long line with parametric conditions on the boundary.
5. OPTIMAL DYNAMIC SYSTEMS CONTROL
The problem of enforcing restrictions imposed on the motion of a dynamic system has long remained an important task for control theory and practice. The best-known approaches to solving this problem are based on the maximum principle and on Bellman's dynamic programming method. In these approaches one first of all seeks the optimal control, which, in addition to optimality, should also ensure the specified limits. However, an effective control of the system is not necessarily optimal, which points to a certain narrowness of these approaches. Moreover, the synthesis procedure is quite complex and ineffective for high-dimensional systems. Direct approaches to the synthesis of controls restricting the system motion are also known. Among these are methods of numerical synthesis (F. P. Vasiliev "Optimization Methods", R. F. Gabasov and F. M. Kirillova "Constructive methods of optimization" [31], R. P. Fedorenko "Approximate
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
117
solution of optimal control problems", E. Polak "Algorithms and consistent approximation"), methods based on Lyapunov functions (V. M. Kuntsevich, M. M. Lychak, V. I. Vorotnikov, V. V. Rumyantsev), and methods of inverse dynamics (D. P. Krutko, S. V. Taranenko, V. I. Vasyukov, S. A. Gorbatenko). The use of numerical approaches, despite their virtually unlimited applicability to a wide variety of classes of dynamical systems, depends on the construction of efficient approximate models, which is a rather complex problem in itself. In addition, the required solution-search procedure often leads to nonstandard or extremal problems of mixed algebraic inequalities, which have no effective solutions. Application of methods based on Lyapunov functions is tied to the problem of forming the Lyapunov function and solving the Lyapunov equations or inequalities. This problem is most easily solved for linear systems; in more general cases with fairly arbitrary constraints its solution involves considerable difficulties. The use of inverse dynamics methods faces serious difficulties due to the problem of choosing the desired motion, which must meet the given limitations.
Numerical dynamic programming procedures are based on Bellman's principle of optimality, which reads: "An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."
In the traditional Bellman principle of optimality, optimality is understood in the sense of an extreme value of a selected scalar criterion. However, many currently important problems cannot be reduced to a single-criterion formulation, so the generalization of Bellman's principle of optimality and of the numerical schemes of dynamic programming to a broader interpretation of the concept of optimality is on the agenda.
The main drawback of the approach that consists in the direct synthesis of the dynamic programming method, for example for the case of several criteria, is considered by some authors (D. A. Velichko, V. V. Sysoev) to be the issue of dimensionality and, consequently, the lack of computing resources. For example, in D. A. Velichko "Methods of multicriteria search for the best options of equipment and technology composition for the production line" it is shown theoretically that such an approach becomes ineffective when the number of criteria is more than three, due to an avalanche-like increase in the number of conflicting decisions. However, the Pareto set is rarely commensurable with the total number of options; although it is easy to think of an example of a process in which all possible trajectories are Pareto optimal, such examples do not occur often in real-world conditions. Therefore, the theoretical assessment of the difficulties arising in the case of exhaustive search is overstated, and the conclusions drawn apply only to particular cases.
6. EXAMPLE OF THE LAST RESULT
Let us have the control system of differential equations

\dot{x}(t) = A_0 x(t) + A_1 x(t - \tau) + B u(t),   (5)

where x(t) \in R^n, t \ge 0, \tau > 0, x(t) = \varphi(t) if -\tau \le t \le 0, A_0, A_1 are square commutative matrices, B is a constant matrix of dimension n \times n, and u(t) = (u_1(t), \ldots, u_n(t))^T is the control vector-function.
THEOREM
For the linear stationary system with delay (5) to be controllable it is necessary that the following conditions hold: t > (k - 1)\tau, k = 1, 2, \ldots, and \det S_k \ne 0, where

S_k = \left[ B;\; e^{-A_0 \tau} A_1 B;\; e^{-2 A_0 \tau} A_1^2 B;\; \ldots;\; e^{-(k-1) A_0 \tau} A_1^{k-1} B \right].
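As a numerical sketch of this condition, one can assemble S_k for a small example and check it. The matrices, the delay and the series-based matrix exponential below are illustrative choices, not taken from the paper, and the "det S_k ≠ 0" requirement is read here, as is usual for such rank-type tests, as S_k having full row rank.

```python
import numpy as np

def expm_taylor(M, terms=30):
    # simple Taylor-series matrix exponential, adequate for small matrices
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def S(k, A0, A1, B, tau):
    # S_k = [B; e^{-A0 tau} A1 B; ...; e^{-(k-1) A0 tau} A1^{k-1} B]
    E = expm_taylor(-A0 * tau)
    blocks = [np.linalg.matrix_power(E, j) @ np.linalg.matrix_power(A1, j) @ B
              for j in range(k)]
    return np.hstack(blocks)

# hypothetical data: A0 and A1 commute (A1 is the identity), B is singular
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
A1 = np.eye(2)
B = np.array([[0.0, 0.0], [1.0, 0.0]])

r1 = np.linalg.matrix_rank(S(1, A0, A1, B, tau=1.0))  # rank 1: B alone is not enough
r2 = np.linalg.matrix_rank(S(2, A0, A1, B, tau=1.0))  # rank 2: full row rank at k = 2
```

In this toy example the condition fails for k = 1 but holds for k = 2, illustrating why the theorem involves the whole family of matrices S_k.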
REFERENCES
[1] AZBELEV, N. V.; MAKSIMOV, V. P.; RAKHMATULINA, L. F. Introduction to theory of functional differential equations. Nauka. 1991. 278 p.
[2] BARBASHIN, E. A. Lyapunov functions. Nauka. 1970. 240 p.
[3] BEKLORYAN, L. A. Introduction to the theory of functional differential equations. Group approach: [monograph]. Faktorial Press. 2007. 288 p.
[4] BELLMAN, R. Stability theory of solutions of differential equations. Foreign Literature PH. 1954. 216 p.
[5] BELLMAN, R.; COOKE, K. L. Differential-difference equations. World. 1967. 548 p.
[6] BORISENKO, S. D.; KOSOLAPOV, V. N.; OBOLENSKII, A. U. Stability of processes with continuous and discrete perturbations. Naukova dumka. 1988. 198 p.
[7] VOLTERRA, V. The Theory of functionals, integral and integro-differential equations. Nauka. 1982. 304 p.
[8] GERMANOVICH, O. P. Linear periodic equation of neutral type and their applications. LGU. 1986. 106 p.
[9] DALETSKII, J. A.; KREIN, M. G. Stability of solutions of differential equations in Banach space. Nauka. 1970. 534 p.
[10] DEMIDOVICH, B. P. Lectures on the Mathematical Theory of Stability. Nauka. 1967. 472 p.
[11] KRASOVSKII, N. N. The theory of motion control. Linear systems. Nauka. 1968. 475 p.
[12] KURBATOV, V. G. Linear differential-difference equations. Voronezh. 1990. 168 p.
[13] LEONOV, G. A. Chaotic dynamics and the classical theory of motion stability: [monograph]. Izhevsk. 2006. 167 p.
[14] MITROPOLSKII, U. A. Differential equations with delay argument. Naukova dumka. 1977. 299 p.
[15] MYSHKIS, A. D. Linear differential equations with delay argument. Nauka. 1972. 352 p.
[16] NORKIN, S. B. Second order differential equations with delay argument. Some questions in the theory of vibrations of systems with delay. Nauka. 1965. 354 p.
[17] PINNEY, E. Ordinary differential-difference equations. Foreign Literature PH. 1961. 248 p.
[18] RAZUMIHIN, B. S. Stability of delay systems. Nauka. 1988. 112 p.
[19] ROUCHE, N.; HABETS, P.; LALOY, M. Direct method of Lyapunov in stability theory. World. 1980. 300 p.
[20] SABITOV, K. B. Functional, differential and integral equations. Textbook for university students majoring in "Applied Mathematics and Informatics" and "Applied Mathematics and Computer Science". High school. 2005. 702 p.
[21] HALE, J. Theory of Functional Differential Equations. World. 1984. 421 p.
[22] ÈLSGOLTS, L. E.; NORKIN, S. B. Introduction to the theory of differential equations with delay argument. Nauka. 1971. 296 p.
[23] BOICHUK, A.; DIBLÍK, J.; KHUSAINOV, D. Y.; RŮŽIČKOVÁ, M. Boundary Value Problems for Delay Differential Systems. Advances in Difference Equations, vol. 2010, Article ID 593834, 20 p., 2010. doi:10.1155/2010/593834.
[24] BOICHUK, A.; DIBLÍK, J.; KHUSAINOV, D.; RŮŽIČKOVÁ, M. Fredholm's boundary-value problems for differential systems with a single delay. Nonlinear Analysis, 72 (2010), p. 2251-2258. ISSN 0362-546X.
[25] DIBLÍK, J.; KHUSAINOV, D.; LUKÁČOVÁ, J.; RŮŽIČKOVÁ, M. Control of oscillating systems with a single delay. Advances in Difference Equations, Volume 2010, Article ID 108218, 15 p. doi:10.1155/2010/108218.
[26] DIBLÍK, J.; KHUSAINOV, D.; RŮŽIČKOVÁ, M. Controllability of linear discrete systems with constant coefficients and pure delay. SIAM Journal on Control and Optimization, 47, No. 3, 2008, p. 1140-1149. DOI: 10.1137/070689085. Available from: http://link.aip.org/link/?SJC/47/1140/1. ISSN Electronic: 1095-7138, ISSN Print: 0363-0129.
[27] KHUSAINOV, D.; IVANOV, A. F.; SHUKLIN, G. V. About a single representation of solutions of linear systems with delay. Differential Equations. 2005. Vol. 41, No. 7. p. 1001-1004.
[28] KHUSAINOV, D.; SHUKLIN, G. V. About relative controllability of systems with pure delay. Applied Mechanics. 2005. Vol. 41, No. 2. p. 118-130.
[29] KOLMANOVSKII, V. B.; NOSOV, V. R. Stability and periodical regimes of controlled systems with aftereffect. Nauka. 1981. 448 p.
[30] KOLMANOVSKII, V. B. About stability of nonlinear systems with delay. Mathematical Notes Ural University. 1970. p. 743-751.
[31] GABASOV, R. F.; KIRILLOVA, F. M. Qualitative theory of optimal processes. Moskva : Nauka. 1971. 508 p.
[32] KRASOVSKII, N. N. Inversion of theorems of Lyapunov's second method and stability problems in the first approximation. Applied Mathematics and Mechanics. 1956. p. 255-265.
[33] MARCHENKO, V. M. On controllability of linear systems with delay. ASR : USSR. 1977. p. 1083-1086.
ADDRESSES:
Doc. RNDr. Jaromír Baštinec, CSc.
Department of Mathematics
Faculty of Electrical Engineering and Communications
Brno University of Technology
Technická 8
616 00 Brno
Czech Republic
Tel. +420 541 143 222
email: [email protected]
Mgr. Ganna Piddubna
Department of Mathematics
Faculty of Electrical Engineering and Communications
Brno University of Technology
Technická 8
616 00 Brno
Czech Republic
Tel. +420 541 143 218
email: [email protected]
THE CALCULATION OF ENTROPY OF FINANCIAL TIME SERIES
Petr Dostál, Oldřich Kratochvíl
Brno University of Technology
European Polytechnic Institute, Ltd.
Abstract: A number of variations of the calculation of entropy have been developed to measure the regularity of time series. The article deals with the calculation of approximate entropy, originally used in physics. Such measures of time series can also be made in the financial branch. The article presents the method of calculation programmed in MATLAB. The case study is dedicated to the evaluation of indexes of eastern states, such as Poland, Hungary, Estonia, Lithuania, Slovakia and Czechia.
Keywords: Entropy, financial time series, program, MATLAB
1. INTRODUCTION
The calculation of approximate entropy can be used for the evaluation of time series, including economic and financial ones. The value of entropy is "high" (close to 1) when the time series is random. The value of entropy is "low" (close to 0) for a deterministic time series (for example a sinusoidal wave). This means that the entropy informs us about the level of determinism or randomness of a time series and thus the "stability" of the phenomena "producing" such time series, for example a stock market, a state economy etc. The calculation of entropy is described in literature [3, 4]. The article presents the method of calculation programmed in MATLAB. The case study is dedicated to the evaluation of indexes of eastern states, such as Poland, Hungary, Estonia, Lithuania, Slovakia and Czechia. Another possibility of measuring time series is the Hurst exponent, see lit. [1].
2. USED METHODS
The calculation is as follows. The parameters r and m are the filter (tolerance) and the length of the template vector, respectively. A time series with N observations X_n, n = 1, \ldots, N creates embedding vectors v(i), each made up of m consecutive values of X, v(i) = [X_i, \ldots, X_{i+m-1}]. For i, j = 1, \ldots, N - m + 1, each template vector v(i) is compared to all conditioning vectors v(j) (including itself) by computing the maximum-norm distance. According to the index k, the distance between the vector elements is measured sequentially according to the formula

d[v(i), v(j)] = \max_{k = 0, \ldots, m-1} |X_{i+k} - X_{j+k}|.
To quantify the regularity of a particular pattern, we count the relative frequency of distances between the template vector v(i) and all vectors v(j) that lie within the neighbourhood r. The formula is as follows

C_i^m(r) = \frac{1}{N - m + 1} \sum_{j=1}^{N - m + 1} \Theta\big(r - d[v(i), v(j)]\big),

where \Theta(\cdot) represents the binary Heaviside function: \Theta(\cdot) = 1 if the condition r > d[v(i), v(j)] is satisfied, else \Theta(\cdot) = 0. The average over i of the logarithm of the relative amount of vectors v(j) for which this condition holds is calculated according to the formula

\Phi^m(r) = \frac{1}{N - m + 1} \sum_{i=1}^{N - m + 1} \ln C_i^m(r).

Finally, the approximate entropy, built from these conditional probabilities (the minus sign ensures positivity), is calculated according to the formulas

Ep(r, m, N) = \Phi^m(r) - \Phi^{m+1}(r) \quad and \quad Ep(r, 0, N) = -\Phi^1(r).
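As a cross-check of these formulas, a compact NumPy sketch of the same computation (illustrative only; the paper's own implementation in MATLAB follows in Section 3) might look like:

```python
import numpy as np

def approx_entropy(x, m=2, r=0.2):
    # approximate entropy Ep(r, m, N) = Phi^m(r) - Phi^{m+1}(r);
    # here r is taken as a fraction of the series' standard deviation
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def phi(m):
        # embedding vectors v(i) = [X_i, ..., X_{i+m-1}]
        vecs = np.array([x[i:i + m] for i in range(n - m + 1)])
        # pairwise maximum-norm distances d[v(i), v(j)]
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        # relative frequencies C_i^m(r), then the average of their logs
        c = np.mean(d < tol, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

t = np.linspace(0, 8 * np.pi, 300)
ep_sine = approx_entropy(np.sin(t))                                    # deterministic: low
ep_noise = approx_entropy(np.random.default_rng(1).uniform(size=300))  # random: high
```

With these illustrative inputs the deterministic sine wave yields a much smaller value than the uniform noise, matching the interpretation of entropy given in the introduction.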
3. THE CALCULATION PROGRAMS
The fixed parameter is r. The program Entropy.m, written in MATLAB, does the calculation of the value of entropy. See Prog. 1.
clear all
% read the data matrix (one time series per column) from the Excel file
EN = xlsread('DostalEn', 'Data');
[ro, co] = size(EN);
colors = {'red', 'green', 'black', 'cyan', 'blue', 'yellow'};
for i = 1:6
    pp = EN(1:ro, i);            % analysed time series
    pot = EN(1:ro, i);
    subplot(2, 3, i);            % plot each series into its own subplot
    plot(pp, 'Color', colors{i});
    r = .5;                      % filter (tolerance) parameter
    [n, p] = size(pp);
    a = 0;
    b = 0;
    for k = 1:n
        e = pp(k, :);            % current template value
        d = (abs(e - pp) <= r);  % vectors within the tolerance r
        if p == 1
            cpp = d;
        else
            cpp = all(d');
        end
        pc = sum(cpp);           % number of matching templates
        a = a + log(pc);
        % matches that remain close in the next value as well
        pt = sum(abs(pot(cpp) - pot(k)) < r);
        b = b + log(pt);
    end
    ep(i) = (a - b) / n;         % approximate entropy of series i
end
ep
Prog.1 Program for calculation of entropy
4. CASE STUDY
The case study presents the calculation of the entropy of the time series of six indices: Hungary-Budapest (BUX price index), Lithuania-Riga (HSBC price index), Estonia-Tallinn (OMXT price index), Czechia-Prague (SE PX price index), Slovakia-Bratislava (SAX 16 price index) and Poland-Warsaw (General price index). See Fig. 1, from top left to bottom right. The data were taken from 25.2.2000 to 25.2.2010.
Fig. 1 Analysed time series
The results of the entropy calculation are as follows. See Res. 1.

ep =   Hungary: 0.0015   Lithuania: 0.0045   Estonia: 0.0043   Czechia: 0.0463   Slovakia: 0.0036   Poland: 0.0033

Res. 1 Values of calculated entropy
The case study ranks the indices from the most stable to the least stable one: Hungary, Poland, Slovakia, Estonia, Lithuania and Czechia. The results can help decision makers to solve the problem in which country to invest. Further research will be done to compare the results with the Hurst exponent.
5. CONCLUSION
The conclusion is as follows: approximate entropy measures the relative frequency with which blocks of length m that are close remain close in the next incremental comparison of length m + 1. Small values of the calculated entropy imply strong regularity, or determinism, in a time series; a large value of entropy implies substantial irregularity or randomness. Approximate entropy has also been found to be a robust measure of regularity when applied to time series that contain noise and outliers. A number of variations of the calculation of entropy have been developed, but the general principle of measuring the regularity of a time series is similar. The approximate entropy metric does not assume that only obvious patterns such as seasonality and trend are important, but reveals any form of regularity. Nor is it distorted when the series mean is close to zero.
The use of the calculation of approximate entropy will be further investigated for the support of decision making in which country to invest.
LITERATURE
[1] DOSTÁL, P. Pokročilé metody analýz a modelování v podnikatelství a veřejné správě. (The Advanced Methods of Analyses and Simulation in Business and Public Service - in Czech). Brno : CERM, 2008. ISBN 978-80-7204-605-8.
[2] DOSTÁL, P. Advanced Economic Analyses. Brno : CERM, s.r.o., 2008. 80 p. ISBN 978-80-214-3564-3.
[3] PINCUS, S.; SINGER, B. Randomness and Degrees of Irregularity. In: Proceedings of the National Academy of Sciences, 93, p. 2083-2088, 1996.
[4] CATT, P. Forecastability: Insight from Physics, Graphical Decomposition, and Information Theory. In: Foresight, The International Journal of Applied Forecasting. Issue 13, Spring 2009, p. 24-33.
[5] THE MATHWORKS. MATLAB - User's Guide. The MathWorks, Inc., 2008.
ADDRESSES:
Prof. Ing. Petr Dostál, CSc.
Brno University of Technology
Faculty of Business and Management
Department of Informatics
Kolejní 4
612 00 Brno
Tel.: +420 541 143714
Fax.: +420 541 142 692
Email: [email protected]
Ing. Oldřich Kratochvíl, h. prof., Ph.D. Dr.h.c., MBA
European Polytechnic Institute, Ltd.
Osvobození 699
686 04 Kunovice
Tel.: +420 572 548 035
Email: [email protected]
TO PERFORMANCE MODELING OF PARALLEL ALGORITHMS
Ivan Hanuliak 1, Peter Hanuliak 2
1 European Polytechnic Institute, Kunovice
2 Polytechnic Institute, Dubnica nad Vahom
Abstract: The use of parallel principles is the most effective way to increase the performance of applications (parallel algorithms). In this sense the paper is devoted to complex performance modelling of parallel algorithms. The paper therefore describes the developing steps of parallel algorithms and then summarises the basic concepts for complex performance modelling of parallel algorithms. Current trends in high performance computing (HPC) and grid computing (Grid) are to use networks of workstations (NOW) as a cheaper alternative to traditionally used massively parallel multiprocessors or supercomputers. In such parallel systems workstations are connected through widely used communication networks and cooperate to solve one large problem. Each workstation is treated as a processing element, as in a conventional multiprocessor system. To make the whole system appear to the applications as a single parallel computing engine (a virtual parallel system), run-time environments such as OpenMP, PVM (Parallel Virtual Machine), MPI (Message Passing Interface) and Java are often used to provide an extra layer of abstraction.
Keywords: parallel computer, parallel algorithm, performance modelling, complexity, isoefficiency
1. INTRODUCTION
Parallel organisations of processors, cores or independent computers [5] and the use of various forms of parallel processes [6, 18] in developing parallel algorithms are dominant nowadays. For example, in the area of technical equipment it is the speeding up of an individual processor's performance through the use of the pipeline principle in combination with enlarging its resource capacity and caches. In the field of programming tools, parallel support exists on two levels as well. One level is built by operating systems and supporting system programming tools. The other level is created by user development environments supporting the development of modular parallel algorithms. Such parallel support recently goes up to program modularity in the form of parallel objects based on OOP (Object Oriented Programming).
There has been an increasing interest in the use of networks (clusters) of workstations connected together by high speed networks for solving large computation-intensive problems. This trend is mainly driven by the cost effectiveness of such systems compared to massive multiprocessor systems with tightly coupled processors and memories (supercomputers). Parallel computing on a cluster of workstations connected by high speed networks has given rise to a range of hardware- and network-related issues on any given platform. Load balancing, inter-processor communication (IPC), and transport protocols for such machines are being widely studied [1, 3, 6]. With the availability of cheap personal computers, workstations and networking devices, the recent trend is to connect a number of such workstations to solve computation-intensive tasks in parallel on such clusters.
A network of workstations (NOW) [5, 6, 13] has become a widely accepted form of high performance computing (HPC). Each workstation in a NOW is treated similarly to a processing element in a multiprocessor system. However, workstations are far more powerful and flexible than the processing elements in conventional multiprocessors (supercomputers). To exploit the parallel processing capability of a NOW, an application algorithm must be parallelised. The way to do this for an application problem is its decomposition strategy. This belongs to the most important steps in developing a parallel algorithm [3, 7] and in its performance modelling and optimisation (effective parallel algorithm).
2. PARALLEL ALGORITHMS
The role of the programmer is, for the given parallel computer and the given application task (complex application problem), to develop an effective parallel algorithm. This task is more complicated in those cases in which the conditions for parallel activity must be created, that is, through dividing the input algorithm into many
mutually independent parts (decomposition strategy), which are named processes or threads. In general, the development of parallel algorithms includes the following activities [6, 15]:
- decomposition - the division of the application into a set of parallel processes and data
- mapping - the way how processes and data are distributed among the nodes of a parallel system
- inter-process communication (IPC) - the way of cooperation and synchronization
- tuning - performance optimisation of a developed parallel algorithm.
The most important step is to choose the best decomposition method for the given application problem [7]. To do this it is necessary to understand the concrete application problem, the data domain, the used algorithm and the flow of control in the given application. When designing a parallel program, the description of the high level algorithm must include, in addition to the design of a sequential program, the method you intend to use to break the application into processes or threads (decomposition strategy) and to distribute data to different nodes (mapping). The chosen decomposition method drives the rest of program development.
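The activities above can be sketched for a toy problem (summing a large array) in a few lines; the use of Python threads here is purely illustrative and stands in for any of the run-time environments mentioned in the abstract.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # decomposition: split the data domain into contiguous chunks
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # mapping + IPC: each chunk is handed to one worker,
    # partial results come back through the executor
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial = pool.map(sum, parts)
    # final reduction of the partial results
    return sum(partial)

total = parallel_sum(list(range(1000)))  # same result as the sequential sum
```

Even this trivial example exhibits the structure the text describes: the decomposition choice (contiguous chunks) determines both the mapping and the communication pattern.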
In the following we briefly review the techniques most commonly adopted for the evaluation of parallel systems and their metrics.
3. THE ROLE OF PERFORMANCE
Quantitative evaluation and modelling of hardware and software components of parallel systems are critical for
the delivery of high performance. Performance studies apply to initial design phases as well as to procurement,
tuning and capacity planning analysis. As performance cannot be expressed by quantities independent of the
system workload, the quantitative characterisation of resource demands of application and of their behaviour is
an important part of any performance evaluation study [4, 10]. Among the goals of parallel systems performance
analysis are to assess the performance of a system or a system component or an application, to investigate the
match between requirements and system architecture characteristics, to identify the features that have
a significant impact on the application execution time, to predict the performance of a particular application on
a given parallel system, to evaluate different structures of parallel applications [12]. In order to extend the
applicability of analytical techniques to the parallel processing domain, various enhancements have been
introduced to model phenomena such as simultaneous resource possession, fork and join mechanism, blocking
and synchronisation. Modelling techniques make it possible to model contention at both the hardware and software levels by combining approximate solutions and analytical methods. However, the complexity of parallel systems and algorithms limits the applicability of these techniques. Therefore, in spite of its computation and time requirements, simulation is extensively used, as it imposes no constraints on modelling.
Evaluating system performance via experimental measurements is a very useful alternative for parallel systems
and algorithms. Measurements can be gathered on existing systems by means of benchmark applications that aim
at stressing specific aspects of the parallel systems and algorithms. Even though benchmarks can be used in all
types of performance studies, their main field of application is competitive procurement and performance
assessment of existing systems and algorithms. Parallel benchmarks extend the traditional sequential ones by
providing a wider set of suites that exercise each system component with a targeted workload.
3.1. METHODS OF PERFORMANCE MODELLING
Several concepts have been developed to evaluate parallel algorithms. Tradeoffs among these performance factors are often encountered in real-life applications. For performance evaluation we can use the following methods:
- analytical
  o asymptotic analysis [2, 6, 9, 11]
  o application of queuing theory [6, 19]
  o Petri nets [6, 19]
- simulation methods [6, 14]
- experimental
  o benchmarks [6, 4]
  o direct performance measuring of parallel algorithms [8, 9, 12].
The Parkbench suite, especially oriented to message passing architectures, and the SPLASH suite for shared memory architectures are among the most commonly used benchmarks.
4. PERFORMANCE EVALUATION
4.1. PERFORMANCE CONCEPT
Let O(s, p) be the total number of unit operations performed by a p-processor system for size s of the computational problem and T(s, p) be the execution time in unit time steps. Assume T(s, 1) = O(s, 1) in a one-processor system (sequential system). The speedup factor is defined as

S(s, p) = \frac{T(s, 1)}{T(s, p)}.

It is a measure of the speedup obtained by a given algorithm when p processors are available for the given problem size s. Since S(s, p) \le p, we would like to design algorithms that achieve S(s, p) \approx p.
4.2. EFFICIENCY CONCEPT
The system efficiency for a p-processor system is defined by

E(s, p) = \frac{S(s, p)}{p} = \frac{T(s, 1)}{p \, T(s, p)}.

A value of E(s, p) approximately equal to 1 for some p indicates that such a parallel algorithm, using p processors, runs approximately p times faster than it does with one processor (sequential algorithm).
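For instance, both quantities can be computed directly from measured run times; the timing numbers below are purely illustrative:

```python
def speedup(t1, tp):
    # S(s, p) = T(s, 1) / T(s, p)
    return t1 / tp

def efficiency(t1, tp, p):
    # E(s, p) = S(s, p) / p
    return speedup(t1, tp) / p

t1 = 100.0          # hypothetical sequential time T(s, 1)
tp = 14.0           # hypothetical parallel time T(s, p) on p = 8 processors
S = speedup(t1, tp)        # about 7.14, below the ideal S = p = 8
E = efficiency(t1, tp, 8)  # about 0.89
```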
4.3. THE ISOEFFICIENCY CONCEPT
The workload w of an algorithm often grows in the order O(s), where s is the problem size. Thus, we denote the workload w = w(s) as a function of s. In parallel computing it is very useful to define an isoefficiency function relating the workload to the machine size p needed to obtain a fixed efficiency when implementing a parallel algorithm on a parallel system. Let h(s, p) be the total overhead involved in the algorithm implementation. This overhead is usually a function of both machine size and problem size. The workload w(s) corresponds to useful computations, while the overhead h(s, p) represents useless times attributed to parallelisation, synchronisation and communication delays. In general, the overheads increase with respect to increasing values of s and p. Thus the efficiency is always less than 1. The question hinges on the relative growth rates between w(s) and h(s, p). The efficiency of a parallel algorithm is thus defined as

E(s, p) = \frac{w(s)}{w(s) + h(s, p)}.
With a fixed problem size the efficiency decreases as p increases. The reason is that the overhead h(s, p) increases with p. With a fixed machine size, the overhead grows slower than the workload. Thus the efficiency increases with increasing problem size for a fixed-size machine. Therefore, one can expect to maintain a constant efficiency if the workload is allowed to grow properly with increasing machine size. For a given algorithm, the workload might need to grow polynomially or exponentially with respect to p in order to maintain a fixed efficiency. Different algorithms may require different workload growth rates to keep the efficiency from dropping as p is increased. The isoefficiency functions of common parallel algorithms are polynomial functions
of p, i. e. they are O(pk) for some k ≥1. The smaller the power of p in the isoefficiency function the more scalable
the parallel system. We can rewrite equation for efficiency E(s, p) as
E(s, p) = 1 / (1 + h(s, p) / w(s))
In order to maintain a constant E, the workload w(s) should grow in proportion to the overhead h(s,p). This leads
to the following relation
w(s) ≥ (E / (1 − E)) · h(s, p)
The factor C = E / (1 − E) is a constant for a fixed efficiency E. Thus we can define the isoefficiency function as
follows

fE(p) = C · h(s, p)
If the workload grows as fast as fE (p) then a constant efficiency can be maintained for a given parallel algorithm.
We illustrate the concepts of performance evaluation on the example of DFFT parallel algorithm.
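A minimal numeric sketch of the efficiency and isoefficiency relations above (the workload w(s) = s·log2 s and overhead h(s, p) = p·log2 p used here are hypothetical placeholders for illustration, not taken from the paper):

```python
import math

def efficiency(w, h):
    """E = w / (w + h): the fraction of total time spent on useful computation."""
    return w / (w + h)

def isoefficiency_workload(E, h):
    """Workload needed to hold efficiency E given overhead h: w = C*h, C = E/(1-E)."""
    return E / (1.0 - E) * h

# Hypothetical workload and overhead functions.
w = lambda s: s * math.log2(s)
h = lambda s, p: p * math.log2(p)

# With a fixed problem size, efficiency drops as p grows ...
E_few = efficiency(w(1024), h(1024, 4))
E_many = efficiency(w(1024), h(1024, 64))
assert E_many < E_few < 1.0

# ... but growing the workload in proportion to the overhead restores it.
target = 0.8
needed_w = isoefficiency_workload(target, h(1024, 64))
assert abs(efficiency(needed_w, h(1024, 64)) - target) < 1e-9
```
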
5. MODEL
Generally, a model is an abstraction of the system (Fig. 1). The functionality of the model reflects the level of
abstraction applied. That means, if we know all there is about the system and are willing to pay for the
complexity of building a true model, the role of abstraction is near nil. In practical cases we wish to abstract the
view we take of a system to simplify the complexity of the real system. We wish to build a model that focuses on
some basic elements of our interest and leaves the rest of the real system as only an interface, with no details
beyond proper inputs and outputs. The "real system" is the real process or system that we wish to model. In our
case it is the process of performance of parallel algorithms (PA) on the parallel computers used (SMP, NOW, Grid).
The basic conclusion is that a model is the modeller's subjective insight into the modelled real system. This
personal view defines what is important, what the purposes are, the details, boundaries, and so on. Therefore the
modeller must understand the system in order to guarantee useful features of the created model.
5.1. MODEL CONSTRUCTION
Generally, the development of a real model includes the following steps (Fig. 1):
• define the problem to be studied as well as the criteria for analysis
• define and/or refine the model of the system; this includes the development of abstractions into mathematical, logical or procedural relationships
• collect data input to the model; define the outside world and what must be fed to or taken from the model to "simulate" that world
• select a modelling tool and prepare and augment the model for tool implementation
• verify that the tool implementation is an accurate reflection of the model
• validate that the tool implementation provides the desired accuracy or correspondence with the real world system being modelled
• experiment with the model to obtain performance measures
• analyse the tool results
• use the findings to derive designs and improvements for the real world system.
Fig. 1: Flow diagram of model development (problem → problem description and analysis → graphical illustration → check of essential properties → real model → formalisation into a mathematical model → accuracy check → end, with model-improvement loops on failed checks)
6. PARALLEL ALGORITHMS
Results known to date in performance modelling on the classical parallel computers used worldwide, with shared
memory (supercomputers, SMP and SIMD systems) or distributed memory (cluster, NOW, Grid), mostly did not
consider the influence of the parallel computer architecture and communication overheads, supposing that they
are low in comparison to the latency of the executed complex calculations.
In this sense, modelling and performance analysis of parallel algorithms (PA) is reduced to the analysis of the
complexity of the computations themselves; that is, the control and communication overheads are not part of the
derived relations for the execution time T(s, p). The relation for isoefficiency then supposes that the dominant
influence on the overall complexity of the parallel algorithm h(s, p) is the calculation complexity. Such an
assumption has proved true for classical parallel computers (supercomputers, massively parallel shared-memory
and SIMD architectures). Mapping this assumption to the relation for asymptotic isoefficiency w(s) means that
w(s) ≈ max{Tcomp, h(s, p)} = max{Tcomp}
In contrast, for parallel algorithms (PA) on the currently used parallel computers based on NOW, SMP and Grid,
performance modelling requires a complex analysis of all parts of the overheads, namely:
• the architecture of the parallel computer
• the computations themselves (Tcomp)
• communication latency (Tcomm)
o start-up time
o data transmission
o routing
• parallelisation latency (Tpar)
• synchronisation latency (Tsyn).
Fig. 2: Relations among execution time components (processing time, communication time and total execution time as functions of the number of processors)
Taking into account all these kinds of overheads, the total parallel execution time is

T(s, p)complex = Tcomp + Tpar + Tcomm + Tsyn,
where Tcomp, Tpar, Tcomm and Tsyn denote the individual overheads for calculation, parallelisation,
communication and synchronisation respectively. The more important overhead parts build up the overhead
function h(s, p) used in the relation for isoefficiency, whose influence must in general be taken into account in
performance modelling of parallel algorithms. The generally nonlinear influence of h(s, p) can be dominant in
modelling parallel algorithm performance (Fig. 2). Then for asymptotic isoefficiency analysis
w(s) = max{Tcomp, h(s, p)},
where for the dominant parallel computers (NOW, Grid) the most important part of the overhead function
h(s, p) is Tcomm (communication overhead).
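The decomposition above can be sketched as follows (all component values are hypothetical placeholders for illustration):

```python
def total_parallel_time(t_comp, t_par, t_comm, t_syn):
    """T(s, p)_complex = Tcomp + Tpar + Tcomm + Tsyn."""
    return t_comp + t_par + t_comm + t_syn

# Hypothetical measured components for one algorithm on p processors.
t_comp, t_par, t_comm, t_syn = 10.0, 0.5, 3.0, 0.8
T = total_parallel_time(t_comp, t_par, t_comm, t_syn)
assert abs(T - 14.3) < 1e-9

# The overhead function h groups everything except useful computation;
# on NOW/Grid systems Tcomm typically dominates it.
h = t_par + t_comm + t_syn
assert max(t_par, t_comm, t_syn) == t_comm
```
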
7. EXPERIMENTAL COMPLEX PERFORMANCE EVALUATION
In NOW and Grid, as new forms of asynchronous parallel systems [16, 17, 20], we have to take into account all
aspects that are important for complex performance modelling according to the relation T(s, p)complex =
f(architecture, computation, parallelisation, communication, synchronisation). In such a case we can use the
following solution methods to get the complex performance:
• direct measurement – real measurement of T(s, p)complex, including its overhead components, for the developed PA
• analytic modelling – finding T(s, p)complex on the basis of closed analytical expressions for the individual overheads
• simulation – simulation modelling of concrete developed parallel algorithms on a concrete parallel system.
For the suggested direct measurement of complex performance evaluation of distributed parallel algorithms
(DPA) in a NOW, we used the structure according to Fig. 3.
8. SUMMARY
The paper deals with complex performance evaluation of parallel algorithms. Due to the dominant use of
parallel computers based on standard PCs (personal computers) in the form of NOW and Grid, there has been
great interest in performance modelling of parallel algorithms in order to achieve optimised (effective) parallel
algorithms. This paper therefore summarises the methods used for complex performance analysis, which can be
applied on all types of parallel computers (supercomputer, NOW, Grid). Although the use of NOW and Grid
parallel computers may be less effective for some parallel algorithms than the use of massively parallel
architectures (supercomputers), parallel computers based on NOW and Grid nowadays belong among the
dominant parallel computers.
Fig. 3: The measurement on the NOW (Ethernet network) – workstations WS1 (P166), WS2 (MMX233), WS3 (Celeron300) and WS4 (P180Pro) connected by Ethernet; calls for particular calculations are distributed to the workstations and the results returned
REFERENCES
[1] ANDREWS, G. R. Foundations of Multithreaded, Parallel and Distributed Programming. Addison Wesley, 664 p., 2000.
[2] ARORA, S.; BARAK, B. Computational Complexity – A Modern Approach. Cambridge University Press: Cambridge, 573 p., 2009.
[3] DASGUPTA, S.; PAPADIMITRIOU, CH. H.; VAZIRANI, U. Algorithms. McGraw-Hill, 336 p., 2006.
[4] FORTIER, P.; HOWARD, M. Computer System Performance Evaluation and Prediction. Digital Press, 544 p., 2003.
[5] HANULIAK, I. Parallel Architectures – Multiprocessors, Computer Networks. Book Centre: Žilina, 187 p., 1997.
[6] HANULIAK, I. Parallel Computers and Algorithms. ELFA: Košice, 327 p., 1999.
[7] HANULIAK, I. To the role of decomposition strategy in high parallel algorithms. In: Kybernetes, Vol. 29, No. 9/10, p. 1042-1057, 2000.
[8] HANULIAK, I.; HANULIAK, P. To performance evaluation of DPA in NOW. In: Proc. GCCP 2005, p. 30-39, November 29 – December 2, 2005, SAV Institute of Informatics: Bratislava.
[9] HANULIAK, I.; HANULIAK, P. Performance evaluation of iterative parallel algorithms. In: Kybernetes, Vol. 39, No. 1, p. 107-126, 2010.
[10] HANULIAK, I.; HANULIAK, J. To performance evaluation of distributed parallel algorithms. In: Kybernetes, Vol. 34, No. 9/10, West Yorkshire, p. 1633-1650, 2005.
[11] HOLUBEK, A. Performance prediction of parallel systems. In: ICSC 2010, EPI, Ltd.: Kunovice, 2010.
[12] HOLUBEK, A. Performance Prediction and Optimization of Factorization and Prime Numbers Algorithms. In: JSC 2010, Wien, 2010.
[13] KIRK, D. B.; HWU, W. W. Programming Massively Parallel Processors. Morgan Kaufmann, 280 p., 2010.
[14] KOSTIN, A.; ILUSHECHKINA, L. Modeling and Simulation of Distributed Systems. Imperial College Press, 440 p., 2010.
[15] KUMAR, V.; GRAMA, A.; GUPTA, A.; KARYPIS, G. Introduction to Parallel Computing. Addison Wesley, 856 p., 2001.
[16] KUMAR, A.; MANJUNATH, D.; KURI, J. Communication Networking. Morgan Kaufmann, 750 p., 2004.
[17] PATTERSON, D. A.; HENNESSY, J. L. Computer Organisation and Design. Morgan Kaufmann, 912 p., 2009.
[18] QUINN, M. J. Parallel Programming in C with MPI and OpenMP. McGraw-Hill, 544 p., 2004.
[19] STALLINGS, W. Computer Organisation and Architecture – Designing for Performance. Prentice Hall, 815 p., 2003.
[20] WILLIAMS, R. Computer System Architecture – A Networking Approach. Addison Wesley, 680 p., 2001.
ADDRESS
Prof. Ing. Ivan Hanuliak, CSc.
European Polytechnic Institute
Osvobozeni 699
686 04 Kunovice
[email protected]
Ing. Peter Hanuliak, PhD.
Polytechnic Institute
Dukelska stvrt 1404/613
018 41 Dubnica nad Vahom
[email protected]
OPTIMIZATION OF DATA STRUCTURES IN A PARALLEL PRIME NUMBER ALGORITHM
Andrej Holúbek
University of Žilina
Abstract: This article shows the importance of efficient data structures used in parallel prime number
algorithms. It focuses on the speed-up, efficiency and memory usage of data structures, and contains a
mathematical model of the efficiency of data structures used for parallel prime number algorithms.
Keywords: Parallel systems, efficiency, memory usage, data structures
1. INTRODUCTION
Optimization of parallel systems is becoming an ever wider area of research. One part of optimization is the use
of efficient data structures in parallel computing. This work focuses on using efficient data structures in the
Sieve of Eratosthenes algorithm.
2. DATA-STRUCTURES
We tested two alternative data structures for the Sieve of Eratosthenes algorithm: a list of integers [1] and an
array of Booleans [1]. We compare the advantages and disadvantages of both structures. The evaluation criteria
focused on memory usage, computation time and the complexity of the algorithm using each data structure.
3. LIST OF INTEGERS
First we tested the list of integers. A list is a data structure well known for its dynamic approach and simple
insertion and removal of elements. As the Sieve of Eratosthenes algorithm is based on removing elements, it was
an appropriate choice of data structure. At the beginning, the list of integers is initialized with all natural
numbers from 2 to N. Afterwards all elements divisible by the first prime (2) are removed from the list, then all
elements divisible by the next prime (3), and so on: for every prime k < √N all divisible elements are removed.
For every next prime, fewer elements have to be iterated. The complexity of the algorithm using a list of integers
is therefore based on the number of iterated list elements, as follows (1.1):
x = ∑_{k=1}^{π(√n)} Lk,   (1.1)

where Lk is the number of elements still remaining in the list when the multiples of the k-th prime pk are removed.
Figure 1 represents the data structure list, filled with natural numbers from 2 to N.

Fig. 1: Data-structure List (2 → 3 → 4 → … → N)
A disadvantage of the list data structure is its higher memory usage. Each element of the list, in this case an
integer, uses 4 bytes of memory, and the pointer to the next element is also stored in 4 bytes. The total memory
usage of this data structure is therefore (1.2):

x = (n − 2)·4 + (n − 2 − 1)·4
where (n − 2)·4 is the memory usage of the values of all natural numbers from 2 to n, and (n − 2 − 1)·4 is the
memory usage of the pointers of each element except the last one.
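A minimal sketch of the list-based sieve described above (Python's built-in list stands in for the linked list of integers; the function names are illustrative):

```python
def sieve_with_list(n):
    """Sieve of Eratosthenes by removing composites from a list of integers."""
    numbers = list(range(2, n + 1))     # all natural numbers from 2 to n
    k = 0
    while numbers[k] ** 2 <= n:         # for every prime p <= sqrt(n)
        p = numbers[k]
        # remove all remaining multiples of p (keeping p itself)
        numbers = [x for x in numbers if x == p or x % p != 0]
        k += 1
    return numbers

def list_memory_estimate(n):
    """Relation (1.2): 4-byte values plus 4-byte next-pointers."""
    return (n - 2) * 4 + (n - 2 - 1) * 4

assert sieve_with_list(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
assert list_memory_estimate(10) == 60
```
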
4. ARRAY OF BOOLEANS
The next data structure we tested was an array of Booleans [2]. An array is a static data structure, so insertion
and removal of elements are more involved operations [3]. The advantage of an array is its lower memory usage,
because it is not necessary to store element pointers. Also, for the Sieve of Eratosthenes algorithm it is not
necessary to store each natural number; instead only a tag is stored for each number, depending on its primality.
If the number is prime the tag stays 0, otherwise it is changed to 1. The implementation of the algorithm for this
data structure is as follows. At the beginning, an array of Booleans of size (n − 2) is initialized to value 0 for
each element. Afterwards all multiples of the first prime (2) are marked with tag value 1. The algorithm
continues for every prime k < √N, changing tags from 0 to 1 for all multiples of primes. For every prime p, n/p
elements will be marked. Based on the prime number theorem, there are π(n) prime numbers smaller than n.
Therefore (1.3) elements have to be marked as composite numbers [4].
x = ∑_{i=1}^{π(√n)} n / pi   (1.3)
Figure 2 represents the data structure array, filled with natural numbers from 2 to n, where composite numbers
are marked with tag 1 [5].

Fig. 2: Data-structure Array (indices 2, 3, 4, …, n with tag 0 for primes and tag 1 for composites)
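The array-based variant can be sketched as follows (a minimal illustration; the tag convention 0 = prime, 1 = composite follows the text above):

```python
def sieve_with_array(n):
    """Sieve of Eratosthenes over a Boolean tag array: 0 = prime, 1 = composite."""
    # tags[i] corresponds to the number i + 2, mirroring the paper's (n - 2)-sized array
    tags = [0] * (n - 1)
    p = 2
    while p * p <= n:
        if tags[p - 2] == 0:                  # p is still marked prime
            for multiple in range(p * p, n + 1, p):
                tags[multiple - 2] = 1        # mark every multiple as composite
        p += 1
    return [i + 2 for i, tag in enumerate(tags) if tag == 0]

assert sieve_with_array(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
# one byte per tag: far below the list's 4-byte values plus 4-byte pointers
```
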
5. COMPARISON OF DATA-STRUCTURES
Computation time, complexity and memory usage were chosen as evaluation criteria [6].
Figure 3 illustrates the computation time necessary to compute the sieve of Eratosthenes for increasing n. It is
obvious that managing elements in the list costs more time than changing tags in the array.
Fig. 3: Comparison of computation time for array and list
Figure 4 shows the complexity of the sieve of Eratosthenes depending on the chosen data structure. The
complexity for the list increases faster with increasing n.
Fig. 4: Comparison of complexity for array and list
Figure 5 illustrates the memory usage of both data structures. The memory usage of the list is higher because an
integer (4 bytes) is larger than a Boolean (1 byte) and the list additionally stores pointers (4 bytes) [7].
Fig. 5: Comparison of memory usage for array and list
6. CONCLUSION
As shown above, the array data structure is better suited for the sieve of Eratosthenes algorithm: it was better in
all criteria. Using an array instead of a list is therefore recommended for the Sieve of Eratosthenes prime
number algorithm.
REFERENCES
[1] DALE, N. B. C++ Plus Data Structures. Jones and Bartlett Publishers, 2007.
[2] MCMILLAN, M. Data Structures and Algorithms Using C#. Cambridge University Press: Cambridge, 2006.
[3] JUAN, Z. Distributed parallel data structure of a traffic network simulation based on object-oriented programming. In: Urban Transport XII: Urban Transport and the Environment in the 21st Century, p. 327-335, 2006.
[4] COJOCARU, A. C.; MURTY, M. R. An Introduction to Sieve Methods and Their Applications. Cambridge Univ. Press: Cambridge, 2005.
[5] FORD, W. H.; TOPP, W. R. Data Structures with Java. Pearson: Prentice Hall, 2005.
[6] HOLÚBEK, A. Modelovanie a optimalizácia paralelných algoritmov rozkladu veľkých čísiel na prvočísla. ŽU: Žilina, 2008.
[7] HATCHER, P. J.; QUINN, M. J. Data-Parallel Programming on MIMD Computers. MIT Press, 1991.
ADDRESS
Andrej Holúbek
University of Žilina
Univerzitná 8215/1
010 26 Žilina
PROGRAMMING METHODS
Filip Janovič, Dan Slováček
European Polytechnic Institute, Ltd.
Abstract: This paper describes various methods of programming, from which the direction and development of
programming techniques with the increasing complexity of algorithms becomes clear. It gives a brief overview
of the most widely used techniques.
Keywords: Programming, methods of programming, algorithms, techniques of programming
INTRODUCTION
A methodology is a standardized approach to solving problems.
Choosing the right method depends on the problem being solved and on the individual choice of the programmer.
However, the selection has a very large impact on the complexity of the resulting code, the process of creating
the application, and sometimes the speed of the program.
This text does not attempt an encyclopedic classification of the methods offered by programming languages, but
only presents their characteristics, advantages and disadvantages.
CLASSIFICATION OF METHODOLOGIES
Given the large number of programming languages, the following have been defined: supertypes of paradigms,
paradigms, and special types of methodologies.
Basic supertypes of paradigms. The basic supertypes are:
• Sequential programming – program development that takes into account calculations performed by one processor or process on one or more data sets
• Distributed programming – program development that takes into account calculations performed without the sharing of computing resources; there is one more task – communication and synchronization between systems
• Parallel or concurrent programming – program development that takes into account calculations performed by more than one processor, with synchronization and data sharing between processors, threads, fibers or processes
Distributed programming is a special case of parallel programming, but parallel programming need not be
distributed programming. Although a supertype may be associated with a specific system architecture, the
supertypes are only oriented toward creating applications for certain types of platforms. They do not generally
affect the type of methodology used, but often influence the choice of algorithm, the form of the code, or how to
achieve an optimal outcome on the target platform.
Basic types of paradigms. The basic types are:
• Imperative programming
• Functional programming
• Descriptive programming
• Logic programming
The main types of methodologies. The main types are:
• Linear programming
• Procedural programming
• State programming
• Functional programming
• Declarative programming
• Structured programming
• Object-oriented programming
• Aspect-oriented programming
• Logic programming
• Generic programming
• Extreme programming
• Event-driven programming
LINEAR PROGRAMMING
Linear programming is based on the assumption that the entire program is viewed as one contiguous block.
Program data may be in the same block, either as separate or distinct chunks. Execution of such a program is a
process that performs operations in succession from the beginning to the end of the block.
In this methodology there is no concept of a function, method or subroutine. Control can move through the
block only via instructions: conditional and unconditional jumps, often related to a conditional instruction or
loop, or a direct jump to a label. A characteristic feature of linear programming languages is support for
inserting labels into the code to mark places.
Linear programming is often used in assembly language, in programming microcomputers and industrial
control systems, as well as in older versions of Pascal and Basic. It was also supported by almost all
programming languages of the seventies.
Linear programming belongs to the imperative type of methodology.
Advantages:
• No need to use containers
• Similarity of the form of the object code and the source code
Disadvantages:
• Large code size
• Often a large number of labels, which reduces the readability of the code
• Difficulties in analyzing the source code, associated with the lack of dedicated logic-design constructs
• The inability to decompose the problem into smaller functions
PROCEDURAL PROGRAMMING
This methodology is based on the assumption that the problem should be decomposed and the program
developed so that a main program block forms the skeleton, determining the order of operations, while the
operations themselves are carried out by various sub-blocks in the form of procedures and functions. Because of
the closeness of procedural and linear programming, the two terms are often used interchangeably; linear as
well as structural instructions often occur in procedural programs.
Data can be included either in the program block or in a separate space, depending on the platform. Unlike
structured programming, data are not grouped. A characteristic feature is the large number of variables (global
and local) and the frequent use of tables.
Procedural programming is often used in assembly-language families and microcomputer programming, as
well as in analysis and computer programs created in general-purpose languages (e.g. Pascal, C). It should be
noted that this method is often used in scripting languages and languages of Web services, such as JavaScript
on the client side and PHP on the server side. Procedural programming belongs to the imperative type of
methodology.
Advantages:
• Ability to decompose the code
• Similarity of the form of the object code and the source code
Disadvantages:
• A large number of variables
• Separation of data from the operations on them
• Lack of orderly grouping of data
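A minimal procedural sketch (the names are illustrative): a main block fixes the order of operations, while the work is delegated to free-standing procedures that receive data through parameters:

```python
def read_values():
    """Sub-block: supply the input data (hard-coded here for illustration)."""
    return [4, 8, 15, 16, 23, 42]

def average(values):
    """Sub-block: one operation, separated from the data it works on."""
    return sum(values) / len(values)

def report(avg):
    """Sub-block: present the result."""
    return f"average = {avg}"

# Main block: the skeleton that determines the course of operations.
values = read_values()
avg = average(values)
assert avg == 18.0
assert report(avg) == "average = 18.0"
```
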
STATE PROGRAMMING
This method is rarely used in personal computer programming. However, it is a principal method in
programming for the electronics industry, including PLCs (programmable logic controllers). A program is
created not by writing code but by making a chart that describes the behavior of the system.
State programming can also take the form of a table describing the state of the output of an electronic system
on the basis of its inputs. Such programming is often used for PLD elements.
State languages are aimed at those who need to program a system for a specific type of job that is not
mathematically complex or functional. The system, interpreted as an element implementing the program, must
behave in a certain way, or take appropriate steps in response to its own state and its surroundings.
State programming is classified as a descriptive methodology.
Advantages:
• The system's behavior depends directly on its state
• Work is based on the idea of finite automata
• The programmer does not need to understand language commands
Disadvantages:
• Without the additional support of another language it is difficult to establish computational complexity
• Handling of text data
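The state-table idea can be sketched as a small finite automaton (a hypothetical two-state controller; the states and inputs are illustrative, not from the paper):

```python
# Transition table: (current_state, input) -> next_state, as on a PLC chart.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "fault"): "error",
    ("error", "reset"): "idle",
}

def step(state, event):
    """Look up the next state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "fault", "reset", "start"]:
    state = step(state, event)
assert state == "running"
```
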
FUNCTIONAL PROGRAMMING
This methodology is widespread among programmers whose approach to programming is based on
mathematical structure. The result is obtained from a function. This assumption leads to many nested structures
in which the result of one function is passed, without intermediate variables, by value or by reference as the
argument of the outer function. Recursion is used, in which a function calls itself, keeping the nested function
calls with their arguments.
This methodology is commonly used in Python, F#, Ruby and Nemerle. Most languages using a structured or
object-oriented methodology allow the use of this methodology, but such an approach is often used in mixed
code, which has gained popularity: structural-functional or object-functional.
Advantages:
• Construction based on mathematical functions
• Easy numerical verification
• Simple testing procedures
Disadvantages:
• Reduced use of local variables
• Heavy use of the call stack for nested function calls and recursion
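A small functional-style sketch of the nesting and recursion described above (illustrative):

```python
from functools import reduce

def factorial(n):
    """Recursion: the function calls itself instead of mutating a counter."""
    return 1 if n <= 1 else n * factorial(n - 1)

# Nested calls: the result of one function is the argument of the outer one,
# with no intermediate variables.
assert factorial(5) == 120
assert reduce(lambda acc, x: acc + x, map(factorial, [1, 2, 3])) == 9
```
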
STRUCTURED PROGRAMMING
This methodology is another step in the development of procedural programming. The program consists of a
main structural unit, which determines the course of operations, while the operations are carried out by various
sub-blocks in the form of procedures and functions.
Data can be simple variable types, but most are grouped to form a complex type. Characteristic is passing data
grouped into a single variable of a complex type rather than as multiple parameters of simple types.
Structured programming is often used in the implementation of application programming interfaces for
operating systems (e.g. Linux, Windows), which are then used in object-oriented programming. This is the basic
paradigm of Pascal, C, and many others.
Structured programming belongs to the imperative type of methodology.
Advantages:
• The ability to easily decompose the code
• Similarity to machine code
• Small number of variables
• Flexible grouping of related data
Disadvantages:
• Separation of data from the operations on them
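The characteristic grouping of data into one complex-typed variable can be sketched as follows (an illustrative record type, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Point:
    """Related data grouped into a single complex type."""
    x: float
    y: float

def shift(p, dx, dy):
    """One complex parameter instead of many simple ones."""
    return Point(p.x + dx, p.y + dy)

p = shift(Point(1.0, 2.0), dx=0.5, dy=-1.0)
assert (p.x, p.y) == (1.5, 1.0)
```
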
OBJECT-ORIENTED PROGRAMMING
This methodology builds on object-oriented analysis. The basic concepts of the object-oriented paradigm
include class, object, method, field, encapsulation, inheritance and polymorphism.
A class is an object definition that combines the state of the object with the behavior available through its
methods. An object is an instance of a class that has its own unique set of states and behaviors.
A field is a variable associated with a class as a whole or with each of its objects; it defines the current state of
an object of the class.
Encapsulation is a mechanism for limiting access to fields or methods when they are referred to from outside
the object, class, library or program.
Inheritance is a mechanism for defining new classes based on existing ones. Polymorphism is a mechanism for
calling object methods.
An object-oriented program consists of a main block, within which objects of classes are created. The objectoriented approach allows us to combine data and the functions that operate on them into one unit. It also
enables easy separation of pieces of code corresponding to descriptions of individual elements or fragments of
reality. Object-oriented programming is often found in connection with structured programming or separately.
This is the basic paradigm of C++, C#, Java and many others.
Advantages:
• The ability to easily decompose the code
• The link between data and operations
• Ability to restrict access to data or operations in certain areas of code
Disadvantages:
• Frequent need for a large amount of boilerplate code when defining classes
• The need for detailed analysis and class design
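The concepts above (class, field, encapsulation, inheritance, polymorphism) in one short sketch (the class names are illustrative):

```python
class Shape:
    def __init__(self, name):
        self._name = name         # leading underscore: encapsulation by convention

    def area(self):               # method defining the behavior of the object
        raise NotImplementedError

class Square(Shape):              # inheritance: a new class based on an existing one
    def __init__(self, side):
        super().__init__("square")
        self._side = side         # field: part of the object's state

    def area(self):
        return self._side ** 2

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):
        return 3.14159 * self._radius ** 2

# Polymorphism: the same call resolves to each object's own method.
shapes = [Square(3), Circle(1)]
assert [round(s.area(), 2) for s in shapes] == [9, 3.14]
```
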
GENERIC PROGRAMMING
This methodology, also known as general programming, is based on the observation that many parts of code are
repeated unnecessarily just because the same processing is applied to different types of data. The generic
approach allows the creation of universal templates.
Generic programming is usually part of an imperative methodology, but it also often occurs in languages that
use other types of methodology.
Advantages:
• Independence from the types of variables
• Reusability of the code
Disadvantages:
• Difficulties in finding and removing errors that occur with certain types of data
• Misuse by developers of generic structures, which are often treated as substitutes for macros
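A generic "universal template" can be sketched with Python's type variables (illustrative):

```python
from typing import Sequence, Tuple, TypeVar

T = TypeVar("T")  # placeholder for any element type

def first_and_last(items: Sequence[T]) -> Tuple[T, T]:
    """One definition works for any element type, avoiding repeated code."""
    return items[0], items[-1]

# The same generic routine processes different data types unchanged.
assert first_and_last([3, 1, 4]) == (3, 4)
assert first_and_last("abc") == ("a", "c")
```
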
EVENT-DRIVEN PROGRAMMING
This methodology is based on the assumption that the system defines a set of possible events (e.g. pressing a
"Show" button in a window, or a mouse click).
This methodology is one of the most used methods for creating applications with a graphical user interface
(e.g. Borland Developer Studio, Microsoft Visual Studio). In recent years it has been gaining popularity in the
programming of web applications.
It is assumed that all events are delivered to the system through an interrupt-vector handling mechanism
(hardware) or an event loop (software). These mechanisms are considered part of the base system, independent
of the programmer.
Event-driven programming is usually included in the imperative type of methodology.
Advantages:
• Easy description of interaction with the environment
Disadvantages:
• The complexity of building such systems
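A minimal software event loop with registered handlers (the event names are illustrative):

```python
handlers = {}

def on(event_name, handler):
    """Register a handler for an event name."""
    handlers.setdefault(event_name, []).append(handler)

log = []
on("click", lambda: log.append("button clicked"))
on("close", lambda: log.append("window closed"))

# The (software) event loop: dispatch incoming events to their handlers.
for event in ["click", "click", "close"]:
    for handler in handlers.get(event, []):
        handler()

assert log == ["button clicked", "button clicked", "window closed"]
```
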
CONCLUSION
From the methods of programming described above and their current use, it is clear that programmers have
lately given priority to object-oriented programming. Given the issues involved, and especially the quantity of
programming code, this is clearly a much more efficient way of programming large, demanding applications.
ADDRESS
Ing. Filip Janovič
European Polytechnical Institute, Ltd.
Osvobození 699
686 04 Kunovice
Czech Republic
[email protected]
Mgr. Dan Slováček
European Polytechnical Institute, Ltd.
Osvobození 699
686 04 Kunovice
Czech Republic
[email protected]
COMPARISON OF AN ARMA MODEL VS. SVM AND CAUSAL MODELS APPLIED TO WAGES
TIME SERIES MODELING AND FORECASTING
Milan Marček¹, Marek Horvath³, Dušan Marček¹,²,³
¹ Institute of Computer Science, Faculty of Philosophy and Science, The Silesian University, Opava
² The Faculty of Management Science and Informatics, University of Žilina
³ European Polytechnic Institute Ltd., Kunovice
Abstract: We investigate the quantification of statistical and econometric structural model parameters
of average nominal wages in the Slovak economy. SVM modelling approaches are used for automated
specification of a functional form of the model. We fit models of average nominal wages based on the
Box-Jenkins methodology and a causal model to quarterly data over the period 1991-2006 in the
Slovak Republic. We use this model as a tool to compare its approximation and forecasting abilities
with those obtained using the SV regression method.
Key words: ARMA processes, mean square error, Box-Jenkins methodology, SV regression, causal models
INTRODUCTION
Most models for the time series of wages have centered on autoregressive (AR) processes. In the SVM approach, a non-linear model is estimated by solving a Quadratic Programming (QP) problem: SVM learning minimizes both the confidence interval, i.e. the estimation error (related to the Vapnik-Chervonenkis dimension), and the empirical risk (approximation error), whereas in the Box-Jenkins method the orders of the ARMA model to be fitted are determined using Akaike's (AIC) or the Bayesian (BIC) information criterion. Section 2 discusses building an ARMA model and provides a fit of the ARMA model using a Matlab program. The major goal of Sections 3 and 4 is to develop a model based on the SVM approach and predict the wages. Section 3 briefly describes the framework of SVM methods and support vector (SV) regressions within which our empirical investigation is conducted. In Section 4 the approximation and prediction abilities of the SVM method are compared with the statistical approach. A section of conclusions closes the paper.
APPLICATION OF ARMA MODELING IN WAGES PREDICTION PROBLEM
To illustrate the Box-Jenkins methodology [1], consider the wages time series readings {x_t} of the Slovak economy. We would like to develop a time series model for this process so that a predictor for the process output can be developed. The quarterly data were collected for the period January 1, 1991 to December 31, 2006, which provides a total of 64 observations (displayed in Fig. 1). To build a forecast model we define the sample period for analysis, x_1, ..., x_64, i.e. the period over which we estimate the forecasting model, and the ex post forecast period (validation data set), x_65, ..., x_68, as the time period from the first observation after the end of the sample period to the most recent observation. By using the actual and forecast values within the ex post forecasting period only, the accuracy of the model can be calculated.
To determine an appropriate Box-Jenkins model, a tentative ARMA model is identified in the identification step. In order to fit a time series to the data, the data first have to be transformed to a stationary ARMA-type process, i.e. modeled as a zero-mean process with constant variance. After eliminating the trend and the seasonal component, the natural logarithms of the once-differenced data, y_t = x_t − x_{t−4}, are shown in Fig. 2.
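The transformation described above can be sketched in a few lines. One common reading of the text is a seasonal (lag-4) difference of the logarithms; the wage values below are hypothetical, since the paper's actual 64-quarter series is not reproduced here:

```python
import math

def seasonal_log_difference(x, lag=4):
    """Return y_t = log(x_t) - log(x_{t-lag}), removing the trend and
    the quarterly seasonal component as described in the text."""
    logs = [math.log(v) for v in x]
    return [logs[t] - logs[t - lag] for t in range(lag, len(logs))]

# Hypothetical quarterly wage levels (illustration only).
wages = [100, 102, 105, 110, 108, 111, 115, 121]
y = seasonal_log_difference(wages)
print(y)  # 4 differenced values for the 8 observations above
```

The resulting series y_t is the one a stationary ARMA model is then fitted to.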
There are various methods and criteria for selecting an ARMA model. In this section we concentrate on model identification by the Hannan-Rissanen procedure [4]. The Matlab program developed in [7] both selects and estimates the model. Using this program, the model for the wages time series was tentatively identified as an ARMA(1,3) process with preliminary estimates of the model parameters as follows:

    y_t = 0.0017 + 0.46 y_{t−1} + 0.905 ε_{t−1} + 0.588 ε_{t−2} + 0.365 ε_{t−3}        (1)
Figure 1. Nominal wages (Jan 1991 - Dec 2006)
Figure 2. The wages data after transformation to a stationary ARMA-type process
Instead of looking at the correlation function, we used the portmanteau test based on the Ljung-Box statistic. The test statistic is [8]

    Q_K = N(N + 2) · Σ_{k=1}^{K} r_e²(k) / (N − k)

which has an asymptotic chi-square (χ²) distribution with K − p − q degrees of freedom if the model is appropriate. If Q_K > χ²_{1−α}(K − p − q), the adequacy of the model is rejected at the level α. The chi-square statistic applied to these autocorrelations is 13.79, and so we have no evidence to reject the model.
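The Ljung-Box statistic can be computed directly from the residual autocorrelations; a small self-contained sketch (with made-up residuals, not the paper's):

```python
def autocorr(e, k):
    """Lag-k sample autocorrelation of the residual series e."""
    n = len(e)
    mean = sum(e) / n
    var = sum((v - mean) ** 2 for v in e)
    cov = sum((e[t] - mean) * (e[t - k] - mean) for t in range(k, n))
    return cov / var

def ljung_box_q(e, K):
    """Q = N(N+2) * sum_{k=1..K} r_e^2(k) / (N-k); compare the result
    with the chi-square quantile with K-p-q degrees of freedom."""
    n = len(e)
    return n * (n + 2) * sum(autocorr(e, k) ** 2 / (n - k) for k in range(1, K + 1))

residuals = [0.3, -0.1, 0.2, -0.4, 0.1, 0.0, -0.2, 0.3, -0.1, 0.2]
print(ljung_box_q(residuals, K=3))
```

In practice a statistics package would supply both the statistic and the p-value; the sketch only illustrates the formula in the text.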
Because the model is written in terms of a stationary time series, to obtain a point forecast the final model must be rewritten in terms of the original data and then solved algebraically for the wages w_t. The forecasts obtained from this model for times t = 65, 66, 67, 68 are shown in Fig. 3.
SUPPORT VECTOR MACHINE FOR FUNCTIONAL APPROXIMATION
This section briefly presents a relatively new type of learning machine – the SVM applied to regression (functional approximation) problems. For details we refer to [6], [10]. The general regression learning task is set as follows. The learning machine is given n training data, from which it attempts to learn the input-output relationship y = f(x), where {(x_i, y_i) ∈ Rⁿ × R, i = 1, 2, ..., n} consists of n pairs {x_i, y_i}; x_i denotes the ith input and y_i is the ith output. The SVM considers regression functions of two forms [6]. The first one is

    f(x) = Σ_{i=1}^{n} (α_i − α_i*) ψ(x_i, x) + b        (2)
where  i ,  i are positive real constants (Lagrange multipliers), b is a real constant,  (. / .) is the kernel
function. Admissible kernels have the following forms:  (x i , x j )  x Ti x j (linear SVM´s)  ( x i , x j )  (x Ti x j  1) d
*

(polynomial SVM´s of degree d),  (x i , x j )  exp   x i  x j
2
2
 (radial basis SVM´s), where  is a positive real
constant and other (spline, b-spline, exponential RBF, etc.).
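The admissible kernels listed above are straightforward to implement; a sketch of the linear, polynomial and RBF forms (the 2σ² denominator in the RBF kernel follows the common convention and is an assumption here, since the printed formula is partly garbled):

```python
import math

def linear_kernel(xi, xj):
    # psi(x_i, x_j) = x_i^T x_j
    return sum(a * b for a, b in zip(xi, xj))

def polynomial_kernel(xi, xj, d=2):
    # psi(x_i, x_j) = (x_i^T x_j + 1)^d
    return (linear_kernel(xi, xj) + 1) ** d

def rbf_kernel(xi, xj, sigma=1.0):
    # psi(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-sq / (2 * sigma ** 2))

print(linear_kernel([1, 2], [3, 4]))      # 11
print(polynomial_kernel([1, 2], [3, 4]))  # (11 + 1)^2 = 144
print(rbf_kernel([1, 2], [1, 2]))         # 1.0 at zero distance
```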
The second approximation function is of the form [6]

    f(x, w) = Σ_{i=1}^{n} w_i φ_i(x) + b        (3)

where φ(·) is a non-linear function (kernel) which maps the input space into a high-dimensional feature space. In contrast to Eq. (2), the regression function f(x, w) is explicitly written as a function of the weights w that are the subject of learning.
Figure 3. Forecasts of wages data (ARMA(1,3) model)
The SV regression approach is based on defining a loss function that ignores errors within a certain distance of the true value. This type of function is referred to as an ε-insensitive loss function (see Fig. 4).
Figure 4. The insensitive band for a one-dimensional linear (left) and non-linear (right) function
Fig. 4 shows an example of a one-dimensional function with an ε-insensitive band. The variables ξ, ξ* measure the cost of the errors on the training points. These are zero for all points inside the ε-insensitive band, and only the points outside the ε-tube are penalized by the so-called Vapnik ε-insensitive loss function.
In regression there are different error (loss) functions in use, and each one results in a different final model. Fig. 5 shows the typical shapes of three loss functions [2]. Left: quadratic 2-norm. Middle: absolute error 1-norm. Right: Vapnik's ε-insensitive loss function.
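The three loss functions in Fig. 5 can be written compactly as functions of the residual r = y − f(x); a minimal sketch:

```python
def quadratic_loss(r):
    # 2-norm: L(r) = r^2
    return r * r

def absolute_loss(r):
    # 1-norm: L(r) = |r|
    return abs(r)

def eps_insensitive_loss(r, eps=0.5):
    # Vapnik: zero inside the eps-tube, linear outside it
    return max(0.0, abs(r) - eps)

for r in (-1.0, -0.3, 0.0, 0.4, 2.0):
    print(r, quadratic_loss(r), absolute_loss(r), eps_insensitive_loss(r))
```

Note how residuals smaller than ε in magnitude contribute nothing to the ε-insensitive loss, which is what makes points inside the tube free of cost.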
Formally this results from solving the following Quadratic Programming problem

    min_{w, b, ξ, ξ*}  ½ wᵀw + C Σ_{i=1}^{n} (ξ_i + ξ_i*)        (4)

subject to

    y_i − wᵀφ(x_i) − b ≤ ε + ξ_i,        i = 1, 2, ..., n,
    wᵀφ(x_i) + b − y_i ≤ ε + ξ_i*,       i = 1, 2, ..., n,        (5)
    ξ_i, ξ_i* ≥ 0,                       i = 1, 2, ..., n,
where C is the capacity constant. To solve (4), (5) one constructs the Lagrangian

    L_p = ½ wᵀw + C Σ_{i=1}^{n} (ξ_i + ξ_i*) − Σ_{i=1}^{n} α_i (ε + ξ_i − y_i + wᵀφ(x_i) + b)
          − Σ_{i=1}^{n} α_i* (ε + ξ_i* + y_i − wᵀφ(x_i) − b) − Σ_{i=1}^{n} (η_i ξ_i + η_i* ξ_i*)        (6)

Figure 5. Error (loss) functions
by introducing Lagrange multipliers α_i, α_i* ≥ 0, η_i, η_i* ≥ 0, i = 1, 2, ..., n. The solution is given by the saddle point of the Lagrangian [3]

    max_{α_i, α_i*, η_i, η_i*}  min_{w, b, ξ_i, ξ_i*}  L_p(w, b, ξ_i, ξ_i*, α_i, α_i*, η_i, η_i*)        (7)

subject to the constraints
    ∂L_p/∂w = 0   →   w = Σ_{i=1}^{n} (α_i* − α_i) φ(x_i),
    ∂L_p/∂b = 0   →   Σ_{i=1}^{n} (α_i* − α_i) = 0,
    ∂L_p/∂ξ_i = 0   →   0 ≤ α_i ≤ C,    i = 1, ..., n,        (8)
    ∂L_p/∂ξ_i* = 0  →   0 ≤ α_i* ≤ C,   i = 1, ..., n,
which leads to the solution of the QP problem:

    max_{α_i, α_i*}  −½ Σ_{i,j=1}^{n} (α_i − α_i*)(α_j − α_j*) ψ(x_i, x_j) − ε Σ_{i=1}^{n} (α_i + α_i*) + Σ_{i=1}^{n} y_i (α_i − α_i*)        (9)

subject to (8).
After computing the Lagrange multipliers α_i and α_i*, one obtains the solution in the form of (2), i.e.

    f(x) = Σ_{i=1}^{n} (α_i − α_i*) ψ(x_i, x) + b        (10)
By substituting the first equality constraint of (8) in (3), one obtains the regression hyperplane as

    f(x, w) = wᵀφ(x) + b        (11)
Finally, b is computed by exploiting the Karush-Kuhn-Tucker (KKT) conditions [2]. From the KKT conditions we obtain

    b = y_k − Σ_{i=1}^{n} (α_i − α_i*) ψ(x_i, x_k) − ε    for α_k ∈ (0, C),
    b = y_k − Σ_{i=1}^{n} (α_i − α_i*) ψ(x_i, x_k) + ε    for α_k* ∈ (0, C).        (12)
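Once the multipliers and the bias b are known, evaluating the regression function of Eq. (10) at a new point is just a weighted sum of kernel values; an illustrative sketch with made-up multipliers (a real solver would obtain them from the QP problem (9)):

```python
import math

def rbf(xi, xj, sigma=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-sq / (2 * sigma ** 2))

def predict(x, support, alphas, alphas_star, b, kernel=rbf):
    """f(x) = sum_i (alpha_i - alpha_i*) psi(x_i, x) + b, Eq. (10)."""
    return sum((a - a_s) * kernel(xi, x)
               for xi, a, a_s in zip(support, alphas, alphas_star)) + b

support = [[0.0], [1.0], [2.0]]                          # training inputs x_i
alphas, alphas_star = [0.8, 0.0, 0.5], [0.0, 0.3, 0.0]   # hypothetical multipliers
print(predict([1.0], support, alphas, alphas_star, b=0.1))
```

Training points with α_i = α_i* = 0 contribute nothing; only the support vectors enter the sum.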
EXPERIMENTING WITH NON-LINEAR SV REGRESSION
The average nominal wages W_t can be described by the following regression equation [9]

    W_t = b + a·W_{t−4} + ε_t        (13)

where a, b are the parameters and ε_t is the disturbance term. We demonstrate here the use of the SV regression framework for estimating the model given by Eq. (13). If W_t exhibits a curvilinear trend, one important approach for generating an appropriate non-linear functional form of the model is to use the SV regression in which W_t is regressed either against lagged wages, in the form

    Ŵ_t = Σ_{i=1}^{n} w_i φ_i(x_t) + b        (14)

where x_t = (W_{t−1}, W_{t−4}, ...) is the vector of lagged values (regressor variable), or against time, in the form

    Ŵ_t = Σ_{i=1}^{n} w_i φ_i(x_t) + b        (15)

where x_t = (1, 2, ..., 64) is the vector of the time sequence (regressor variable). Our next step is the evaluation of the goodness of fit of the last three regression equations to the data inside the estimation period, expressed by the Mean Square Error MSE_A, and of the forecast summary statistic MSE_E for each of the models outside the estimation period. These measures are given in Tab. 1.
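The two summary measures are plain MSEs computed over the estimation period (MSE_A) and the ex post forecast period (MSE_E); a sketch with illustrative numbers only (the paper's data are not reproduced):

```python
def mse(actual, predicted):
    """Mean Square Error over a list of paired observations."""
    assert len(actual) == len(predicted)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical values: the 64 in-sample points would give MSE_A,
# the 4 ex-post points x_65..x_68 would give MSE_E.
fit_actual, fit_pred = [10.0, 12.0, 11.0], [9.5, 12.5, 11.2]
out_actual, out_pred = [13.0, 14.0], [12.0, 15.5]
print("MSE_A:", mse(fit_actual, fit_pred))
print("MSE_E:", mse(out_actual, out_pred))
```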
One crucial design choice in constructing an SV machine is the choice of kernel. Tab. 1 and the corresponding Fig. 6 show the SVM's learning of the historical period, illustrating the actual and the fitted values obtained using various kernels.
Tab. 1 presents the results of finding the proper model by using the MSE quantities. As shown in Tab. 1, the structural model that generates the "best" forecasts is the model with MSE_E = 58900 (Fig. 6b).
Table 1. SV regression results for three different choices of the kernel and the results of the dynamic model on the training set (1991Q1-2006Q4). The last two columns analyze the fit to the data and the forecasting performance, respectively. See text for details.

Fig. | MODEL           | KERNEL   | σ    | LOSS FUNCT. | MSE_A | MSE_E
6a   | causal (14)     | RBF      | 1150 | Quadratic   | 15590 | 62894
6b   | causal (14)     | RBF      | 600  | Quadratic   | 10251 | 58900
6c   | causal (14)     | Exp. RBF | 600  | Quadratic   | 3315  | 70331
6d   | time s. (15)    | RBF      | 1.0  | Quadratic   | 0.421 | none
3    | statistical (1) | -        | -    | -           | 25289 | 104830
-    | causal (13)     | -        | -    | -           | 44181 | 44549
The results shown in Tab. 1 were obtained using the capacity C = 10^4. The insensitivity zone ε and the capacity C are the most relevant coefficients. To train the SV regression machine we used partly modified software [5].
CONCLUSION
In this paper, we have examined the SVM approach to study linear and non-linear models of the time series of average nominal wages in the Slovak Republic. To calculate the measure of the goodness of fit of the regression model to the data, we evaluated five models: one model based on causal regression and four models on the Support Vector Machines methodology. Benchmarking was performed against an ARMA model. As is visually clear from Table 1, too many model parameters result in overfitting, i.e. a curve fitted with too many parameters follows all the small fluctuations but is poor for generalization.
Acknowledgement. This work was supported by the grant GAČR 402/08/0022 and VEGA 1/0667/10.
Figure 6. Training results (panels a-d) for different kernels, loss functions and σ of the SV regression (see Tab. 1)
REFERENCES
[1] BOX, G. E.; JENKINS, G. M. Time Series Analysis: Forecasting and Control. Holden-Day : San Francisco, CA, 1976.
[2] CRISTIANINI, N.; SHAWE-TAYLOR, J. An introduction to SVM. Cambridge University Press : Cambridge, 2000.
[3] FLETCHER, R. Practical methods of optimization. J-W and Sons : New York, 1987.
[4] GRANGER, C. W. J.; NEWBOLD, P. Forecasting Economic Time Series. Academic Press : New York, 1986.
[5] GUNN, S. R. Support Vector Machines for Classification and Regression. Technical Report, Image Speech and Intelligent Systems Research Group, University of Southampton, 1997.
[6] KECMAN, V. Learning and Soft Computing – Support Vector Machines, Neural Networks and Fuzzy Logic Models. Massachusetts Institute of Technology, 2001.
[7] KOCVARA, B. Time Series Modeling Using Statistical (Econometric) Methods and Machine Learning. Diploma work. University of Žilina : Žilina, 2007.
[8] MONTGOMERY, D. C.; JOHNSTON, L. A.; GARDINER, J. S. Forecasting and Time Series Analysis. McGraw-Hill, Inc., 1990.
[9] PÁLENÍK, V.; BORS, L.; KVETAN, V.; VOKOUN, J. Construction and Verification of Macroeconomic Model ISWE97q3. Ekonomický časopis, 46, 1998, No. 3, p. 428-466.
[10] VAPNIK, V. The nature of statistical learning theory. Springer Verlag : New York, 1995.
CONTACTS:
Prof. Ing. Dušan Marček, CSc.
The Faculty of Management Science and Informatics
University of Žilina
Univerzitná 8215/1
010 26 Žilina
Slovak Republic
Ing. Milan Marček
Institute of Computer Science
Faculty of Philosophy and Science
The Silesian University Opava
Bezručovo náměstí 1150/13
746 01 Opava
Czech Republic
Ing. Marek Horvath
European Polytechnic Institute Ltd. Kunovice
Osvobození 699
686 04 Kunovice
Czech Republic
PROCESSING OF UNCERTAIN INFORMATION IN DATABASES
Petr Morávek 1, Miloš Šeda 2
1 Brno University of Technology
2 European Polytechnical Institute
Abstract: The present paper deals with the processing of uncertain data in databases. These data
are selected out of vague requirements of customers purchasing laptops in traditional shops. The
objective of this work is to create an up-to-date application for laptop e-shops, supplemented with
a fuzzy expert system assisting in the selection of laptops in the particular online store to the
customers who do not have sufficient information on the technical parameters and the current
trends in laptops.
Keywords: fuzzy logic, linguistic variable, laptop purchase
INTRODUCTION
The primary task was to implement the system that will handle uncertain information in databases. To simplify
purchases on the Internet, a modern shop with expert knowledge fuzzy system was designed. This system is able
to advise customers entering electronic shops how to purchase the goods as well as salesmen to sell the goods.
It should intuitively evaluate vague customer requirements. As the development environment for this task, the
most common combination of PHP programming language and MySQL database was chosen because it is
licence free. The goods sold in the shop are laptops, because they are nowadays very popular, especially due to
their mobility.
UNCERTAINTY IN DATABASES
Uncertainty in the database was implemented using fuzzy logic, which provides possibilities of robust work with
uncertain data. The basic strength of fuzzy logic is easy work with natural language expressions and their
subsequent processing.
 Fuzzy logic - in usual practice the greatest possible accuracy is required, yet absolute precision is essentially unattainable. The measured size of a table is, e.g., 3 m; accurate measuring devices will automatically determine the inaccuracy in dm, cm, mm, etc. Fuzzy logic applied in the described expert system deals with such uncertain data and is able to make a final decision [3].
 Natural language (linguistic variables) - we use the terms of ordinary human speech. The advantage of these expressions is their intuitive understanding. E.g., when we want to learn something new, we do not know the exact details of the data; a few vague words are sufficient for understanding. For more details, see the literature [2].
 Application functionality - data processing that uses fuzzy logic is based on determining the interval to which the linguistic expression belongs; the application then finds all laptops in it. These laptops are automatically assessed by evaluation functions implemented in the application. The laptops with the highest value of the evaluation functions are displayed to users.
A table of goods was proposed to implement the requirements mentioned above. It was necessary to focus on the important parameters of laptops, which will be evaluated by the given fuzzy system. Further, we will investigate the design of linguistic variables and the subsequent storage of fuzzy numbers for further processing.
LAPTOPS' FUZZY EXPERT SYSTEM
An expert system is a computer program capable of deciding about a given problem on the basis of knowledge obtained from experts.
TABLES OF EXPERT SYSTEM
For modelling of linguistic variables in natural language, Table 1 was proposed. For storage of the fuzzy ranking
laptops, Table 2 was designed [4].
Column Name | Column Type       | Description
ID_LIN      | int(10)           | Identification of linguistic variables
HARDWARE    | varchar(25)       | Identification of the type of hardware that is defined by the linguistic variable
NAME        | varchar(30)       | Name of the linguistic variable
FROM        | varchar(30)       | Beginning of the linguistic variable function
MIDDLE      | varchar(30)       | Middle of the linguistic variable function
IN          | varchar(30)       | End of the linguistic variable function
FUNCTION    | enum('1','2','3') | Function definition of linguistic variables: 1 = low, 2 = middle, 3 = maximum

Tab. 1 Linguistic variables

Column Name | Column Type | Description
ID_F        | int(10)     | Identification of fuzzy values
ID_NOT      | int(10)     | Identification of the laptop
ID_L        | int(10)     | Identification of the linguistic variable
VALUE       | varchar(35) | Laptop variable which is assessed
FUZZY       | varchar(35) | Fuzzy ranking based on linguistic variables

Tab. 2 Fuzzy ranking
The above data tables are the most important. However, the application offers much more data tables, e.g., table
of laptops catalogue, users, orders, etc. More details, including their relation can be found in Fig. 1.
Fig. 1 Relationships of all tables in our database application
SELECTION OF LAPTOP PARAMETERS
When buying a laptop we consider several parameters which are the most important for the user. Of course these requirements can differ; it is therefore necessary to design elements that will best suit most users. In the described application the following parameters of laptops were chosen:
 Brand - Achieving the lowest price is reflected in the quality of the laptops. However, there are companies that take pride in quality. Therefore the brand was included in the decisions; it enables the user to express a preference for guaranteed laptop quality.
 Screen size - The development of laptops brought about a progressive miniaturization of components and thereby a reduction in laptop size, enabling greater mobility, which is their main advantage.
 Processor - forms the brain of the laptop; the better and faster the processor, the faster the laptop.
 Hard disc - is the basic drive for the operating system, user data and programs. Nowadays the trend is to have a hard disc with an extremely high capacity.
 Operational memory - The speed is closely connected with the operational memory: the greater the amount of memory, the larger the space for running applications.
 Graphics card - It is closely connected with applications that are very graphics-intensive, i.e. mainly videos and computer games.
 Batteries - are connected with the mobility of a laptop, because we do not always have the possibility to plug into electric power, and therefore the ability to operate as long as necessary is required.
 Weight - closely relates to the size of the laptop and its mobility; the smaller the laptop, the lower its weight.
 Price - closely relates to all of the above parameters and mainly depends on the laptop quality and size.
MODELLING OF FUZZY LINGUISTIC VARIABLES
A modelling system to define the linguistic variables of all laptops was proposed. The model contains the
following linguistic variables:
 Brand – best quality, quality, reasonable, neutral, cheaper,
 Screen size - the largest, larger, normal, smaller, the smallest
 Processor - the fastest, fast, average, slow, the slowest,
 Hard disc – the largest space, enough space, average space, small space, the smallest space,
 Memory - the highest, middle-sized, the lowest,
 Graphics card – for work, for computer games,
 Batteries - large, average, small,
 Weight – the lightest, lighter weight, average, heavier, the heaviest,
 Price – the most expensive, more expensive, reasonable, cheaper, the cheapest.
PROPOSAL OF LINGUISTIC VARIABLES
Having designed all of the above linguistic variables, we can start introducing these variables into our expert system. Each page shows the interval of the parameter, which helps with the modelling. Fig. 2 shows the form for the insertion (editing) of new linguistic variables; it contains the following boxes:
 Hardware - this box is pre-filled and cannot be changed. It identifies which laptop parameter is modelled,
 Name of linguistic variable - here we write the name of our linguistic variable, for example "the smallest",
 The starting interval ("from" value) - here we write the value of the interval margin where the degree of membership is "1" ("0") and for higher values is less (greater) than "1" ("0"); the default option for this parameter is 'n',
 The interval middle - here we write the value where the evaluation of the selected feature begins to decrease (or increase),
 The ending interval ("to" value) - here we write the value of the interval margin where the degree of membership is "1" ("0") and for higher values is less (greater) than "1" ("0"); the default option for this parameter is 'n'.
For the correct calculation of uncertainty, it is necessary to assign only numeric values to the interval parameters "from", "middle" and "to".
Fig. 2 Insertion of a new linguistic variable
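The "from", "middle" and "to" values described above lend themselves to a simple membership computation. The sketch below assumes a triangular shape peaking at "middle"; the shape is an assumption, since the paper only names the three boundary values:

```python
def membership(value, frm, middle, to):
    """Triangular membership sketch built from the 'from', 'middle'
    and 'to' interval points. Returns a degree in [0, 1]."""
    if value <= frm or value >= to:
        return 0.0
    if value == middle:
        return 1.0
    if value < middle:
        return (value - frm) / (middle - frm)  # rising edge
    return (to - value) / (to - middle)        # falling edge

# E.g. a hypothetical "smaller" screen-size variable on 10..13 inches,
# peaking at 11.5 inches.
print(membership(11.5, 10, 11.5, 13))   # 1.0
print(membership(10.75, 10, 11.5, 13))  # 0.5
```

The 'n' defaults mentioned in the text would correspond to an open-ended (shoulder) function on that side; the sketch omits that case for brevity.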
GENERATING FUZZY EVALUATION BASED ON LINGUISTIC VARIABLES
Generating fuzzy ranking on the base of linguistic variables was implemented by means of computational
functions. A part of this web page is an automatic control assessment of all laptops for given linguistic variables.
If a laptop parameter has no ranking assigned for a given linguistic variable, the application indicates it.
Since assigning a fuzzy ranking to each new laptop manually would be very laborious, it was necessary to implement an automated system for its evaluation. Similarly, after deleting a laptop it is necessary to remove the laptop and its ranking from the corresponding tables. The automated evaluation is processed in the following locations:
 When a new laptop is inserted, it is automatically evaluated and included in the expert system,
 When a laptop is edited, this operation also includes an update of its fuzzy ranking.
The automated fuzzy calculation was designed to facilitate work with the expert system, because new laptops may be uploaded or edited by, e.g., a salesman, and we have to ensure that their ranking will be determined. Finally, we have to remind the users that if a new linguistic variable is created (or updated), its ranking is not automatically changed in the table "fuzzy". It is assumed that the decision core of the expert system will be looked after by an expert, who will monitor whether the fuzzy evaluation corresponds to reality; when he is satisfied with his settings, he presses the Save button, which confirms the content change. This approach was preferred in order to make the modelling independent of the knowledge expert system, i.e. the database is updated at the moment when the expert finishes his job.
Fig. 4 Calculation of fuzzy values
Fig. 3 Calculation of fuzzy values
SEARCH FOR LAPTOPS BASED ON FUZZY EXPERT SYSTEM
A knowledge-based expert system in the e-shop gives us great possibilities. Besides the catalogue described in the natural language of users, it offers information retrieval using natural language expressions also to those users who do not have sufficient technical knowledge about the parameters of laptops. All of the above defined laptop parameters are included in a simple explorer, where we can select all the linguistic variables defined in the previous part of this paper. Using them we can specify, in natural language, the requirements the laptop should meet.
The search for vague information is handled as follows: we enter, e.g., a "moderate" price, and the script first finds the laptops belonging to that linguistic variable. This mechanism would work perfectly for a single-parameter search. Because our search engine has nine options from which users can select and specify their requirements, a compromise in the search was proposed: multi-criteria decision making. Each line of the search engine must determine what the given vague information actually means, and the matching laptops are saved in an auxiliary table. We continue with the following line, and the matching laptops are again added to the auxiliary table. Once all the parameters have been passed through, a laptop is selected from the auxiliary table using multi-criteria decision making and listed to the user. We also solved what happens when the user does not select a parameter. The solution is based on the idea that when a user does not select a parameter, this parameter is not important for him/her, i.e. the search algorithm selects the laptops which, in this parameter, have a degree of membership greater than 0.5.
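The auxiliary-table mechanism described above can be sketched as follows. The data and scoring rule are hypothetical (the real system runs over MySQL tables in PHP); the sketch only illustrates the filter for unselected parameters and the multi-criteria compromise for selected ones:

```python
# Each laptop maps parameter name -> degree of membership in the
# linguistic variable the user chose for that parameter.
laptops = {
    "A": {"price": 0.9, "screen": 0.2, "weight": 0.7},
    "B": {"price": 0.6, "screen": 0.8, "weight": 0.4},
    "C": {"price": 0.3, "screen": 0.9, "weight": 0.9},
}

def search(selected):
    """Unselected parameters: keep only laptops with membership > 0.5.
    Selected parameters: accumulate memberships in an auxiliary score
    table and rank by the total (a simple multi-criteria compromise)."""
    scores = {}
    for name, degrees in laptops.items():
        if all(degrees[p] > 0.5 for p in degrees if p not in selected):
            scores[name] = sum(degrees[p] for p in selected)
    return sorted(scores, key=scores.get, reverse=True)

print(search({"price", "screen"}))  # B drops out on weight; C ranks above A
```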
E-SHOP WITH FUZZY EXPERT SYSTEM
E-commerce allows customers to select goods from a catalogue, in most cases organized into categories with detailed descriptions of the goods. Most e-shops offer a product search based on user requirements. In our case, an expert knowledge system was used to search our uncertain data. The system also offers the possibility of ordering, delivery and payment for goods. More sophisticated applications provide links between the e-shop and the bookkeeping used in the shop, show the number of pieces in stock, and support discussions about the goods and payment by credit card over the Internet. The e-shop is designed so that the administration of the application minimizes the labour of handling orders and the replenishment work required of the store personnel. The following modules have been implemented in the e-shop:
 administration system,
 catalogue of goods,
 cart,
 system for ordering goods,
 system for dispatching goods,
 storage system,
 discussion.
In the e-shop, the administration system was implemented with five levels of rights: Anonymous; User, the right of ordinary customers; Editor, the right for vendors; Admin, the right for knowledge engineers (experts); and SuperAdmin, the only account owned by the administrator or the owner of the application.
The e-shop offers the opportunity to purchase at the store or to a shipping address. We implemented the most modern methods of paying for goods (cash, bank transfer, online payments and credit card). The ordering and deals system was designed according to the latest trends.
The application includes a storage system which informs the customer about the number of goods at five different stores (off-site storage and storage in Prague, Brno, Olomouc and Ostrava), and, last but not least, there is a possibility of discussion about any laptop between customers and staff. Figure 4 is a sample catalogue of laptops. For more information, see the thesis [1].
Fig. 5 Search query results and a search
CONCLUSION
The goal of the paper was to design and implement an application for the processing of uncertain data in databases. When determining the requirements the application should satisfy, it was decided that the uncertain data would represent customer requirements concerning the purchase of goods. Laptops were chosen as the basic goods for purchase because nowadays they are very popular and people frequently prefer them for their mobility.
In our work we have created a shop with an expert knowledge system, which aims to advise clients who do not have a grasp of the parameters of laptops. For the other customers, a classical catalogue has been implemented in the form known from various internet shops selling computers.
The development was primarily focused on simplicity, clarity and intuitive handling for both the users and the experts who will work with our model of the fuzzy expert system. The design of the application was tailored to the needs of our knowledge expert fuzzy system.
Fig. 6 Sample survey catalogue laptops
Since the Internet is constantly growing and every day new users from all over the world join it, it may be regarded as the most popular tool of communication and information among people. With the increasing trend of shopping on the internet, our proposed e-shop with a fuzzy expert system could be implemented in all stores. The reason why such applications have not yet been a part of Internet marketing is, in my opinion, mainly that a programmer who gets an order to implement an e-shop does not know that it could be useful for sellers to have an expert system which advises customers and simplifies their decision making. Another reason may be that implementing an expert system in the e-shop framework increases the price that a seller has to pay for it, which is also an essential aspect of ordering Internet applications.
REFERENCES
[1] MORÁVEK, P. Processing of uncertain information in databases. Diploma project, VUT : Brno, 2009, 78 p.
[2] NOVÁK, V. Fundamentals of fuzzy modeling. Praha : BEN – technická literatura, 2003. ISBN 80-7300-009-1.
[3] GALINDO, J. Handbook of Research on Fuzzy Information Processing in Databases. Information Science Reference, 2008. ISBN 978-1599048536.
[4] GALINDO, J. Fuzzy Databases: Modeling, Design and Implementation. Idea Group Publishing, 2005. ISBN 978-1591403241.
ADDRESS
Ing. Petr Morávek
Brno University of Technology
Faculty of Mechanical Engineering,
Institute of Automation and Computer Science
Technická 2
616 69 Brno
e-mail: [email protected]
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
156
Prof. RNDr. Ing. Miloš Šeda, Ph.D.
European Polytechnical Institute, Ltd.
Osvobození 699
686 04 Kunovice
e-mail: [email protected]
FROM THE RING STRUCTURE OF HYDROGEN ATOM TO THE STRUCTURE OF
GOLD
Pavel Ošmera
European Polytechnic Institute Kunovice
Abstract: This paper is an attempt to attain a better model of nature's structure using a vortex-ring-fractal theory. The aim of this paper is the vortex-ring-fractal modeling of the hydrogen atom, which is not in contradiction to the known laws of nature. We would like to find an acceptable quantum model of the hydrogen atom as a levitating system with a ring structure of the proton and a ring structure of the electron. It is known that the planetary model of hydrogen is not correct. The quantum model is too abstract. Our conception is that the hydrogen atom is a levitating system of the ring proton and the ring electron.
Keywords: structure of hydrogen atom, quantum model of the hydrogen, vortex-ring-fractal theory
1 INTRODUCTION
Fractals seem to be very powerful in describing natural objects on all scales. Fractal dimension and fractal measure are crucial parameters for such a description. Many natural objects have self-similarity or partial self-similarity between the whole object and its parts [10].
Most of our knowledge of the electronic structure of atoms has been obtained by the study of the light given out by atoms when they are excited. The light that is emitted by atoms of a given substance can be refracted or diffracted into a distinctive pattern of lines of certain frequencies, which create the line spectrum of the atom. The careful study of line spectra began about 1880. The regularity is evident in the spectrum of the hydrogen atom. The interpretation of the spectrum of hydrogen was not achieved until 1913. In that year the Danish physicist Niels Bohr successfully applied the quantum theory to this problem and created a model of hydrogen. Bohr also discovered a method of calculating the energy of the stationary states of the hydrogen atom, with use of Planck's constant h. Later, in 1923, it was recognized that Bohr's formulation of the theory of the electronic structure of atoms needed to be improved and extended. Bohr's theory did not give correct values for the energy levels of the helium atom, of the hydrogen molecule-ion H2+, or of any other atom with more than one electron or any molecule. During the two-year period 1924 to 1926 the Bohr description of electron orbits in atoms was replaced by the greatly improved description of wave mechanics, which is still in use and seems to be satisfactory. De Broglie discovered in 1924 that an electron moving with velocity v has a wavelength λ=h/mev [4]. The electron is not to be thought of as going around the nucleus, but rather as going in and out, in varying directions, so as to make the electron distribution spherically symmetrical [4].
2 THE SPIN OF THE ELECTRON
It was discovered in 1925 that the electron has properties corresponding to its spin S. It can be described as rotating about an axis of a ring structure of the electron [25]. The spin of the electron is defined as the angular momentum [21]:

$\vec{S} = m_e (\vec{r}_e \times \vec{v}_e)$    (1)

For the spin on the axis z:

$S_z = N m_e r_e v_e = \frac{1}{2}\,\frac{h}{2\pi}$    (2)
where me is the mass of the electron, re is the radius of the electron and N is the number of substructures inside the structure of the electron. The torus structure with spin ½ can oscillate with nλ [25] (see Fig.1, Fig.2, and Fig.3):
$2 \cdot 2\pi r_e = n\lambda, \qquad 2 r_e = n\,\frac{\lambda}{2\pi}$    (3)

$r_e = n\,\frac{\lambda}{4\pi}$    (4)

$S_z = N m_e r_e v_e = N m_e\, n\,\frac{\lambda}{4\pi}\, v_e = \frac{1}{2}\,\frac{h}{2\pi}$    (5)

$\lambda = \frac{h}{m_e v_e}$    (6)
where ve is the rotation velocity of the electron [25] (see Fig.5). This is a similar result to the one de Broglie discovered.
Fig. 1 The torus structure with spin ½
Fig. 2 The vortex structure with spin ½
Fig. 3 The fractal structure with spin ½
3 THE MODEL OF HYDROGEN WITH A LEVITATING ELECTRON
Fig. 4 The levitating electron in the field of the proton (the fractal structure model of hydrogen H is simplified
[18]).
The new model of the hydrogen atom with a levitating electron was introduced in [18], [23]. There is an attractive (electric) force F+ and a (magnetic) repellent force F−:

$F = F^{+} - F^{-} = \frac{e^2}{4\pi\varepsilon_0}\left(\frac{1}{d^2} - \frac{d_o^2}{d^4}\right)$    (7)
The hydrogen atom can have the electron on the left side or on the right side [23]. The attractive force F+ is Coulomb's force. The repellent force F− is caused by the magnetic fields of the proton and the electron (see Fig.4). The distance between the electron and the proton in (7) is d. The electron moves between point d1 and point d2 (see Fig.8 and Fig.9). The energy Ei required to remove the electron from the ground state to a state of zero total energy is called the ionization energy. The energy of the hydrogen atom in the ground state is E = −13.6 eV. The negative sign indicates that the electron is bound to the nucleus, and the energy 13.6 eV must be provided from outside to remove the electron from the atom. Hence 13.6 eV is the ionization energy Ei of the hydrogen atom. The calculation of the ionization energy from (7) was introduced in [22]:
$E = -E_i = -\frac{e^2}{4\pi\varepsilon_0}\left(\frac{1}{d} - \frac{d_o^2}{3d^3}\right) = -\frac{e^2}{4\pi\varepsilon_0 d}\left(1 - \frac{d_o^2}{3d^2}\right)$    (8)
The quantum number n that labels the electron radii ren also labels the energy levels. The lowest energy level or energy state, characterized by n=1, is called the ground state. This state is described in equations (7) and (8). Higher energy levels with n>1 are called excited states. For excited states we use the postulates [22] and presuppose the following equations (9) and (10):
$F_n = F^{+} - F^{-} = \frac{e^2}{4\pi\varepsilon_0}\left(\frac{1}{d^2} - \frac{d_{on}^2}{d^4}\right) = \frac{e^2}{4\pi\varepsilon_0}\left(\frac{1}{d^2} - \frac{n^4 d_o^2}{d^4}\right) = \frac{e^2}{4\pi\varepsilon_0 d^2}\left(1 - \frac{n^4 d_o^2}{d^2}\right)$    (9)

$E_n = -E_{in} = -\frac{e^2}{4\pi\varepsilon_0}\left(\frac{1}{d} - \frac{n^4 d_o^2}{3d^3}\right) = -\frac{e^2}{4\pi\varepsilon_0 d}\left(1 - \frac{n^4 d_o^2}{3d^2}\right)$    (10)
where
$d_{on} = n^2 d_o$    (11)
To calculate the quantum model of hydrogen we use the radius re of the electron, which was derived in [14], [15], [17]:

$r_e = \frac{\mu_0 e^2 v_o^2}{4\pi^2 m_e v_e^2}$    (12)

$v_o = \frac{c}{\sqrt{2}} = \sqrt{\frac{1}{2\mu_0\varepsilon_0}}, \qquad v_o^2 = \frac{c^2}{2}$    (13)

for $v_o^2 = \frac{c^2}{2}$:

$r_e = \frac{\mu_0 e^2 v_o^2}{4\pi^2 m_e v_e^2} = \frac{\mu_0 e^2 c^2}{4\pi^2 m_e \cdot 2 v_e^2} = \frac{e^2}{8\pi^2 \varepsilon_0 m_e v_e^2}$    (14)
Fig.5 Vortex-fractal ring structure of the electron [15], [17]
The radius re of the electron in (14) was derived from the balance of Coulomb's force and the centripetal force [24]. On a circumference of a circle with radius re (see Fig.4 and Fig.5) there have to be n half-wavelengths as defined in (3) and (6): nλ/2 = nh/(2meve) (n is the quantum number) [23]:

$2\pi r_e = \frac{\mu_0 e^2 v_o^2}{2\pi m_e v_e^2} = \frac{\mu_0 e^2 c^2}{4\pi m_e v_e^2} = \frac{e^2}{4\pi\varepsilon_0 m_e v_e^2} = n\,\frac{\lambda}{2} = n\,\frac{h}{2 m_e v_e}$    (15)

$\frac{e^2}{2\pi\varepsilon_0 v_e} = nh$    (16)
where ven is the velocity of the electron if the electron has distance don and minimum energy Eon on level n:

$v_{en} = \frac{1}{n}\,\frac{e^2}{2\pi\varepsilon_0 h} = \frac{1}{n}\,\frac{\alpha}{\pi}\,c$    (17)
For n=1, in the ground state, the electron has the maximal velocity vemax and the rotational energy Er:

$v_{e\max} = \frac{e^2}{2\pi\varepsilon_0 h} = \frac{\alpha}{\pi}\,c \approx 697\ \mathrm{km/s}$    (18)

where α is the coupling constant:

$\alpha = \frac{e^2}{2\varepsilon_0 h c}, \qquad \frac{1}{\alpha} = \frac{2\varepsilon_0 h c}{e^2} = 137.036$    (19)

$\frac{\alpha}{\pi} = \frac{v_{e\max}}{c}$    (20)
The energy Er of rotation of the electron, using (17):

$E_{rn} = \frac{1}{2}\,\frac{m_e}{N}\,N v_{en}^2 = \frac{1}{n^2}\,\frac{m_e e^4}{8\pi^2\varepsilon_0^2 h^2}$    (21)

For quantum number n=1:

$E_{io} = \frac{m_e e^4}{8\varepsilon_0^2 h^2} = 13.6\ \mathrm{eV}$    (22)
$E_r = \frac{E_{io}}{\pi^2} = \frac{13.6\ \mathrm{eV}}{\pi^2} \approx 1.36\ \mathrm{eV}$    (23)
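Equations (18), (22) and (23) can be checked numerically; the following sketch (plain Python with hard-coded CODATA-style constants, an illustrative check rather than part of the original paper) reproduces the quoted values:

```python
import math

# CODATA-style constants (SI units)
e    = 1.602176634e-19     # elementary charge [C]
m_e  = 9.1093837015e-31    # electron mass [kg]
h    = 6.62607015e-34      # Planck constant [J s]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
c    = 2.99792458e8        # speed of light [m/s]

# Eq. (22): ionization energy of hydrogen, E_io = m_e e^4 / (8 eps0^2 h^2)
E_io_eV = m_e * e**4 / (8 * eps0**2 * h**2) / e
print(E_io_eV)             # ~13.6 eV

# Eq. (23): rotational energy E_r = E_io / pi^2
E_r_eV = E_io_eV / math.pi**2
print(E_r_eV)              # ~1.38 eV, quoted as 1.36 eV in the text

# Eq. (18): maximal electron velocity v_emax = e^2 / (2 pi eps0 h) = (alpha/pi) c
v_emax = e**2 / (2 * math.pi * eps0 * h)
alpha  = e**2 / (2 * eps0 * h * c)
print(v_emax / 1e3)        # ~697 km/s
print(1 / alpha)           # ~137.036
```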
Equation (10) for

$d = d_{on} = n^2 d_o$    (24)

becomes:

$E_n = -\frac{e^2}{4\pi\varepsilon_0 d}\left(1 - \frac{n^4 d_o^2}{3d^2}\right) = -\frac{e^2}{4\pi\varepsilon_0 n^2 d_o}\cdot\frac{2}{3}$    (25)
We obtain the same size of energy if we multiply (25) by 3/4 and (21) by π²; then we can derive the distance do:

$E_{no} = -\frac{e^2}{4\pi\varepsilon_0 n^2 d_o}\cdot\frac{2}{3}\cdot\frac{3}{4} = -\frac{1}{n^2}\,\pi^2\,\frac{m_e e^4}{8\pi^2\varepsilon_0^2 h^2} = -\frac{1}{n^2}\,\frac{m_e e^4}{8\varepsilon_0^2 h^2} = -\frac{13.6}{n^2}\ \mathrm{eV}$    (26)

$d_o = \frac{\varepsilon_0 h^2}{\pi m_e e^2} = r_B$    (27)
The Bohr radius rB has the same size as the distance $d_o = 5.29 \times 10^{-11}\ \mathrm{m}$ [4] in our vortex-fractal-ring model [18], [23]. The radiation which is emitted by the hydrogen atom is produced when the electron undergoes a transition from a higher-energy stationary state (with quantum number n2) to a lower-energy state (with quantum number n1). The frequency f of the emitted photon is given by the equation:

$\Delta E = hf = \frac{hc}{\lambda} = \frac{m_e e^4}{8 h^2 \varepsilon_0^2}\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right) = E_{io}\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right)$    (28)
The line spectrum of the hydrogen atom (the Balmer series for n1=2) is shown in Fig.6:
Fig.6 The line spectrum of the hydrogen atom
Hydrogen has the simplest spectrum, which consists of a number of series of lines. The Balmer series with its wavelengths λ (in nm) is shown in Fig.6.
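The Balmer wavelengths in Fig.6 follow directly from equation (28) with n1=2; the short numerical sketch below (Python with hard-coded constants, an illustration rather than part of the original paper) reproduces them:

```python
# Balmer series from eq. (28): lambda = h*c / (E_io * (1/n1^2 - 1/n2^2))
E_io = 13.6057 * 1.602176634e-19   # hydrogen ionization energy [J]
h = 6.62607015e-34                 # Planck constant [J s]
c = 2.99792458e8                   # speed of light [m/s]

def balmer_nm(n2, n1=2):
    """Wavelength in nm of the transition n2 -> n1 (Balmer series: n1 = 2)."""
    dE = E_io * (1.0 / n1**2 - 1.0 / n2**2)
    return h * c / dE * 1e9

for n2 in range(3, 8):
    # ~656 nm (H-alpha), ~486 nm, ~434 nm, ~410 nm, ~397 nm
    print(n2, round(balmer_nm(n2), 1))
```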
Fig.7 The force F between the electron and the proton depending on their distance d
a) for quantum number n=1, see equation (7)
b) for quantum number n={1, 2, 3, 4, 5, 6, 7}, see equation (9)
Fig.8 The ionization energy E of hydrogen depending on the distance d
a) for quantum number n=1, see equation (8)
b) for quantum number n={1, 2, 3}, see equation (10)
The radius re of the electron can be calculated using equations (14) and (18):

$r_{en} = \frac{e^2}{8\pi^2\varepsilon_0 m_e v_{en}^2} = \frac{e^2}{8\pi^2\varepsilon_0 m_e}\cdot\frac{4 n^2 \pi^2 \varepsilon_0^2 h^2}{e^4} = \frac{n^2 \varepsilon_0 h^2}{2 e^2 m_e}$    (29)

$r_{en} = \frac{\pi}{2}\,n^2 d_o = \frac{\pi}{2}\,d_{on}$    (30)
and the diameter De of the electron is:

$D_{en} = 2 r_{en} = \pi n^2 d_o = \pi d_{on}$    (31)

For n=1:

$r_e = \frac{\pi}{2}\,d_o$    (32)

$D_e = \pi d_o$    (33)
To calculate distances d1, d2, and d3 we must solve the cubic equation (see Fig.8):
$E_n = -\frac{e^2}{4\pi\varepsilon_0 d}\left(1 - \frac{n^4 d_o^2}{3d^2}\right) = -\frac{e^2}{4\pi\varepsilon_0 n^2 d_o}\cdot\frac{1}{2} = -\frac{13.6}{n^2}\ \mathrm{eV}$    (34)

The cubic equation from (34) is:

$3d^3 - 6 n^2 d_o d^2 + 2 n^6 d_o^3 = 0$    (35)
If we include the rotation energy Ern from (21) of the electron in (34), we receive the equation:

$-\frac{e^2}{4\pi\varepsilon_0 d}\left(1 - \frac{n^4 d_o^2}{3d^2}\right) = -\frac{e^2}{4\pi\varepsilon_0 n^2 d_o}\cdot\frac{1}{2} - \frac{e^2}{4\pi\varepsilon_0 n^2 d_o}\cdot\frac{1}{2}\cdot\frac{1}{\pi^2}$    (36)
Fig.9 The position and size of the electron depending on the distance d and quantum number n
a) for quantum number n={1, 2, 3, 4, 5, 6, 7} and equation (35)
b) for quantum number n={1, 2, 3, 4, 5, 6, 7} and equation (37)
Fig.10 The position and size of the electron depending on the distance d and quantum number n
(3D quantum model of the hydrogen atom with different n)
a) for quantum number n={1, 2, 3, 4, 5, 6, 7} and equation (35)
b) for quantum number n={1, 2, 3, 4, 5, 6, 7} and equation (37)
It leads to the cubic equation:

$3(\pi^2 + 1)d^3 - 6\pi^2 n^2 d_o d^2 + 2\pi^2 n^6 d_o^3 = 0$    (37)
Solution of equation (35) for n=1:

d1 = 3.82124×10⁻¹¹ m, d2 = 9.48242×10⁻¹¹ m, d3 = −2.72366×10⁻¹¹ m    (38)

Solution of equation (37) for n=1:

d1 = 3.99670×10⁻¹¹ m, d2 = 8.30853×10⁻¹¹ m, d3 = −2.69858×10⁻¹¹ m    (39)
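The positive roots in (38) and (39) can be reproduced by solving the cubics (35) and (37) numerically for n=1; the sketch below (Python with a simple bisection root finder and do = 5.29e-11 m, an illustrative check rather than part of the original paper) recovers them:

```python
import math

d_o = 5.29e-11   # Bohr-radius-sized distance from eq. (27) [m]

def bisect(f, a, b, iters=100):
    """Simple bisection root finder; assumes f(a) and f(b) have opposite signs."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Eq. (35) with n = 1: 3 d^3 - 6 d_o d^2 + 2 d_o^3 = 0
f35 = lambda d: 3 * d**3 - 6 * d_o * d**2 + 2 * d_o**3
d1 = bisect(f35, 0.1 * d_o, 1.0 * d_o)    # ~3.82e-11 m
d2 = bisect(f35, 1.0 * d_o, 3.0 * d_o)    # ~9.48e-11 m

# Eq. (37) with n = 1: 3 (pi^2 + 1) d^3 - 6 pi^2 d_o d^2 + 2 pi^2 d_o^3 = 0
p2 = math.pi**2
f37 = lambda d: 3 * (p2 + 1) * d**3 - 6 * p2 * d_o * d**2 + 2 * p2 * d_o**3
d1b = bisect(f37, 0.1 * d_o, 1.2 * d_o)   # ~4.00e-11 m
d2b = bisect(f37, 1.2 * d_o, 3.0 * d_o)   # ~8.31e-11 m
print(d1, d2, d1b, d2b)
```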
Fig.11 Structure of light as a ring particle or a wave energy structure
Figure 11 explains the particle structure of the photon and the wave behavior of the light ray, which consists of many photons arranged in series (a sequence or a string of vortex pairs). A vortex pair is created from a "bath" vortex VB and a "tornado" vortex VT with a flow of energy E.
Fig.12 Vortex-fractal ring structure of the electron with 42 subelectrons
4 RING STRUCTURE OF ATOMS
Fig.13 Vortex-fractal ring structure of the alpha particle
Fig.14 Vortex structure of light (see Fig.11)
Fig.15 Vortex structures of the photon
Fig.17 The structure of the electron with subrings
Fig.19 The structure of water (nucleus of oxygen
and hydrogen is enlarged)
Fig.16 Vortex structures of the electron
Fig.18 A scanned structure of the electron [5]
Fig.20 The structure of water in more real scale
Fig.21 The structure of gold
5 CONCLUSIONS
The exact analysis of real physical problems is usually quite complicated, and any particular physical situation may be too complicated to analyze directly by solving the differential equations or wave functions. Ideas such as field lines (magnetic and electric lines) are very useful for such purposes. A physical understanding is completely nonmathematical, imprecise, and inexact, but it is absolutely necessary for a physicist [1]. It is necessary to combine imagination with calculation in an iterative process. Our approach develops the physical ideas gradually, starting with simple situations and going on to more and more complicated ones. But the subject of physics has been developed over the past 200 years by some very ingenious people, and it is not easy to add something new that is not in discrepancy with their results. The vortex model (see Fig.4 and Fig.5) of the electron was inspired by the vortex structure in the PET-bottle experiment with a one-hole connector ($3 souvenir toy, Portland, Oregon 2004) [13], our connector with 2 or 3 holes [7], [12], and the levitating magnet "levitron" (a physical toy). The "ring theory" is supported by the experiments in [5] and [6] too. Now we realize that the phenomena of chemical interaction and, ultimately, of life itself are to be understood in terms of electromagnetism, which can be explained by the vortex-ring-fractal structure in different states of self-organization inside the gravum [22].
The electron structure is a semi-fractal-ring structure with a vortex bond between rings. The proton structure is a semi-fractal-coil structure. The proton is created from electron subsubrings e-2 and positron subsubrings υ-2, which can create quarks u and d [24]. This theory can shortly be called "ring" theory, a name similar to string theory.
In the covalent bond, a pair of electrons oscillates and rotates around a common axis. There are two arrangements of hydrogen: with a left and a right side orientation of the electron in their structure. The symmetry and self-organization of real ring structures are very important.
Perhaps the decreasing width Δλ2n of the spectrum lines in Fig.6 (such as Δλ23 = λa − λb) depends on the energy Eio in (22), (28) and the kinetic energy Er in (23). This energy can vary in the interval {Ea, Eb} for {λa, λb}, and ΔEλ = Eb − Ea = Eio/(20π²) = Er/20 = 0.069 eV (for n1=2 and n2>n1). It can be caused by precession of the electron.
Acknowledgment: This work has been partially supported by the Czech Grant Agency; Grant No:
MSM21630529 and No.: 102/09/1668.
REFERENCES
[1] FEYNMAN, R. P.; LEIGHTON, R. B.; SANDS, M. The Feynman Lectures on Physics, volume I, II, III. Addison-Wesley Publishing Company, 1977.
[2] DUNCAN, T. Physics for today and tomorrow. Butler & Tanner Ltd. : London, 1978.
[3] HUGGETT, S. A.; JORDAN, D. A Topological Aperitif. Springer-Verlag, 2001.
[4] PAULING, L. General Chemistry. Dover Publications, Inc. : New York, 1988.
[5] MAURITSSON, J. Attosecond Pulse Trains Generated using Two Color Laser Fields. [online] Available from: online.itp.ucsb.edu/online/atto06/mauritsson/
[6] LIM, T. T. Fluid Mechanics Group. [online] Available from: serve.me.nus.edu.sg/limtt/
[7] OŠMERA, P. Vortex-fractal Physics. In: Proceedings of the 4th International Conference on Soft Computing ICSC2006, January 27, 2006, EPI : Kunovice, Czech Republic, p. 123-129.
[8] OŠMERA, P. Evolution of Complexity. In: LI, Z.; HALANG, W. A.; CHEN, G. Integration of Fuzzy Logic and Chaos Theory. Springer, 2006. ISBN 3-540-26899-5.
[9] OŠMERA, P. The Vortex-fractal Theory of Universe Structures. In: Proceedings of MENDEL 2006, VUT : Brno, Czech Republic, 2006. 12 p.
[10] OŠMERA, P. Electromagnetic field of Electron in Vortex-fractal Structures. In: Proceedings of MENDEL 2006, VUT : Brno, Czech Republic, 2006. 10 p.
[11] OŠMERA, P. The Vortex-fractal Theory of Universe Structures. In: Proceedings of the 4th International Conference on Soft Computing ICSC2006, January 27, 2006, EPI, s.r.o. : Kunovice, Czech Republic, p. 111-122.
[12] OŠMERA, P. Speculative Ring Structure of Universe. In: Proceedings of MENDEL 2007, Prague, Czech Republic, 2007, p. 105-110.
[13] OŠMERA, P. Vortex-ring Modelling of Complex Systems and Mendeleev's Table. In: Proceedings of World Congress on Engineering and Computer Science, San Francisco, 2007, p. 152-157.
[14] OŠMERA, P. From Quantum Foam to Vortex-ring Fractal Structures and Mendeleev's Table. In: New Trends in Physics, NTF 2007, Brno, Czech Republic, 2007, p. 179-182.
[15] OŠMERA, P. Vortex-fractal-ring Structure of Electron. In: Proceedings of the 6th International Conference on Soft Computing ICSC2008, January 25, 2008, EPI, s.r.o. : Kunovice, Czech Republic.
[16] OŠMERA, P. Evolution of nonliving Nature. In: Kognice a umělý život VIII, Prague, Czech Republic, 2008, p. 231-244.
[17] OŠMERA, P. The Vortex-fractal-Ring Structure of Electron. In: Proceedings of MENDEL 2008, Brno, Czech Republic, 2008, p. 115-120.
[18] OŠMERA, P. The Vortex-fractal Structure of Hydrogen. In: Proceedings of MENDEL 2008, Brno, Czech Republic, 2008, p. 78-85.
[19] OŠMERA, P. Vortex-fractal-ring Structure of Molecule. In: Proceedings of the 4th Meeting Chemistry and Life 2008, September 9-11, Brno, Czech Republic, 2008, ISSN 1803-2389, p. 1102-1108.
[20] OŠMERA, P. Structure of Gravitation. In: Proceedings of the 7th International Conference on Soft Computing ICSC2009, January 29, 2009, EPI, s.r.o. : Kunovice, Czech Republic, p. 145-152.
[21] OŠMERA, P.; RUKOVANSKÝ, I. Magnetic Dipole Moment of Electron. In: Journal of Electrical Engineering, No 7/s, vol. 59, 2008, Budapest, Hungary, p. 74-77.
[22] OŠMERA, P. The Vortex-fractal Structure of Hydrogen. In: Proceedings of MENDEL 2009, Brno, Czech Republic, 2009, extended version on CD.
[23] OŠMERA, P. Vortex-ring fractal Structures of Hydrogen Atom. In: Proceedings of World Congress on Engineering and Computer Science, San Francisco, 2009, p. 89-94.
[24] OŠMERA, P. Vortex-ring-fractal Structure of Atoms. In: Journal IAENG, Engineering Letters, Vol. 18, Issue 2, 2010, p. 107-118. [online] Available from: http://www.engineeringletters.com/issues_v18/issue_2/index.html
[25] OŠMERA, P. Vortex-ring-fractal Structure of Atom and Molecule. In: IAENG Transactions on Engineering Technologies, American Institute of Physics, vol. 4, New York, 2010, p. 313-327.
ADDRESS
Prof. Ing. Pavel Ošmera, CSc.
European Polytechnic Institute, Ltd. Kunovice
Osvobození 699
686 04 Kunovice
Czech Republic
[email protected]
NEURAL NETWORK MODELS FOR PREDICTION OF STOCK MARKET DATA
Jindřich Petrucha
Evropský polytechnický institute, Ltd.
Abstract: The article deals with the possibility of using neural networks for data prediction. The data represent the values of shares on the NASDAQ stock market and are used as the training set of the neural networks. In preparing the model we use multiple time series. These series are compiled into a single test set using the PHP language. For the implementation, the SNNS (Stuttgart Neural Network Simulator) simulator is used in its Java version, which allows the selection of different learning options to achieve good values of the global error.
Keywords: neural network, SNNS, prediction, time series, backpropagation with momentum, learning rate, quick propagation, multiple series, PHP program.
1. INTRODUCTION
Using neural networks for business intelligence creates a space for looking for new ways of managing investment opportunities. Finding new ways to invest is an important factor in innovation, which is supported by new tools for the decision process.
Neural networks are a part of artificial intelligence that can be used in different areas of decision-making systems. One of the areas where they are used is finance, with a focus on buying and selling shares. These shares are sold on stock markets in different parts of the world. The financial domain has the advantage that the sale and purchase of shares is highly transparent for the investor. The individual movements of shares on major stock exchanges are stored on servers such as finance.yahoo.com, reuters.com, Bloomberg and so on. From the data that are available on the Internet we can obtain the values of the shares and create time series from them.
These time series are suitable for analysis by statistical methods and also for acquiring knowledge of the movements at different times.
The time series are used as a source of data for the training sets of neural networks, which make it possible to implement decision-making processes in the prediction of the next values of the time series. This prediction is based on the data from the previous movements of the shares on the stock market. The decision process uses either a long-term or a short-term strategy of buying shares. In both strategies a neural network can help. These processes are important, and it is necessary to find appropriate tools to support them.
2. MODEL OF NEURAL NETWORK
The preparation and selection of the modeling environment may affect the resulting behavior of the model through the possibilities this environment provides.
The neural network model has a three-layer feedforward architecture with neurons connected to each other. The first, input layer of neurons is implemented with a linear activation function and serves for the input data. This layer acts as a floating window which we shift along the time series. The size of the input layer depends on experience and on the size of the training data set that represents a time series. In practice, it is appropriate to create several models that differ in the size of the input layer and gradually compare the accuracy and speed of the neural networks. For our case we choose an input layer size of one week, which represents 5 values of time series data. This value reflects the cyclical phenomena which can sometimes be seen in the buying and selling of shares. Another option is to create a further internal hidden layer in which the training process will be stored.
Figure 1. Design of the input layer of the neural network.
By searching for factors that affect the time series we can gain experience applicable to similar decision-making processes. The neural network model captures this important factor, the dependence between time series, which may support the results of the prediction. These are additional input data, different time series, which affect the basic time series. These data are stored as auxiliary values in the input layer of the neural network. This model is more complex in terms of creating models from time series, but it captures some phenomena that significantly influence the buying and selling of shares. In our case, it will be stock market index data, which show the global investor sentiment in the stock market. These data are often displayed on the servers and show the direction in which purchases progress. In the case of our data it will be the values of the NYSE Exchange and the NASDAQ Stock Exchange. For the auxiliary inputs we use the same number of neurons as for the basic time series of the purchased shares. The output of the neural network is a value representing the predicted value of the shares. The output neuron of the neural network has one of the sigmoidal activation functions; the output activation function may also be tanh.
3. IMPLEMENTATION OF THE ARCHITECTURE IN THE SIMULATOR
To implement this model in practice, we use the proven SNNS system, which allows the setting of various parameters during the learning phase of the neural network.
Another option is simulators that can be implemented directly on servers in the Internet environment. These simulators can work directly online at the time of trading and are able to create a training set by accessing the data resources of the server.
The neural network model is implemented using the SNNS (Stuttgart Neural Network Simulator) in its Java version JNNS, which runs on the client computer. This simulator allows the general design of a three-layer neural network. The first layer has 10 input neurons for the time series models. In the second, hidden layer we use 8 neurons with a sigmoid activation function. The output layer consists of one neuron with a sigmoid activation function. The simulator allows building this architecture either visually by hand or by loading a text file into the simulator. The basic scheme of the simulator is shown in figure 2.
Figure 2. Model of neural network implemented in SNNS simulator
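The 10-8-1 architecture shown in Figure 2 can be sketched as a plain feedforward pass; the following Python fragment is illustrative only (random weights stand in for the values SNNS would learn) and mirrors the layer sizes and sigmoid activations described above:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Layer sizes as in the JNNS model: 10 inputs, 8 hidden, 1 output.
N_IN, N_HID, N_OUT = 10, 8, 1

# Random weights stand in for the values SNNS would learn during training.
w_ih = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
w_ho = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def forward(pattern):
    """One forward pass: linear input layer, sigmoid hidden and output layers."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, pattern))) for row in w_ih]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_ho]
    return output[0]

# A 10-value window (two trading weeks) normalized to 0-1:
window = [0.93, 0.92, 0.91, 0.93, 0.94, 1.0, 0.94, 0.95, 0.96, 0.95]
print(forward(window))   # a value in (0, 1), thanks to the sigmoid output
```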
To apply the training patterns it is necessary to prepare the two time series: this preparation consists of normalizing the values of the time series to the interval 0-1 and connecting the data from one row to the model for the input layer of the neural network.
For the learning function we use the two best-performing functions.
The first is backpropagation with momentum. The formula for the weight change is:

∆wij(t+1) = η·oi·δj + α·∆wij(t)    (1)

where η is the learning rate parameter,
δj = f'(netj)(tj − oj) if unit j is an output unit,
δj = f'(netj) ∑k δk wjk if unit j is a hidden unit,
and α is a constant specifying the influence of the momentum, used to avoid oscillation.
As the other learning function we use quick propagation:

∆wij(t+1) = (S(t+1) / (S(t) − S(t+1))) · ∆wij(t)    (2)

where:
wij is the weight between units i and j,
∆wij(t+1) is the actual weight change,
S(t+1) is the partial derivative of the error function with respect to wij,
S(t) is the last partial derivative.
In this process, a separate program in the PHP language can be used, which assembles the individual patterns by shifting a window of the size of the input layer along the series. This is the so-called moving window, which creates the set of learning patterns. The values of the patterns are written directly into a text file for the simulator together with the other necessary learning parameters. The file of patterns for the SNNS simulator is shown below.
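The momentum update rule (1) above can be illustrated on a toy one-weight problem; the sketch below (Python, not from the paper, with made-up learning parameters) shows how the α term smooths successive weight changes:

```python
# Gradient descent with momentum on the toy error E(w) = (w - 2)^2,
# mirroring rule (1): dw(t+1) = -eta * dE/dw + alpha * dw(t)
eta, alpha = 0.1, 0.9    # learning rate and momentum (illustrative values)
w, dw = 0.0, 0.0

for _ in range(500):
    grad = 2.0 * (w - 2.0)          # dE/dw for the quadratic error
    dw = -eta * grad + alpha * dw   # momentum-smoothed weight change
    w += dw

print(w)   # converges towards the minimum at w = 2
```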
SNNS pattern definition file V3.2
generated at Mon Apr 25 15:58:23 1994
No. of patterns : 122
No. of input units : 10
No. of output units : 1
# Input pattern 1:
0.933747412008 0.919254658385 0.906832298137 0.931677018634 0.939958592133 1 0.935328960578
0.952245498963 0.958854828994 0.952078174152
# Output pattern 1:
0.952380952381
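The moving-window preparation that produces such patterns (done in PHP by the author) can be sketched in Python; the window size, min-max normalization and toy price data below are illustrative assumptions:

```python
def normalize(series):
    """Min-max scale a series to the interval 0-1, as the sigmoid output requires."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

def moving_window_patterns(series, window=5):
    """Each pattern: `window` consecutive values as input, the next value as target."""
    data = normalize(series)
    patterns = []
    for i in range(len(data) - window):
        patterns.append((data[i:i + window], data[i + window]))
    return patterns

# Toy closing prices standing in for NASDAQ data:
prices = [10.0, 10.2, 10.1, 10.4, 10.5, 10.3, 10.6, 10.8, 10.7, 11.0]
pats = moving_window_patterns(prices, window=5)
print(len(pats))           # 5 patterns from 10 values with a 5-value window
for inp, out in pats[:1]:
    print(inp, "->", out)
```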
4. SUMMARY OF RESULTS
The implemented model using the JNNS simulator allows the realization of the neural network training process on the whole set of training patterns. The output of the neural network prediction is the value for the next period; the values of the global error are given in the following table 1.

Learning function                  Global error MSE after 5000 cycles
backpropagation with momentum      0.19
quick propagation                  0.38
quickprop through time             0.58
backpropagation through time       0.80

Table 1. Learning function and MSE error.
The results demonstrate the global error achieved in this range. In the case of a separate series, good results can be achieved by training the neural network, but in practice such a model does not capture the real state of market movements.
5. CONCLUSION
Building proper learning sets for a neural network is the key to predicting correct values. Searching for the effects on movements of the time series is a complex activity, because clear links are not evident. It is necessary to compare the predictions with the actual situation and to continually complement the training set to obtain actual results. Using a training set with additional time series gives a good opportunity to explore new ways of decision-making processes.
ADDRESS
Ing. Jindřich Petrucha. Ph.D.
Evropský polytechnický institute, Ltd.
Osvobození 699
686 04 Kunovice
Tel.: +420572549018
e-mail [email protected]
OMNI-WHEEL ROBOT NAVIGATION, LOCATION, PATH PLANNING AND
OBSTACLE AVOIDANCE WITH ULTRASONIC SENSOR AND OMNI-DIRECTIONAL
VISION SYSTEM
Mourad Karakhalil1, Imrich Rukovanský2
1 Brno University of Technology
2 European Polytechnic Institute, Ltd. Kunovice
Abstract: In this paper, we present a multi-sensor cooperation paradigm between an omni-directional vision system and a panoramic range finder system, together with obstacle avoidance based on ultrasonic sensors. A nonlinear controller design for an omni-directional mobile robot is presented. The robot controller consists of an outer-loop (kinematics) controller and an inner-loop (dynamics) controller, which are both designed using the Trajectory Linearization Control method based on a nonlinear robot dynamic model. The Trajectory Linearization controller design combines a nonlinear dynamic inversion and a linear time-varying regulator in a novel way, thereby achieving robust stability and performance along the trajectory without interpolating controller gains. Obstacle avoidance methods based on ultrasonic sensors must account for the sensors' shortcomings, such as inaccuracies, crosstalk, and spurious readings. Our work combines sensors around the robot; the vision system data is employed to provide accurate and reliable robot position and orientation measurements, thereby reducing the wheel-slippage-induced tracking error.
Keywords: omni-directional vision system; obstacle avoidance; ultrasonic sensor; omni-directional mobile robot
1. INTRODUCTION
In this paper we consider two Omni-directional mobile robots that move and avoid obstacles without colliding with each other, together with methods for obstacle avoidance and motion path planning, i.e. finding a continuous path from an initial position to a prescribed final position (goal) without collision.
When a robot formation is forming or changing, the moving robots may collide with each other. Therefore, the robot formation must provide the robots with the capability of obstacle avoidance. Obstacle avoidance includes three important issues:
a) Is obstacle avoidance needed?
b) When should the avoidance action begin?
c) Where should the robot go in order to avoid the obstacle?
The first issue concerns the geometric relation between the robot and the obstacles. The second issue is related to the dynamics of the robot and is determined by the distance between the robot and the obstacles. The third issue is related to the size of the robot, for which a suitable candidate point is selected.
There are many popular obstacle avoidance methods: edge detection, certainty grids, and obstacle avoidance methods based on ultrasonic sensors.
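As a toy illustration of the three issues above (the distance threshold and the quarter-turn escape rule are illustrative assumptions, not taken from the paper), a ring of ultrasonic readings can be reduced to an avoidance decision as follows:

```python
import math

REACT_DIST = 1.0   # m, assumed distance at which avoidance must begin

def avoidance_decision(sonar_readings):
    """sonar_readings: (bearing_rad, distance_m) pairs from the ring of
    ultrasonic sensors. Answers the three questions of the text:
    (a) is avoidance needed, (b) should it begin now, (c) where to go."""
    obstacles = [(b, d) for b, d in sonar_readings if d != float("inf")]
    if not obstacles:                      # (a): nothing in sensor range
        return False, False, None
    bearing, dist = min(obstacles, key=lambda o: o[1])
    begin = dist < REACT_DIST              # (b): determined by distance
    # (c): a candidate point a quarter turn away from the obstacle side;
    # a real robot would also check that the gap fits its own size
    side = -1.0 if bearing > 0 else 1.0
    escape = bearing + side * math.pi / 2
    return True, begin, escape

print(avoidance_decision([(0.0, 0.8), (1.2, 2.5)]))   # (True, True, 1.57...)
```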
2. DESCRIPTION
Three Omni wheels: An Omni-directional mobile robot is a type of holonomic robot. It has the ability to move simultaneously and independently in translation and rotation. The inherent agility of the Omni-directional mobile robot makes it widely studied for dynamic environmental applications. This experimental robot uses three Omni-directional wheels powered by hacked servos.
Figure 1 – Two examples of Omni-directional wheeled robots; the Rovio mobile robot [10]
3. EXPLANATIONS
Top view of the robot: a schematic drawing of the different movement possibilities and the corresponding speed vectors of an autonomous robot with an Omni-directional drive based on three Omni-wheels.
Figure 2 - Robot Omni-directional movement speed vectors [9]
Speed vectors (colour information):
1. robot - white
2. driven wheel speed - green
3. compensated speed - orange
4. overall wheel speed - blue
5. center of rotation
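The wheel-speed decomposition sketched in Figure 2 can be written as a standard inverse-kinematics computation for a three-omni-wheel platform; the wheel mounting angles and body radius below are assumed values, not taken from the paper:

```python
import math

def wheel_speeds(vx, vy, omega, R=0.15, angles=(90, 210, 330)):
    """Inverse kinematics sketch for a robot on three omni wheels mounted
    at the given angles (degrees) on a body of radius R (m). Returns the
    rim speed each wheel must be driven at so the body moves with
    translational velocity (vx, vy) and angular velocity omega."""
    return [
        -math.sin(math.radians(a)) * vx
        + math.cos(math.radians(a)) * vy
        + R * omega
        for a in angles
    ]

# Pure rotation: every wheel runs at the same speed R * omega = 0.3.
print(wheel_speeds(0.0, 0.0, 2.0))
# Pure translation along x: the three wheel speeds sum to zero.
print(wheel_speeds(1.0, 0.0, 0.0))
```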
4. LOCAL PATH PLANNING
Borenstein and Koren [1989] developed one of the earliest real-time obstacle avoidance methods, called the virtual force field (VFF) method. VFF was the first algorithm that offered smooth, high-speed trajectories without requiring the robot to stop; it worked in real time with actual sensory data, allowing a mobile robot to traverse an obstacle course at average speeds of 0.4-0.6 m/s.
The Vector Field Histogram (VFH) is an obstacle avoidance method that combines the histogram-grid world model with the concept of potential fields, or force fields. The original approach was called the Virtual Force Field (VFF). As shown in Figure 3, each cell in the field of vision of the robot (commonly sensed via sonar) applies a virtual force to the robot.
Figure 3 - Vector Force Field (VFF) algorithm performing a data sweep in real-time [1]
Cells which are not associated with an obstacle or the target have a force of zero. The sum of the forces R (Ftarget - Frepulsive) causes a change in the direction and speed of the robot in order to move smoothly around obstacles and towards the target. Although VFF was revolutionary at the time of its proposal, it suffered from several problems, including operation in narrow hallways (as shown in Figure 4). The forces applied by either side of the hall would cause an unstable oscillatory motion which resulted in collision. The algorithm also behaved undesirably in other situations, such as those where two obstacles were very close together and directly in front of the goal.
Figure 4 – Unstable oscillatory motion of the robot using VFF in a narrow hallway
The shortcomings of the VFF algorithm led to its optimization as the VFH algorithm. This optimization involved the addition of a one-dimensional polar histogram to the existing two-dimensional Cartesian histogram grid (as shown in Figure 5). [1]
Figure 5 – VFH algorithm utilizing a one-dimensional polar histogram [1]
This polar histogram creates a probability distribution for each sector (of angular width α) based on the density
of obstacles and several other factors. This normalization fixes the majority of the problems observed in the VFF
algorithm. Vector forces are no longer applied in a single line of action; instead, numerous blobs of varying
strengths push/pull the robot towards a general direction. Additionally, a reduction in the amount of data leads to
an increase in efficiency in comparison to the VFF algorithm.
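The polar-histogram step described above can be sketched as follows; the certainty-grid contents, the sector width, and the distance-weighting constants a and b are illustrative choices, not the values from [1]:

```python
import math

def polar_histogram(cells, alpha_deg=5):
    """Sketch of the VFH polar-histogram construction: certainty-grid
    cells (x, y, certainty), with the robot at the origin, are binned
    into sectors of angular width alpha_deg; each cell contributes an
    obstacle density that grows with its certainty and falls off
    linearly with distance (constants a, b are illustrative)."""
    a, b = 1.0, 0.25
    n = 360 // alpha_deg
    hist = [0.0] * n
    for x, y, certainty in cells:
        sector = int(math.degrees(math.atan2(y, x)) % 360) // alpha_deg
        d = math.hypot(x, y)
        hist[sector] += certainty ** 2 * max(a - b * d, 0.0)
    return hist

# Two nearby cells ahead of the robot and one behind it:
h = polar_histogram([(1.0, 0.1, 3), (1.1, 0.0, 2), (-2.0, 0.0, 1)])
print(h[0], h[1], h[36])   # density ahead dominates the sector behind
```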
Although the wavefront and VFH algorithms each have the capability to reach the goal from the start position individually, this does not necessarily guarantee path optimality: VFH guarantees local but not global path optimality, while wavefront does not perform real-time obstacle avoidance. This design therefore involves the implementation of a wrapper program in the central control system which systematically assigns goal positions to the wavefront driver in order to cover the given area in its entirety. The wavefront driver finds the optimal path from the robot's current position to the given goal. It then forwards smaller goal positions along the optimal path to the VFH driver in a sequential manner. VFH in turn performs real-time obstacle avoidance and drives the robot to the goal positions supplied by wavefront (as shown in Figure 6).
Figure 6 – Example of the proposed system in action [1]
The combination of the wavefront and VFH algorithms offers a highly optimized hybrid methodology which provides efficient and rapid navigation of complex environments while smoothly avoiding obstacles, as well as guaranteeing local and global path optimality. This solution should meet, and in fact exceed, all required navigation objectives mentioned in the previous section.
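The wavefront planner referred to above can be sketched as a breadth-first flood fill from the goal over an occupancy grid; descending the resulting labels from any start cell then yields a shortest 4-connected path:

```python
from collections import deque

def wavefront(grid, goal):
    """Wavefront planner sketch on an occupancy grid (0 = free,
    1 = obstacle): breadth-first flood from the goal labels every
    reachable free cell with its step distance to the goal."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
d = wavefront(grid, (2, 0))
print(d[0][0])   # 6: the path must go around the obstacle row
```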
5. OMNI-DIRECTIONAL VISION SYSTEM
Omni-directional cameras provide a 360-degree view of the robot's environment. Figure 7 presents an Omni-directional camera. The sensor is composed of a camera pointed upwards at the vertex of a spherical mirror. The optical axis of the camera and the optical axis of the mirror are aligned. The spherical shape of the mirror causes the resolution to depend on the distance between the camera and the observed region. The interdependence between the distance to the obstacle and the corresponding distance on the image (in pixels) is depicted in the picture.
Figure 7- Omni-directional vision system prototype with hyperbolic mirror [4]
The resolution of an image is maximal near the robot, so it is possible to localize the nearest obstacles precisely. If the distance value is in the range between 0 and 50 cm, a resolution of a few millimeters is achieved. The main
advantage of the Omni-directional sensor is that it simplifies the interpretation of data. Suppose that the robot took a picture while standing in some place. Then the robot changed its orientation by α degrees and took a new picture. Obviously, the new image will look like the old one rotated by α degrees.
6. GLOBAL NAVIGATION
The navigator takes as input start and goal configurations of the mobile robot, and the known environment
information. It calls on a path planner to plan a path, and then determines a trajectory for the mobile robot. In the
best case, the trajectory output will have the goal configuration as its intended terminus. However, due to
imprecise control and odometer error, the mobile robot cannot precisely follow the planned trajectory. Thus,
several iterations of this process might be required before the robot attains the goal configuration. Before each
iteration, a localization operation will be performed to precisely determine the robot's current configuration,
which will be used as the new start configuration by the path planner.
In many cases the path output by the path planner will be modified to facilitate the subsequent localization step
and also to ensure the robot follows a valid path (e.g., to ensure that no collisions will occur due to imprecise
control or odometer error). In particular, some upper bound on the positioning error the robot will accumulate as it travels will be known based on the specifications of the mobile robot. These bounds will be used to determine a region that is expected to contain the robot as it travels; these regions are often called uncertainty regions (ellipses in our case).
To ensure that no collision will occur, the navigator must plan a trajectory in which the uncertainty region does
not intersect any obstacles. In addition, if the localization algorithm used has any special requirements (as does
the method we employ), then the navigator must also be sure that the uncertainty regions on the trajectory could
not place the robot in a situation where localization is not possible.
Thus, moving the mobile robot from a given start to a goal configuration is an iterative process that is governed
by a high-level navigator which synthesizes the current trajectory to be given to the robot from information
provided by path planning, localization, and uncertainty/error calculation modules. A high-level pseudo-code
description of the process is shown in Figure 8.
Navigator(start, goal)
1. while goal is not reached {
2. find path from start to goal
3. compute uncertainty regions along path
4. determine subgoal ('safe' prefix of path)
5. compute trajectory from start to subgoal
6. drive robot to subgoal and stop
7. robot senses environment
8. localize robot using sensor input
9. set start to current configuration
10. }
Figure 8: Pseudo-code for Global Navigator
The main intelligence required by the navigator is in Step 4, the determination of an appropriate subgoal along
the path from the current position to the desired final goal. This requires an understanding of the capabilities and
requirements of the localization method. For example, if there are certain situations in which localization would
not be possible, then the global navigator must ensure that they do not occur.
In addition, the navigator is responsible for performing any global optimization of the selected route. For
example, if sensing and localization is a time consuming operation, and speed is a concern, then the navigator
must consider both the length of the path and the number of localization operations required when selecting a
route. In this case, it will aid the navigator's task if the path planning module is capable of providing multiple
paths from the start to the goal. For this reason, we favor roadmap path planning algorithms, which encode
multiple representative feasible paths.
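The loop of Figure 8 can be made runnable with the path-planning, uncertainty-estimation, localization and drive modules stubbed out; all signatures here are hypothetical, not the paper's actual interfaces:

```python
def navigate(start, goal, plan_path, uncertainty, localize, drive):
    """Runnable sketch of the navigator loop in Figure 8. plan_path,
    uncertainty, localize and drive stand in for the path-planning,
    error-estimation, localization and control modules."""
    pose = start
    while pose != goal:                               # step 1
        path = plan_path(pose, goal)                  # step 2
        regions = uncertainty(path)                   # step 3
        # step 4: subgoal = end of the longest 'safe' prefix of the
        # path, i.e. where the uncertainty region stays acceptable
        safe = [p for p, r in zip(path, regions) if r <= 1.0]
        subgoal = safe[-1] if safe else path[0]
        drive(pose, subgoal)                          # steps 5-6
        pose = localize(subgoal)                      # steps 7-9
    return pose

# Toy 1-D world: uncertainty grows by 0.3 per step, so each iteration
# advances at most 3 cells before re-localizing.
visited = []
final = navigate(
    0, 7,
    plan_path=lambda s, g: list(range(s + 1, g + 1)),
    uncertainty=lambda path: [0.3 * (i + 1) for i in range(len(path))],
    localize=lambda p: (visited.append(p), p)[1],
    drive=lambda a, b: None,
)
print(final, visited)   # 7 [3, 6, 7]
```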
7. LOCALIZATION
Many different localization methods have been proposed. We have selected a method that provides fast
localization using only range sensor data (distance measurements) which is based on simple geometric properties
of the environment. During preprocessing, the workspace is partitioned into sectors using simple visibility
computations, and a small identifying label is computed for each sector.
The localizer analyzes the range sensor readings and extracts characteristic points, which are compared with the
pre-computed sector labels to localize the robot, first to a sector, and then to a particular configuration within
that sector. This two step process is computationally very simple, and allows precise localization at any place in
the workspace without any landmarks (beacons).
Another advantage of this localization method is that it provides opportunities for the global navigation
procedure to analyze and select trajectories in terms of their localization requirements.
8. CONCLUSION AND FUTURE WORK
This paper presents the hardware and software design of an Omni-directional robot. The kinematic analysis and
control methods are performed and integrated into the navigation process.
Techniques employed in the robot sensing, mapping, obstacle-avoidance and path-planning processes are also
discussed. However, the world model relies crucially on the alignment of the robot with its map. Drift and slippage impose limits on its ability to estimate the location of the robot within its global map. So far the world model obtained is not very accurate. The high-level global mapping is still under development, and a landmark method is being incorporated into the localization process. To increase the accuracy of the localization system, it is recommended that the initial training be performed at intervals of 0.5 m.
In this work we have combined navigation along different structured environments with keeping the robot away from obstacles. These behaviors rely only on Omni-directional vision. This is an advantage, as a catadioptric sensor can easily be installed on any mobile robot. Any Omni-directional camera can be used without the need for calibration. Also, it has a very low computational cost.
The disadvantage of vision-based approaches is that vision is not as reliable as range sensors such as sonar or laser. Future work is to improve the sonar-based navigation algorithm and combine it with the entropy-based direction estimator via fuzzy logic. It is also necessary to implement a mechanism for getting out of local minima.
9. REFERENCES
[1] BORENSTEIN, J.; KOREN, Y. The Vector Field Histogram - Fast Obstacle Avoidance for Mobile Robots. IEEE Journal of Robotics and Automation, Vol. 7, No. 3, pp. 278-288, June 1991.
[2] HOLT, B.; BORENSTEIN, J.; KOREN, Y.; WEHE, D. K. OmniNav: Obstacle Avoidance for Large, Non-circular, Omni-directional Mobile Robots. In: Robotics and Manufacturing, Vol. 6 (ISRAM 1996), Montpellier, France, May 27-30, pp. 311-317.
[3] TAE-SEOK, J.; JANG-MYUNG, L.; BING LAM, L.; TSO, S. K. A Study on Multi-sensor Fusion for Mobile Robot Navigation in an Indoor Environment. In: 8th IEEE Conference on Mechatronics and Machine Vision in Practice, Hong Kong, 2001.
[4] GRASSI, V. JR.; OKAMOTO, J. JR. Development of an Omni-directional vision system. Journal of the Brazilian Society of Mechanical Sciences and Engineering. [online] Available from: http://www.scielo.br/scielo.php?pid=S1678-58782006000100007&script=sci_arttext
[5] Omnidirectional camera. Wikipedia.org. [online] [2011-01-15] Available from: http://en.wikipedia.org/wiki/Omnidirectional_camera
[6] Motion planning. Wikipedia.org. [online] [2011-01-15] Available from: http://en.wikipedia.org/wiki/Path_planning
[7] LI, G.; CHANGCHANG, W. Omnidirectional Camera Image Roaming. University of North Carolina at Chapel Hill. [online] [2011-01-15] Available from: http://www.cs.unc.edu/~lguan/COMP790.files/790final.htm
[8] Omnidirectional Robot Vision. IEEE ICRA 2010. VUT : Brno, 2010. [online] [2011-01-15] Available from: http://www.dei.unipd.it/~emg/omniRoboVis2010/OmniRoboVis2010/Home.html
[9] Robot omnidirectional movement speed vectors. Wikipedia.org. [online] [2011-01-15] Available from: http://commons.wikimedia.org/wiki/File:Robot_omnidirectional_movement_speedvectors.PNG
[10] Rovio Wi-Fi Roaming Bot: Interact Even You Are Thousand Miles Away! The Cool Gadgets. [online] Available from: http://thecoolgadgets.com/rovio-wi-fi-roaming-bot-interact-even-you-are-thousand-miles-away/
ADDRESS:
Mourad KARAKHALIL
Institute of Automation and Computer Science
Faculty of Mechanical Engineering
Brno University of Technology
Technicka 2896/2
61600 Brno
Czech Republic
e-mail: [email protected]
Prof. Ing. Imrich RUKOVANSKY, CSc.
European Polytechnic Institute Kunovice
Osvobození 699, 686 04 Kunovice
Czech Republic
[email protected]
ASYMPTOTIC PROPERTIES OF DELAYED SINE AND COSINE
Zdeněk Svoboda
Brno University of Technology
Abstract: The step-by-step method is a basic concept for the investigation of differential equations with delay. The notion of the delayed matrix exponential is useful for the application of this method to linear systems of first order with a constant matrix and constant delay. Analogous results are obtained for linear systems of second order, for which the notions of the delayed sine and cosine are defined. The definitions of these notions are given piecewise on intervals, and this fact complicates their examination at infinity. This contribution deals with the asymptotic properties of these functions. In special cases, functions with the same asymptotic properties are found.
Key words: Delayed differential equation, characteristic equation
1. THE SYSTEM OF THE FIRST ORDER
The investigation of the structure of solutions of linear systems of differential equations of the second order with constant delay and a constant matrix is based on the concepts of the delayed matrix cosine and delayed matrix sine, which are defined in [9], [8]. Analogous results for systems of linear differential equations of the first order with constant delay and a constant matrix are derived via the delayed matrix exponential; for more details see [10], [2], and for difference systems see [7], [5], [6] too. These results are obtained by the step-by-step method, and due to this the definitions of the delayed matrix cosine and sine, resp. the delayed matrix exponential, are given with respect to intervals.
Let a square real constant matrix $A$ and a positive constant delay $\tau > 0$ be given. Then the delayed exponential of a matrix is defined as follows:

$$e_\tau^{At} = \begin{cases} \Theta, & t < -\tau,\\ I, & -\tau \le t < 0,\\ I + A\dfrac{t}{1!} + A^2\dfrac{(t-\tau)^2}{2!} + \dots + A^k\dfrac{\bigl(t-(k-1)\tau\bigr)^k}{k!}, & (k-1)\tau \le t < k\tau. \end{cases} \qquad (1)$$
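In the scalar case, definition (1) can be evaluated directly. The following sketch (with an arbitrarily chosen $a$, $\tau$ and check point) also verifies numerically that the resulting function satisfies the delay equation $\dot y(t) = a\,y(t-\tau)$:

```python
import math

def delayed_exp(a, tau, t):
    """Scalar delayed exponential e_tau^{a t} following definition (1):
    0 for t < -tau, 1 on [-tau, 0), and on [(k-1)tau, k*tau) the
    partial sum  sum_{j=0}^{k} a^j (t-(j-1)tau)^j / j!."""
    if t < -tau:
        return 0.0
    if t < 0:
        return 1.0
    k = int(t // tau) + 1          # so that t lies in [(k-1)tau, k*tau)
    return sum(a**j * (t - (j - 1) * tau)**j / math.factorial(j)
               for j in range(k + 1))

# Check y'(t) = a * y(t - tau) by a central difference away from the knots.
a, tau, t, h = 0.7, 1.0, 2.5, 1e-6
lhs = (delayed_exp(a, tau, t + h) - delayed_exp(a, tau, t - h)) / (2 * h)
rhs = a * delayed_exp(a, tau, t - tau)
print(abs(lhs - rhs) < 1e-6)   # True
```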
This matrix, together with the initial condition $e_\tau^{At} = I$ for $-\tau \le t \le 0$, is the solution of the equation

$$\dot y(t) = A\,y(t-\tau). \qquad (2)$$

Moreover, for permutable matrices $A$, $B$, i.e. $AB = BA$, it is possible to describe the solution of the initial value problem for the equation $\dot y(t) = A\,y(t-\tau) + B\,y(t)$, $y(t) = \varphi(t)$ for $-\tau \le t \le 0$, as follows:

$$y(t) = e^{B(t+\tau)}\,e_\tau^{A_1 t}\,\varphi(-\tau) + \int_{-\tau}^{0} e^{B(t-s)}\,e_\tau^{A_1(t-\tau-s)}\,\bigl[\varphi'(s) - B\varphi(s)\bigr]\,ds, \quad \text{where } A_1 = e^{-B\tau}A.$$

The structure of solutions is studied in [10] also for the equation

$$\dot x(t) = A\,x(t) + B\,x(t-\tau) + f(t)$$
and can be expressed via the notion of the delayed matrix exponential; for more details see [10]. For a special matrix $A$ there exists a constant matrix $C$ such that the exponential $e^{Ct}$ has asymptotic properties similar to those of the matrix $e_\tau^{At}$. To be more specific, there exist the two limits

$$\lim_{t\to\infty}\bigl(e^{Ct} - e_\tau^{At}\bigr) = 0, \qquad \lim_{t\to\infty}\frac{d}{dt}\bigl(e^{Ct} - e_\tau^{At}\bigr) = 0.$$

This means that for $t = n\tau$ we obtain the relation $\lim_{n\to\infty} e_\tau^{An\tau} = \lim_{n\to\infty} e^{Cn\tau}$. If moreover there exists the limit

$$\lim_{n\to\infty} e_\tau^{A(n+1)\tau}\bigl(e_\tau^{An\tau}\bigr)^{-1} = e^{C\tau} \qquad (3)$$

and the constant matrix $C$ has at least one eigenvalue with positive real part, then the exponential of the matrix $C$, i.e. $e^{Ct}$, is a solution of the equation (2) and the matrix $C$ is a solution of the matrix equation

$$C = A\,e^{-C\tau}. \qquad (4)$$
For the derivation of the matrix $C$ from the relation (3), a suitable notation is

$$e_\tau^{An\tau} = \sum_{k=0}^{n} \frac{\bigl((n+1-k)\tau\bigr)^k}{k!}\,A^k =: \mathcal{A}_n(A\tau),$$

where the $\mathcal{A}_n$ are polynomials with respect to the matrix $A$ and the delay $\tau$. Let the Jordan canonical form of the matrix $A$ be diagonal, and moreover let the eigenvalues $\lambda_j$ of $A$ satisfy the inequality $e\,|\lambda_j\tau| < 1$; then there is a constant matrix $C$ describing the limit

$$e^{-C\tau} = \lim_{n\to\infty} e_\tau^{An\tau}\bigl(e_\tau^{A(n+1)\tau}\bigr)^{-1} = \lim_{n\to\infty} P^{-1}\,\mathrm{diag}\Bigl(\dots, \frac{\mathcal{A}_n(\lambda_j\tau)}{\mathcal{A}_{n+1}(\lambda_j\tau)}, \dots\Bigr)\,P = P^{-1}\,\mathrm{diag}\bigl(\dots, e^{-W_0(\lambda_j\tau)}, \dots\bigr)\,P,$$

i.e. $C = P^{-1}\,\mathrm{diag}\bigl(\dots, W_0(\lambda_j\tau)/\tau, \dots\bigr)\,P$, where $W_0(x)$ is the principal branch of the well-known Lambert W function; for more details see [12], [13].
Remark 1. The Lambert function is a very useful tool for the description of the set of solutions of the characteristic equation (4) in the scalar case, i.e. $\lambda = a\,e^{-\lambda\tau}$, which is equivalent to the equation $\lambda\tau\,e^{\lambda\tau} = a\tau$. The Lambert function, named after Johann Heinrich Lambert, see [11], is the inverse function of $f(w) = w\,e^{w}$. The function satisfying $z = W(z)\,e^{W(z)}$ is multivalued (except at 0). For real arguments $x$ ($x \ge -\frac{1}{e}$) and real $w$ ($w \ge -1$), the equation above defines a single-valued function $W_0(x)$. The Taylor series of $W_0$ around 0 is given by

$$W_0(x) = \sum_{n=1}^{\infty} \frac{(-n)^{n-1}}{n!}\,x^n = x - x^2 + \frac{3}{2}x^3 - \frac{8}{3}x^4 + \frac{125}{24}x^5 - \dots \qquad (5)$$

which has radius of convergence $\frac{1}{e}$. The Lambert W function cannot be expressed in terms of elementary functions.
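Numerically, the principal branch $W_0$ is available in scientific libraries (e.g. scipy.special.lambertw); for real arguments a short Newton iteration on $w\,e^w = x$ suffices. The sketch below is a generic numerical method, not taken from the paper:

```python
import math

def lambert_w0(x, tol=1e-12):
    """Principal branch W0 for real x >= -1/e, computed by Newton's
    method on f(w) = w*exp(w) - x (a standard numeric approach)."""
    w = 0.0 if x < 1 else math.log(x)   # rough starting guess
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w0(1.0)                 # the omega constant, ~0.567143
print(round(w, 6), round(w * math.exp(w), 6))
```

For small arguments the result matches the leading terms of the Taylor series (5), e.g. W0(0.01) is approximately 0.01 - 0.01^2 + (3/2)*0.01^3.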
 z 2 follows that for any constant  z  and any couple of
2
2
values uk  ivk  Wk ( z ) , ul  ivl  Wl ( z ) the implication vk  vl  uk  ul holds and so, the inequality
From the fact u  iv  W ( z )  (u  v )e
2
2
2u
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
184
eWk ( z )  eWl ( z ) holds, too. eW0 ( z ) is the greatest real part of all values W ( z ) . If the matrix A
has the Jordan canonical form diagonal with eigenvalues  j satisfying the inequality e j  1 , then the
Ct
function e , which matrix C is defined by (3), bounds exponentials e
see [4].
Ct
of other matrices C ,For more details
2. THE SYSTEM OF THE SECOND ORDER
Analogous results for the system of second order in the form

$$\ddot x(t) + B^2 x(t-\tau) = 0 \qquad (6)$$

are based on the definition of the notions of the delayed matrix sine and cosine:

$$\mathrm{Cos}_\tau\,Bt = \begin{cases} \Theta, & t < -\tau,\\ I, & -\tau \le t < 0,\\ I - B^2\dfrac{t^2}{2!}, & 0 \le t < \tau,\\ \;\vdots\\ I - B^2\dfrac{t^2}{2!} + \dots + (-1)^k B^{2k}\dfrac{\bigl[t-(k-1)\tau\bigr]^{2k}}{(2k)!}, & (k-1)\tau \le t < k\tau, \end{cases}$$

$$\mathrm{Sin}_\tau\,Bt = \begin{cases} \Theta, & t < -\tau,\\ B(t+\tau), & -\tau \le t < 0,\\ B(t+\tau) - B^3\dfrac{t^3}{3!}, & 0 \le t < \tau,\\ \;\vdots\\ B(t+\tau) - B^3\dfrac{t^3}{3!} + \dots + (-1)^k B^{2k+1}\dfrac{\bigl[t-(k-1)\tau\bigr]^{2k+1}}{(2k+1)!}, & (k-1)\tau \le t < k\tau. \end{cases}$$
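For a scalar $b$, the piecewise definitions can be evaluated directly; the sketch below (parameters chosen arbitrarily) also checks numerically that the delayed cosine satisfies equation (6), i.e. $\ddot x(t) = -b^2\,x(t-\tau)$:

```python
import math

def delayed_cos(b, tau, t):
    """Scalar delayed cosine: 0 for t < -tau, 1 on [-tau, 0), and on
    [(k-1)tau, k*tau) the sum of (-1)^j b^{2j} (t-(j-1)tau)^{2j}/(2j)!."""
    if t < -tau:
        return 0.0
    if t < 0:
        return 1.0
    k = int(t // tau) + 1
    return sum((-1)**j * b**(2*j) * (t - (j-1)*tau)**(2*j)
               / math.factorial(2*j) for j in range(k + 1))

def delayed_sin(b, tau, t):
    """Scalar delayed sine: 0 for t < -tau, b(t+tau) on [-tau, 0), and
    sums of (-1)^j b^{2j+1} (t-(j-1)tau)^{2j+1}/(2j+1)! afterwards."""
    if t < -tau:
        return 0.0
    if t < 0:
        return b * (t + tau)
    k = int(t // tau) + 1
    return sum((-1)**j * b**(2*j+1) * (t - (j-1)*tau)**(2*j+1)
               / math.factorial(2*j+1) for j in range(k + 1))

# x(t) = delayed_cos(b, tau, t) should satisfy x''(t) = -b^2 x(t - tau):
b, tau, t, h = 0.8, 1.0, 2.5, 1e-4
x2 = (delayed_cos(b, tau, t + h) - 2 * delayed_cos(b, tau, t)
      + delayed_cos(b, tau, t - h)) / h**2
print(abs(x2 + b**2 * delayed_cos(b, tau, t - tau)) < 1e-5)   # True
```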
These notions enable us to specify the solution of (6) satisfying the initial conditions $x(t) = \varphi(t)$, $\dot x(t) = \varphi'(t)$ for $-\tau \le t \le 0$ in the form

$$x(t) = (\mathrm{Cos}_\tau\,Bt)\,\varphi(-\tau) + B^{-1}\Bigl[(\mathrm{Sin}_\tau\,Bt)\,\varphi'(-\tau) + \int_{-\tau}^{0} \mathrm{Sin}_\tau\,B(t-\tau-s)\,\varphi''(s)\,ds\Bigr].$$
3. RESULTS
In the scalar case it is possible to show a relation between the delayed exponential and the delayed cosine and sine. This connection enables us to describe the asymptotic properties of the delayed cosine and delayed sine via the delayed exponential. Let $C$ be a constant such that the exponential $e^{Ct}$ is a solution of the equation (6); then the constant $C$ satisfies the characteristic equation

$$C^2 + B^2 e^{-C\tau} = 0,$$

which can be obtained as the product of the following couple of equivalent equations

$$C - iB\,e^{-C\tau/2} = 0, \qquad C + iB\,e^{-C\tau/2} = 0.$$
This fact evokes the next lemma describing the relation between the delayed exponential of a matrix and the delayed matrix cosine and delayed matrix sine.

Lemma 1. For any square matrix $B$ it holds that

$$\mathrm{Cos}_\tau\,B\Bigl(t - \frac{\tau}{2}\Bigr) = \frac{e_{\tau/2}^{iBt} + e_{\tau/2}^{-iBt}}{2}, \qquad \mathrm{Sin}_\tau\,B(t - \tau) = \frac{e_{\tau/2}^{iBt} - e_{\tau/2}^{-iBt}}{2i}. \qquad (7)$$

Proof: The proof immediately follows from the comparison of the definition of the delayed exponential of a matrix with the definitions of the delayed matrix cosine and delayed matrix sine. These relations may be read as a modification of the well-known Euler identity in the form

$$e_{\tau/2}^{iBt} = \mathrm{Cos}_\tau\,B\Bigl(t - \frac{\tau}{2}\Bigr) + i\,\mathrm{Sin}_\tau\,B(t - \tau).$$
Theorem 1. Let the positive constant $B$ satisfy the inequality $B\tau e > 2$. Then the delayed sine and delayed cosine are divergent, i.e.

$$\limsup_{t\to\infty}\,|\mathrm{Cos}_\tau\,Bt| = \limsup_{t\to\infty}\,|\mathrm{Sin}_\tau\,Bt| = \infty.$$

Proof: The proofs of both theses are analogous, therefore only the one for $\mathrm{Cos}_\tau\,Bt$ is shown. The proof follows immediately from the Lemma and from the asymptotic properties of the delayed exponential, which were described as a special case in the section on systems of the first order. By Lemma 1 it is possible to express

$$\mathrm{Cos}_\tau\,B\Bigl(n\tau - \frac{\tau}{2}\Bigr) = \frac{e_{\tau/2}^{iBn\tau} + e_{\tau/2}^{-iBn\tau}}{2}.$$

As the values of the Lambert function at conjugate arguments are conjugate, this expression has asymptotically the form

$$e^{2n\,\mathrm{Re}\,W_0\left(\frac{iB\tau}{2}\right)}\,\cos\Bigl(2n\,\mathrm{Im}\,W_0\Bigl(\frac{iB\tau}{2}\Bigr)\Bigr).$$

This sequence is unbounded for $n \to \infty$, because the assumption $B\tau e > 2$, i.e. $e\,\bigl|\frac{iB\tau}{2}\bigr| > 1$, implies the inequality $\mathrm{Re}\,W_0\bigl(\frac{iB\tau}{2}\bigr) > 0$.
ACKNOWLEDGEMENT
The paper was supported by the grant of the Czech Grant Agency (Prague) no. P201-11-0768 and by the Czech Ministry of Education within the project MSM002160503.
REFERENCES
[1] BELLMAN, R.; COOKE, K. L. Differential-Difference Equations. Academic Press : New York, 1963.
[2] BOICHUK, A.; DIBLÍK, J.; KHUSAINOV, D.; RŮŽIČKOVÁ, M. Fredholm's boundary-value problems for differential systems with a single delay. Nonlinear Analysis 72, p. 2251-2258, 2010.
[3] BOICHUK, A.; DIBLÍK, J.; KHUSAINOV, D.; RŮŽIČKOVÁ, M. Boundary Value Problems for Delay Differential Systems. Advances in Difference Equations, Volume 2010, Article ID 593834, 20 p.
[4] CORLESS, R. M.; GONNET, G. H.; HARE, D. E. G.; KNUTH, D. E. On the Lambert W Function. Advances in Computational Mathematics, Vol 5, p. 329-359, 1996.
[5] DIBLÍK, J.; KHUSAINOV, D. Representation of solutions of discrete delayed system x(k+1) = Ax(k) + Bx(k-m) + f(k) with commutative matrices. Journal of Mathematical Analysis and Applications, No 1, ISSN 0022-247X, p. 63-76, 2006.
[6] DIBLÍK, J.; KHUSAINOV, D. Representation of solutions of linear discrete systems with constant coefficients and pure delay. Advances in Difference Equations, Art. ID 80825, doi:10.1155/ADE/2006/80825, p. 1-13, ISSN 1687-1839, e-ISSN 1687-1847, 2006.
[7] DIBLÍK, J.; KHUSAINOV, D. Y.; RŮŽIČKOVÁ, M. Controllability of linear discrete systems with constant coefficients and pure delay. SIAM J. Control Optim., Vol. 47, No. 3, p. 1140-1149, 2008.
[8] DIBLÍK, J.; KHUSAINOV, D. Y.; RŮŽIČKOVÁ, M.; LUKÁČOVÁ, J. Control of Oscillating Systems with a Single Delay. Advances in Difference Equations, Volume 2010, Article ID 108218, 15 p., 2010.
[9] KHUSAINOV, D. YA.; DIBLÍK, J.; LUKÁČOVÁ, J.; RŮŽIČKOVÁ, M. Representation of a solution of the Cauchy problem for an oscillating system with pure delay. Nonlinear Oscillations, Vol 11, No 2, p. 276-285, 2008.
[10] KHUSAINOV, D. YA.; SHUKLIN, G. V. Linear autonomous time-delay system with permutation matrices solving. Studies of the University of Žilina, Mathematical Series, Vol 16, p. 1-8, 2003.
[11] LAMBERT, J. H. Observationes variae in mathesin puram. Acta Helveticae physico-mathematico-anatomico-botanico-medica, Band III, p. 128-168, 1758.
[12] SVOBODA, Z. Asymptotic properties of delayed exponential of matrix. Journal of Applied Mathematics, Slovak University of Technology in Bratislava, p. 167-172, ISSN 1337-6365, 2010.
[13] SVOBODA, Z. The system of linear differential equations with constant coefficients and constant delay. XXVIII International Colloquium on the Management of Educational Process Proceedings, Brno, p. 247-251, 2010. ISBN 978-80-7231-733-2.
ADDRESS:
RNDr. Zdeněk Svoboda, CSc.
Department of Mathematics
Faculty of Electrical Engineering and Communication
Brno University of Technology
Technická 8
61600 Brno
Czech Republic
[email protected]
GAME THEORY
Marie Tomšová
Brno University of Technology
INTRODUCTION
Many problems in economics lead to the search for a maximum or a minimum of a function under certain conditions. Such optimization always concerned only one subject; competition was not directly considered (although one of the reasons for optimization is precisely success in competition). However, there are economic situations that lead to a conflict of interests (e.g. seller versus buyer, etc.), in which the implications of the decision of one participant depend directly on the decisions of the others. Game theory models such conflict decision-making situations.
BASIC NOTIONS
By a strategic (alternatively, conflict) game we understand a conflicting decision-making process with at least two participants (players) with conflicting interests, in which the implications of the decision of one participant depend on the decisions taken by the others.
The goal of game theory is thus the support of decision-making processes in similar situations.
The founders of the theory are John von Neumann (known also for his work on constructing computers, who was born in the Austro-Hungarian Empire) and O. Morgenstern. Their seminal work was published in 1944.
(Examples of strategic games: chess, bridge, poker. On the contrary, the following are not games in this sense: Ludo (there are no strategies), patience (only one player), and hazardous games, e.g. roulette.)
The result of the game must be in some way quantified. It will be evaluated by some real number (e.g. in chess 1 – 0.5 – 0, in a football match 3 – 1 – 0, etc.). For each player, the goal is to achieve the maximum number: the highest victory.
In standard terminology, we consider a game to be primarily the set of its rules; a concrete instance of play is called a match. Some games (board and card games) consist of moves. (Contemporary game theory cannot model intelligent games with many moves, such as chess. It is more or less limited to simple games of two players, who can choose from a small number of possibilities.)
Definition 1: By a game G in normal form we mean the ordered set (I,X,F), where I={1,2,...,N} is the set of players, X={X1,X2,...,XN} is the set of strategies, its subset Xi being the set of strategies of the i-th player, and F={f1,f2,...,fN} is the set of payoff functions, fi : X -> R being the payoff function of the i-th player. (In other words, fi(x1,x2,...,xN) is the amount (winnings) gained by player i when player 1 chooses strategy x1, player 2 strategy x2, etc.)
Remark: Games are classified according to a number of criteria. By a finite game, we mean a game with a finite set of strategies. The notions of single-move and multiple-move games are evident. According to the behaviour of the players, we distinguish games with intelligent, p-intelligent and unintelligent players (a p-intelligent player behaves rationally with probability p and irrationally with probability 1-p). Games against an unintelligent player are also called games against nature.
Definition 2: A constant-sum game is a game in which the sum of all winnings is the same for any choice of strategies. If this constant is 0, then we speak about a zero-sum game.
Example 1: Chess is a game with constant sum 1. A football match with winnings 3 – 1 – 0 is not a game with constant sum (the sum is 3 in the case of a victory, 2 in the case of a draw). The game analysed in this example is a zero-sum game.
Strategically equivalent games are games for which the sets I and X are identical, while the winnings of one are an equivalent multiple of the winnings of the other.
„ICSC– NINTH INTERNATIONAL CONFERENCE ON SOFT COMPUTING APPLIED IN COMPUTER AND ECONOMIC
ENVIRONMENTS” EPI Kunovice, Czech Republic. January 21, 2011
Coalition games are games in which players with different interests can form coalitions. We will not deal with those games; in the following, we deal with antagonistic (non-coalition) zero-sum games of two players. In the first part of the text, we deal with intelligent players who are interested in the result and can find the best strategy against their rival. In the second part, we deal with games where only one player behaves rationally.
ANTAGONISTIC GAME
By an antagonistic game we mean a game of two players with constant sum. In the following, we will only look at zero-sum games. Denote the set of strategies of the first player by X = {x} and the set of strategies of the second player by Y = {y}; if f(x, y) is the payment function of the first player (player A), then -f(x, y) is the payment function of the second player (player B).
If a player chooses the same strategy in every repetition of the game, we call this strategy a pure strategy. If the player alternates at least two strategies, we speak of a mixed strategy. With mixed strategies, we assume that the choice is governed by a probability distribution. If, for example, X = {x1, x2, x3} is the set of strategies of a certain player, then the mixed strategy (1/3, 1/6, 1/2) means that the player chooses the first strategy x1 with probability 1/3, etc. For a finite set of strategies, a mixed strategy is thus described by a vector of probabilities.
When searching for optimal strategies, we employ elements of maximization and minimization. Player A strives to maximize f(x, y) (his winning) under the condition that his rival B cannot diminish it. On the other hand, player B tries to minimize f(x, y) (his loss) under the condition that player A cannot increase it. We formulate the effort of the first player in the following definition:
Definition 3: An optimal strategy of the first player in an antagonistic game is a strategy x0 for which there exists a strategy y0 of the second player such that f(x, y0) ≤ f(x0, y0) ≤ f(x0, y) holds for all x∈X, y∈Y. (The pair (x0, y0) is a saddle point.) The value of the payment function for this strategy, w = f(x0, y0), is then called the price (also the value) of the game. If such a strategy (an ordered pair (x0, y0)) exists, it is called a solution of the game in pure strategies. Later we will see that such a strategy need not exist, and we will show how to look for the optimum probability (mixed) strategy in some special cases.
MATRIX GAMES
Example 2: Two players A and B have two cards each:
A: black (b) 5, red (r) 5
B: black (b) 5, red (r) 3
Rules: When told, both players simultaneously show one consciously chosen card.
- If the cards have the same colour, player A wins the difference of the numbers.
- If the cards have different colours, player B wins the difference of the numbers.
The table of winnings of player A is

player A \ player B    b5    r3
b5                      0    -2
r5                      0     2

in matrix form M = (0 -2; 0 2) (here and below, the rows of a matrix are separated by semicolons).
Player A has two possibilities (strategies): b5, r5.
Player B has two possibilities (strategies): b5, r3.
The winnings for the four possible combinations of strategies are described in the matrix (the so-called payment matrix). This is an example of a matrix game.
Definition 4: A matrix game is a game of two players with finitely many strategies. The element mij of the payment matrix M is the winning of the first player (A) when he chooses his i-th strategy and his rival chooses his j-th strategy. In the case of a zero-sum game, mij is also the loss of player B for this choice of strategies.
We will try to find out whether there is a solution in pure strategies. A saddle point is a maximum with regard to the strategy of the first player and a minimum with regard to the strategy of the second. This means that we are looking for an element of the matrix that is at the same time the minimum of its row (the elements of the row are the payments/winnings for a fixed strategy of the first player and all choices of the second) and the maximum of its column. Such an element really exists (in row 2, column 1): the solution of the game is (r5, b5) and the value of the game is 0.
Player A will certainly choose strategy r5, which ensures that he will not lose. Player B will count on it and will choose strategy b5.
0  2
1  2
1 0
1 3 
, M 3  
, M 4  
, M 5  

2 
0 2 
0 2
1 2 
Games with matrices M 2  
1
For M2, the saddle point is 1, for M3 and M4, there are no saddle points, M5 has two saddle points (in the first
column).
Definition 5: The low cost of the game is the value vd = max_{x∈X} min_{y∈Y} f(x, y); the high cost of the game is vh = min_{y∈Y} max_{x∈X} f(x, y).
Remark: For the low cost, we look for the minimum in each row and then take the maximum of these (for M2 max{-2, 1} = 1, for M3 max{-2, 0} = 0, for M4 max{0, 0} = 0).
For the high cost, we determine the maximum in every column and then take the minimum of these (for M2 min{1, 2} = 1, for M3 min{1, 2} = 1, for M4 min{1, 2} = 1). For the original game of Example 2, both the low and the high cost are 0; for M5 both are 1.
If the low cost and the high cost are equal, there is a solution in pure strategies, and the common value is the price of the game.
MIXED STRATEGIES
We will first look at an example of a matrix game with a square matrix. The game with matrix M = (1 -2; -1 1.5) apparently has no saddle point (the low cost is -1, the high cost +1). If player A chooses his first strategy, he risks that player B chooses the second strategy and A loses 2. On the contrary, if B chooses the second strategy, A may also choose his second strategy and win 1.5. Apparently there is a scheme leading to the minimum loss (an optimum strategy is nothing else than a strategy leading to the smallest guaranteed loss).
The sought scheme will probably consist in switching strategies according to some probability. The reasoning of
player A can be the following: I will try to choose such strategy that my winning does not depend on what my
rival does. Player B can of course reason in the same way.
Let the probability strategies be (p, 1-p) for player A (he chooses the first strategy with probability p) and (q, 1-q) for player B. With this choice, the expected value of the winning is
E(p, q) = (p, 1-p) * M * (q, 1-q)^T.
For example, for p = 1/4 and q = 1/3 we get
E(1/4, 1/3) = (1/4, 3/4) * (1 -2; -1 1.5) * (1/3, 2/3)^T = (1*1/4 - 1*3/4, -2*1/4 + 1.5*3/4) * (1/3, 2/3)^T
= 1*(1/4*1/3) - 1*(3/4*1/3) - 2*(1/4*2/3) + 1.5*(3/4*2/3) = 1/12 - 1/4 - 1/3 + 3/4 = 1/4.
If B chose the strategy (1/2, 1/2) instead, the expected value of the winning of player A would be +1/16.
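The expected-value computation above can be checked with a few lines of Python, using exact fractions; the helper name `expected_value` is ours, not from the text:

```python
from fractions import Fraction

# Payment matrix of the example: M = (1 -2; -1 1.5), kept exact via Fraction.
M = [[Fraction(1), Fraction(-2)],
     [Fraction(-1), Fraction(3, 2)]]

def expected_value(M, p, q):
    """E(p, q) = (p, 1-p) * M * (q, 1-q)^T for a 2x2 payment matrix."""
    row = (p, 1 - p)
    col = (q, 1 - q)
    return sum(row[i] * M[i][j] * col[j] for i in range(2) for j in range(2))

e1 = expected_value(M, Fraction(1, 4), Fraction(1, 3))  # the 1/4 from the text
e2 = expected_value(M, Fraction(1, 4), Fraction(1, 2))  # the +1/16 from the text
```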
GAMES WITH MATRICES (2,2)
Let us now deal with a matrix game with a payment matrix of the type (2,2). It can be proved that it always has a solution (p*, q*) satisfying E(p, q*) ≤ E(p*, q*) ≤ E(p*, q), i.e. (p*, q*) is a saddle point. Let us now show a way to find the solution.
We start from the following consideration: if the inequalities become equalities, i.e. if we succeed in finding (p*, q*) such that E(p, q*) = E(p*, q*) = E(p*, q) and at the same time 0 ≤ p*, q* ≤ 1, this will certainly be a solution. The meaning is the following: if there is a strategy (p*, 1-p*) for player A such that when he chooses it, his rival has no influence on the result (and the same holds for player B and the strategy (q*, 1-q*)), we have a solution. The cost of the game is then the same for both parties involved.
Let the payment matrix be M = (a b; c d). We can easily see that if we find a strategy (p*, 1-p*) for player A such that (p*, 1-p*) * M = (v, v) (a vector with equal entries), then the resulting value of the game will be (v, v) * (q, 1-q)^T = v for any q∈<0,1>; and similarly for player B: if M * (q*, 1-q*)^T = (w, w)^T, then (p, 1-p) * (w, w)^T = w for any p. If v = w, it is a solution. It follows easily that
p* = (d - c)/(a - b - c + d), q* = (d - b)/(a - b - c + d), and really v = w = (ad - bc)/(a - b - c + d).
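These closed-form expressions can be wrapped in a small function; this is a sketch with our own function name, checked against the numbers of Example 3 below:

```python
from fractions import Fraction

def solve_2x2(a, b, c, d):
    """Mixed-strategy solution of the zero-sum game with matrix (a b; c d),
    via the closed-form expressions above (assumes no saddle point)."""
    den = a - b - c + d
    p_star = Fraction(d - c, den)
    q_star = Fraction(d - b, den)
    value = Fraction(a * d - b * c, den)
    return p_star, q_star, value

p, q, v = solve_2x2(2, 0, -1, 3)  # the game of Example 3: p* = 2/3, q* = 1/2, v = 1
```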
Let us show different situations on three examples:
Example 3: M = (2 0; -1 3) has no saddle point, so a solution in pure strategies does not exist.
Optimum mixed strategy for A: (p, 1-p) * (2 0; -1 3) = (2p - (1-p), 3 - 3p) = (3p - 1, 3 - 3p). Both entries must be equal, so 3p - 1 = 3 - 3p, i.e. 6p = 4 and p* = 2/3, with v = 3p* - 1 = 2 - 1 = 1.
Optimum strategy for B: (2 0; -1 3) * (q, 1-q)^T = (2q, -q + 3 - 3q)^T = (2q, 3 - 4q)^T. Again 2q = 3 - 4q, so q* = 1/2 and w = 2q* = 1 = v.
Example 4: M = (-1 -2; 0 3). The pure strategy (0, 1) is a solution for A and (1, 0) for B. If we follow the same method as in the previous example, we get for A:
(p, 1-p) * (-1 -2; 0 3) = (-p, -2p + 3 - 3p) = (-p, 3 - 5p), therefore -p = 3 - 5p, i.e. 4p = 3 and p* = 3/4.
For B: (-1 -2; 0 3) * (q, 1-q)^T = (-q - 2 + 2q, 3 - 3q)^T = (q - 2, 3 - 3q)^T, therefore q - 2 = 3 - 3q, i.e. 4q = 5 and q = 5/4; this value is not acceptable, which signals that there is only the one solution in pure strategies.
Example 5: If we change the element m11 of the previous example from -1 to 0, i.e. M = (0 -2; 0 3), we get an acceptable mixed solution p* = 3/5, q* = 1 with the zero value of the game. Apart from that, there is a solution p** = 0, q* = 1. In both cases, the value of the game is zero.
If a game has two different solutions, then any of their convex combinations is a solution. Thus α(0, 1) + (1 - α)(3/5, 1) is a solution for any α∈<0,1>; e.g. for α = 0.5 we get p = 3/10, q = 1. The game has infinitely many solutions.
GAMES WITH MATRICES OF THE TYPES (2,N) AND (M,2)
For these games, we can use a graphical method to advantage. Let us show the procedure on an example:
Example 6: Let M = (5 0 2; 1 6 3) be the payment matrix; let us look for the optimum strategies of both players.
The game does not have a solution in pure strategies (the matrix has no saddle point). Let us start with the player who has two (pure) strategies, that is with player A. His mixed strategy is denoted by (x, 1-x) instead of the (p, 1-p) used up to now; the reason is that it is usual to write the equation of a line with x as the variable. Let us denote the strategies of player B by the numbers 1, 2, 3.
Let us now study the expected values of the winnings of player A (who chooses a mixed strategy (x, 1-x)) when B chooses a pure strategy:
(x, 1-x) * (5 0 2; 1 6 3) = (5x + 1(1-x), 0x + 6(1-x), 2x + 3(1-x))
The expected values are the entries of the resulting vector, i.e.
f(x, 1) = 5x + 1(1-x) = 1 + 4x
f(x, 2) = 6 - 6x
f(x, 3) = 3 - x
We have three equations for x∈<0,1>, and we draw these lines into a graph. Player A wants to secure the winning min_j f(x, j) = min{1 + 4x, 6 - 6x, 3 - x}, x∈<0,1>, for the case when player B chooses his j-th strategy. The line of the minimum guaranteed winning is the lower broken line (bold in the graph). Starting from these guaranteed winnings, the player searches for their maximum (over x). In our case, this is the point with coordinates x = 0.4 and the corresponding value (winning) v = 2.6. The optimal strategy for player A is therefore x0 = (0.4, 0.6). From the picture it is also clear that if player B (correctly) assumes that A will choose the optimal strategy and at the same time wants to keep the winning at the value v (i.e. not let it increase), he cannot use his second strategy in his mixed strategy. The following theorem provides a description of the general procedure:
Theorem 1: For a solution (x0, y0) (i.e. for the optimum mixed strategies) with the corresponding value v of the game, the following holds:
y0j * [f(x0, j) - v] = 0 for all j,
x0i * [f(i, y0) - v] = 0 for all i.
(Practically: if y0j > 0, then f(x0, j) = v; if f(x0, j) > v, then y0j = 0, and similarly for x0i.)
In our example:
x01 = 0.4 > 0, hence f(1, y0) = v, i.e. 5y01 + 2y03 = 2.6;
x02 = 0.6 > 0, hence f(2, y0) = v, i.e. y01 + 6y02 + 3y03 = 2.6.
Because f(x0, 2) = 3.6 > v, we have y02 = 0. We obtain the system
5y01 + 2y03 = 2.6
y01 + 3y03 = 2.6
whose solution is y01 = 0.2, y03 = 0.8, and the optimal strategy of B is y0 = (0.2, 0, 0.8).
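The graphical search for player A's strategy can be mimicked numerically: the optimum of the lower envelope of the three lines lies at an interval endpoint or at an intersection of two lines. A sketch with our own helper names:

```python
from fractions import Fraction
from itertools import combinations

# Lines f(x, j) = intercept + slope * x for the game of Example 6.
lines = [(Fraction(1), Fraction(4)),    # f(x,1) = 1 + 4x
         (Fraction(6), Fraction(-6)),   # f(x,2) = 6 - 6x
         (Fraction(3), Fraction(-1))]   # f(x,3) = 3 - x

def maximize_lower_envelope(lines):
    """Maximize min_j (a_j + b_j * x) over x in [0, 1] exactly."""
    candidates = [Fraction(0), Fraction(1)]
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        if b1 != b2:
            x = (a2 - a1) / (b1 - b2)       # intersection of the two lines
            if 0 <= x <= 1:
                candidates.append(x)
    guarantee = lambda x: min(a + b * x for a, b in lines)
    best = max(candidates, key=guarantee)
    return best, guarantee(best)

x_star, value = maximize_lower_envelope(lines)  # x* = 2/5 = 0.4, v = 13/5 = 2.6
```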
In the case of a game with a payment matrix of the type (n,2), we find the solution most easily if we transpose the matrix and change the signs; thus we swap the roles of the players. After the calculation, we swap the roles back: we change the sign of the value, and this is the value of A's winning.
Example 7: We will solve Example 5 by the graphical method.
The matrix is of the type (2,2), M = (0 -2; 0 3), and (x, 1-x) * (0 -2; 0 3) = (0, -2x + 3 - 3x) = (0, 3 - 5x).
For player B (after the swap of roles), M' = -M^T = (0 0; 2 -3) and (y, 1-y) * (0 0; 2 -3) = (2 - 2y, 3y - 3).
Graphs for player A and player B: the maximum of the bold line for A is the value 0 for any x∈<0, 0.6>; for player B it is the point y = 1 (his pure first strategy), which confirms the result of Example 5.
Games with a general matrix of the type (m,n) always have a solution, as was proven in the book by von Neumann and Morgenstern, but this solution is difficult to find with the original algebraic tools of game theory. However, we can use to advantage the relationship between a matrix game and a corresponding linear programming problem. Before that, let us show on a practical example how the payment matrix can sometimes be reduced by leaving out superfluous strategies:
5 0 2 3


Example 8: Solve matrix game with payment matrix M   1 6 3 4 
0 5 2 1


Comparing the second and third strategy of player A, we see that the second one is better than the third one
(elements of the second row are greater than the corresponding elements of the third row). The third strategy is
5 0 2 3

1
6
3
4


therefore uninteresting for player A and can be left out. We consider the reduced matrix M   
from the point of view of player B. He will certainly not use his fourth strategy, because the third strategy will
ensure lesser loss for him (the elements of the third column are smaller than the corresponding elements of the
5 0 2

1
6
3


fourth column). We will therefore leave out fourth column and solve the game with matrix M   
(see Example 5)
(The above-stated property is called dominating: the third row of M is dominated by the second, and it is
strongly dominated, because the inequalities are sharp. The fourth column of M’is strongly dominated by the
third. The property of dominating can be extended to convex combinations of rows or columns. In other words,
if a row (column) is a domino-convex combination of other rows (columns), it can be left out.
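The reduction by strict domination can be automated; this sketch (function name ours) handles only strict row/column domination, not the convex-combination extension:

```python
def reduce_by_domination(M):
    """Iteratively delete strictly dominated rows (bad for the maximizing
    row player) and strictly dominated columns (bad for the minimizing
    column player) of a payment matrix given as a list of lists."""
    M = [row[:] for row in M]
    changed = True
    while changed:
        changed = False
        m, n = len(M), len(M[0])
        # A row is strictly dominated if some other row is larger everywhere.
        for i in range(m):
            if any(k != i and all(M[k][j] > M[i][j] for j in range(n))
                   for k in range(m)):
                del M[i]
                changed = True
                break
        if changed:
            continue
        # A column is strictly dominated if some other column is smaller everywhere.
        for j in range(n):
            if any(k != j and all(row[k] < row[j] for row in M)
                   for k in range(n)):
                for row in M:
                    del row[j]
                changed = True
                break
    return M

reduced = reduce_by_domination([[5, 0, 2, 3],
                                [1, 6, 3, 4],
                                [0, 5, 2, 1]])  # -> [[5, 0, 2], [1, 6, 3]]
```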
MATRIX GAME FROM THE POINT OF VIEW OF LINEAR PROGRAMMING
Let us consider a matrix game whose payment matrix has only positive elements. This can always be ensured by adding a sufficiently large positive constant k (we increase the winning of A by k); after having solved the game, we subtract the constant again. During the calculation, the value of the game is then positive.
We show the consideration leading to the linear programming problem on a simple example: we take the game with matrix M = (2 0; -1 3) (see Example 3) and add k = 2, obtaining M' = (4 2; 1 5).
According to the minimax principle, the optimal strategy x0 = (x1, x2) of player A satisfies x0 * M' * y^T ≥ v for any strategy y of player B. For the pure strategies of player B we get
(x1, x2) * (4 2; 1 5) * (1, 0)^T = 4x1 + x2 and (x1, x2) * (4 2; 1 5) * (0, 1)^T = 2x1 + 5x2.
Thus we obtain the inequalities
4x1 + x2 ≥ v
2x1 + 5x2 ≥ v
First way of formulating the equivalent linear programming problem:
Player A tries to maximize his winning v, i.e. to minimize -v. We get the problem
-v → min
4x1 + x2 - v ≥ 0
2x1 + 5x2 - v ≥ 0
and its dual
-v → max
4y1 + 2y2 - v ≤ 0
y1 + 5y2 - v ≤ 0
(Instead of v, we may use the symbol x3 in the primary problem and y3 in its dual.) The solution of the primary problem is (2/3, 1/3) with v = 3; subtracting k = 2 gives the value 1 of the original game. The dual problem gives (1/2, 1/2) with the same value.
Second way:
It holds that x1 + x2 = 1.
We introduce new variables through the transformation x'i = xi/v (this is possible, since v > 0). Therefore
Σ x'i = Σ xi/v = 1/v.
Player A tries to maximize his winning v, i.e. to minimize x'1 + x'2 = 1/v. It is easy to formulate the primary and dual problems:
x'1 + x'2 [= 1/v] → min
4x'1 + x'2 ≥ 1
2x'1 + 5x'2 ≥ 1
x'i = xi/v ≥ 0
and
y'1 + y'2 [= 1/v] → max
4y'1 + 2y'2 ≤ 1
y'1 + 5y'2 ≤ 1
y'j = yj/v ≥ 0
Applied in the same way to the game of Example 6 with matrix (5 0 2; 1 6 3), the solution of the primary problem is x' = (0.154, 0.231), v = 1/(0.154 + 0.231) = 2.6, and the optimal strategy of the first player is x0 = v * x' = (0.4, 0.6). The solution of the dual problem gives the optimum strategy (0.2, 0, 0.8) for the second player.
For a general matrix game with the matrix
M = (a11 a12 ... a1n; a21 a22 ... a2n; ...; am1 am2 ... amn)
of the type (m,n), the first way yields
-v → min
a11 x1 + ... + am1 xm - v ≥ 0
a12 x1 + ... + am2 xm - v ≥ 0
......
a1n x1 + ... + amn xm - v ≥ 0
xi ≥ 0
and its dual
-v → max
a11 y1 + ... + a1n yn - v ≤ 0
a21 y1 + ... + a2n yn - v ≤ 0
......
am1 y1 + ... + amn yn - v ≤ 0
yj ≥ 0
General formulation for the second way is apparent.
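For small games, the "second way" can be sketched in code: instead of calling a real LP solver, we enumerate the vertices of the feasible region exactly with fractions. All names here are our own, and the brute-force search is only practical for tiny matrices:

```python
from fractions import Fraction
from itertools import combinations

def _solve_linear(A, b):
    """Gaussian elimination with exact fractions; None if the system is singular."""
    n = len(A)
    T = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next((r for r in range(col, n) if T[r][col] != 0), None)
        if piv is None:
            return None
        T[col], T[piv] = T[piv], T[col]
        for r in range(n):
            if r != col:
                f = T[r][col] / T[col][col]
                T[r] = [x - f * y for x, y in zip(T[r], T[col])]
    return [T[i][n] / T[i][i] for i in range(n)]

def solve_game(M):
    """Value and optimal row strategy of a matrix game with positive value,
    via: minimize sum(x') subject to M^T x' >= 1, x' >= 0 (vertex search)."""
    m, n = len(M), len(M[0])
    # One constraint per column of M, plus nonnegativity of each variable.
    cons = [([Fraction(M[i][j]) for i in range(m)], Fraction(1)) for j in range(n)]
    cons += [([Fraction(int(i == k)) for i in range(m)], Fraction(0)) for k in range(m)]
    best = None
    for subset in combinations(cons, m):
        x = _solve_linear([c[0] for c in subset], [c[1] for c in subset])
        if x is None:
            continue
        if all(sum(a * xi for a, xi in zip(row, x)) >= rhs for row, rhs in cons):
            s = sum(x)
            if s > 0 and (best is None or s < best[0]):
                best = (s, x)
    s, x = best
    return 1 / s, [xi / s for xi in x]

v, strategy = solve_game([[4, 2], [1, 5]])        # shifted game: v = 3, x = (2/3, 1/3)
v6, strat6 = solve_game([[5, 0, 2], [1, 6, 3]])   # Example 6: v = 2.6, x = (0.4, 0.6)
```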
CORRECT GAME
A game is called correct (fair) if the expected value of the winning of every player is zero when the optimum strategies are chosen (i.e. everybody has the same chance). Let us show this on games against nature:
Example 9: Player A tosses a coin. If heads first falls in the n-th toss, player B pays him n crowns (and the game ends). How many crowns should player A pay to player B before the game so that the game is correct (fair)?
He should pay as much as the expected value of his winning, i.e.
1*(1/2) + 2*(1/4) + 3*(1/8) + ... = 2.
This is an acceptable amount, against which we do not object.
Modification of the conditions: if heads first falls in the n-th toss, A gets 2^n crowns from B. What will the correct compensation be?
This apparently banal example carries in itself one of the deep mathematical paradoxes (the so-called St. Petersburg paradox). The expected value of the winning is namely
2*(1/2) + 4*(1/4) + 8*(1/8) + ... = 1 + 1 + 1 + ... = ∞.
Thus player A should pay infinitely many crowns to B. However, A wins a very large amount only when heads does not fall for a long run of tosses, and the probability of this is vanishingly small. The game used to be played in practice, and B usually asked A for 40 crowns, which was the empirically established (and of course approximate) average winning of player A.
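Both series can be checked numerically through partial sums (the variable names are ours):

```python
from fractions import Fraction

# Example 9: heads first on toss n pays n crowns; the fair price is sum n/2^n = 2.
fair_price = sum(Fraction(n, 2 ** n) for n in range(1, 60))  # partial sum, very close to 2

# St. Petersburg variant: heads on toss n pays 2^n crowns; every term of the
# series 2^n * (1/2^n) equals 1, so the partial sums grow without bound.
terms = [Fraction(2 ** n, 2 ** n) for n in range(1, 60)]
petersburg_partial = sum(terms)  # equals the number of terms taken
```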
Example 10: In the Sportka lottery, a ticket cost 3 crowns. Half of the money was used for the winnings of those who were betting, and the other half went to the state (for overheads and the development of sport). The game is not correct: the expected value of the winning is -1.50 crowns. The game is always disadvantageous for the one who bets, but the chance of a high winning, of the first prize, albeit very small (ca. 1 : 15 million), still attracts hordes of betters in the same way as the flame of a candle attracts moths.
GAMES AGAINST THE NATURE
Nature is a player who does not care about the result (although we may doubt it nowadays, when nature appears more and more as an enemy of its worst pest, man). Nature is therefore a player with an irrational, inscrutable strategy, and in games against it, e.g. in agriculture, in construction work in tectonically active areas, in securing fuel for winter etc., we have to take this into account. When choosing a strategy, a whole range of attitudes applies, from the careful to the hazardous ones.
Example 11: The summer price of coal is 100 CZK/q, the winter price is 130 CZK/q. In a mild winter we burn 40 q, in a normal one 50 q, in a severe one 60 q. In the spring we will move out and will have to write any remaining coal off. The payment matrix (our costs in CZK; rows: the amount bought in summer, columns: the type of winter) is

            mild   normal  severe
buy 40 q    4000    5300    6600
buy 50 q    5000    5000    6300
buy 60 q    6000    6000    6000*

The matrix has a saddle point, denoted by the asterisk. If we choose it, we have insured ourselves against a severe winter (the careful strategy). We can also choose the other extreme and minimize the costs (4000); but we thus risk that the costs will be the highest of all if the winter turns out severe.
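The cost matrix follows directly from the stated prices; a quick sketch (names ours):

```python
SUMMER_PRICE, WINTER_PRICE = 100, 130            # CZK per q
NEED = {"mild": 40, "normal": 50, "severe": 60}  # q of coal burnt per winter type

def cost(bought_in_summer, winter):
    """Total cost: the summer purchase plus any extra bought at the winter price."""
    extra = max(0, NEED[winter] - bought_in_summer)
    return bought_in_summer * SUMMER_PRICE + extra * WINTER_PRICE

# Worst-case cost of each summer purchase; the careful (minimax) strategy
# is to buy 60 q, with the guaranteed cost 6000 CZK.
worst_case = {b: max(cost(b, w) for w in NEED) for b in (40, 50, 60)}
```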
If the rational player knows the probability distribution of the strategies of the irrational player, we speak of decision-making with risk. If he does not know this distribution, we speak of decision-making under uncertainty. Thus if, in choosing a strategy, we are guided by the empirically grounded probabilities of a mild, normal, and severe winter, it is decision-making with risk.
For decision-making with risk, we often use the Bayesian principle of decision-making: we choose the strategy that leads to the highest expected value of the winning. In practice, nature chooses a mixed strategy, which we know; we choose the pure strategy that leads to our highest expected winning (the amount of the winning is of course not guaranteed, since it is stochastic).
An example of a game against nature with risk is the game on which we have shown the St. Petersburg paradox. Nature is represented by player A, his n-th strategy being "heads first falls in the n-th toss", with the known distribution of probabilities (1/2, 1/4, 1/8, 1/16, ...). Our strategy, i.e. the strategy of player B, is to achieve, over many repetitions of the game, an average zero winning of player A by letting him always pay us in advance the expected value of the winning.
For decision-making under uncertainty, there are different approaches, which can be found in the references.
ACKNOWLEDGEMENT
This investigation was supported by grant FEKT-S-11-2-921 of the Faculty of Electrical Engineering and Communication, BUT.
REFERENCES
[1] ACKOFF, R.; SASIENI, M. Fundamentals of Operations Research. Wiley : New York, 1968.
[2] BOSE, S. An Introduction to Queueing Systems. Kluwer Academic Publishers : Boston, 2001.
[3] DUDORKIN, N. Operační analýza. FEL ČVUT : Praha, 1997.
[4] LEE, A. Applied Queuing Theory. St. Martin's Press : New York, 1966.
[5] LIPSKY, L. Queueing Theory: A Linear Algebraic Approach. Macmillan : New York, 1992.
[6] PARZEN, E. Stochastic Processes. Holden-Day : San Francisco, 1962.
[7] SAATY, T. Mathematical Methods of Operations Research. McGraw-Hill : New York, 1959.
[8] SAATY, T. Elements of Queueing Theory with Applications. Dover : New York, 1983.
[9] TANNER, M. Practical Queuing Analysis. McGraw-Hill : New York, 1959.
[10] TYC, O. Operační analýza. MZLU : Brno, 2002.
[11] ZAPLETAL, J. Operační analýza. EPI, s.r.o. : Kunovice, 1995.
ADDRESS
Mgr. Marie Tomšová
Department of Mathematics
FEEC BUT
Technická 8
616 00 Brno
Czech Republic
E-mail: [email protected]
Telephone: + 420 541 143 222
AUTHOR INDEX
B
BARTONĚK, D. ......................................11, 17, 67
BAŠTINEC, J. ..................................... 95, 103, 115
BAŠTINCOVÁ, A. ..............................................95
D
DERMEKOVÁ, S................................................17
DENDAMRONGVIT, S. .....................................49
DIBLÍK, J. ...........................................................95
DOSTÁL, P........................................................121
Ď
ĎUĎÁK, J......................................................25, 31
G
GAŠPAR, G.........................................................31
M
MARČEK, D......................................................143
MARČEK, M.....................................................143
MINNETT, P. ......................................................49
MORÁVEK, P. ..................................................151
MUŽÍKOVÁ, K...................................................61
N
NOVOTNÁ, V.....................................................55
O
OPATŘILOVÁ, I.................................................67
OŠMERA, P.......................................................159
P
PETRUCHA, J...................................................171
PIDDUBNA, G. .................................................115
H
HANULIAK, I. ..................................................125
HANULIAK, P. .................................................125
HOLÚBEK, A....................................................133
HORVÁTH, M. .................................................143
R
RICHTER, L. .................................................37, 43
RUKOVANSKÝ, I. ...........................................175
CH
CHVALINA, J. ..................................................103
S
SKORKOVSKÝ, P. .............................................81
SLOVÁČEK, D. ................................................137
SVOBODA, Z....................................................183
J
JANOVIČ, F. .....................................................137
K
KARAKHALIL, M............................................175
KRATOCHVÍL, O.............................................121
KRÁL, J. ........................................................37, 43
KUBAT, M. .........................................................49
L
LUHAN, J. ...........................................................55
Š
ŠEDA, M............................................................151
ŠIMČÁK, M. .......................................................75
T
TOMÁŠ, I. ...........................................................89
TOMŠOVÁ, M. .................................................189
V
VERTÉSY, G.......................................................89
Title:
ICSC 2011 – Ninth International Conference on Soft Computing Applied in
Computer and Economic Environments
Author:
Team of authors
Publisher, copyright holders, manufactured:
European Polytechnic Institute, Ltd.
Osvobození 699, 686 04 Kunovice, Czech Republic
Print run:
100 copies
Number of pages: 200
Edition:
first
Release Year:
2011
ISBN 978-80-7314-221-6