Coursework: modeling and analysis of the information system of the construction organization LLC "M.T. Vpik"



This type of analysis is based on calculating a number of quantitative indicators for the constructed model. It must be kept in mind that these estimates are largely subjective, since the assessment is carried out directly on graphical models, whose complexity and level of detail are determined by many factors.

Complexity. This indicator characterizes how hierarchically complex the process model is. Its numerical value is given by the complexity coefficient k_sl:

k_sl = Σur / Σekz

where Σur is the number of decomposition levels and Σekz is the number of process instances.

The complexity of the model under consideration is equal to:

At k_sl <= 0.25 the process is considered complex; at k_sl >= 0.66 it is not considered as such. The value for the process under consideration is 0.25, which does not exceed the complexity threshold.

Processivity. This indicator characterizes whether the constructed process model should be considered essential (describing the structure of the subject area as a set of its main objects, concepts and connections) or process-oriented (all instances of the model's processes connected by cause-and-effect relationships). In other words, it reflects how well the constructed model of a situation in the company corresponds to the definition of a process. Its numerical value is given by the processivity coefficient k_pr:

k_pr = Σraz / Σkep

where Σraz is the number of "gaps" (missing cause-and-effect relationships) between instances of business processes, and Σkep is the number of instances in one diagram.

The processivity is equal to

Controllability. This indicator characterizes how effectively process owners manage their processes. Its numerical value is given by the controllability coefficient k_kon:

k_kon = Σs / Σkep

where Σs is the number of owners and Σkep is the number of instances in one diagram.

The controllability is equal to

At k_kon = 1 the process is considered controlled.

Resource intensity. This indicator characterizes the efficiency of resource use in the process under consideration. Its numerical value is given by the resource intensity coefficient k_r:

k_r = Σr / Σout

where Σr is the number of resources involved in the process and Σout is the number of outputs.

The resource intensity is equal to

The lower the coefficient, the higher the efficiency of resource use in the business process. At k_r < 1, resource intensity is considered low.

Adjustability. This indicator characterizes how strongly the process is regulated. Its numerical value is given by the adjustability coefficient k_reg:

k_reg = D / Σkep

where D is the number of available regulatory documents and Σkep is the number of instances in one diagram.

The adjustability is equal to

At k_reg < 1, adjustability is considered low.

The parameters and values of the quantitative indicators are presented in Table 7.

Table 7. Quantitative indicators

For an overall assessment of the analyzed process, the sum of the calculated indicators is computed:

K = k_sl + k_pr + k_kon + k_r + k_reg

The sum of the indicators is equal to

K = 0.1875 + 0.25 + 0.9375 + 0.273 + 0.937 = 2.585

The calculated value satisfies the condition K > 1. At K > 2.86 the process is considered obviously inefficient; at 1 < K < 2.86 the process is partially efficient.
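As an illustration, the calculation above can be scripted. The raw counts below (levels, instances, gaps, owners, resources, outputs, documents) are assumptions chosen only to reproduce the coefficient values quoted in the text, not data from the model itself:

```python
# Quantitative assessment of a process model: a minimal sketch.
# The counts passed in below are hypothetical, chosen to reproduce
# the coefficient values quoted in the text.

def process_indicators(levels, instances, gaps, owners, resources, outputs, documents):
    """Return the five coefficients and their sum K."""
    k_sl  = levels / instances      # complexity
    k_pr  = gaps / instances        # processivity
    k_kon = owners / instances      # controllability
    k_r   = resources / outputs     # resource intensity
    k_reg = documents / instances   # adjustability
    K = k_sl + k_pr + k_kon + k_r + k_reg
    return k_sl, k_pr, k_kon, k_r, k_reg, K

def efficiency_class(K):
    if K > 2.86:
        return "obviously inefficient"
    if K > 1:
        return "partially efficient"
    return "efficient"

k_sl, k_pr, k_kon, k_r, k_reg, K = process_indicators(
    levels=3, instances=16, gaps=4, owners=15, resources=3, outputs=11, documents=15)

print(round(K, 3), efficiency_class(K))  # 2.585 partially efficient
```

With these illustrative counts the script reproduces K = 2.585 and classifies the process as partially efficient, matching the conclusion above.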

To conduct a quantitative analysis of the models, we will use the following indicators:

1. The number of blocks in the diagram, N;

2. The decomposition level of the diagram, L;

3. The balance of the diagram, B;

4. The number of arrows connected to a block, A.

This set of indicators is computed for each diagram in the model; coefficients derived from them (formulas 1 and 2) then characterize the model as a whole. To improve the understandability of the model, one should strive to keep the number of blocks (N) in lower-level diagrams smaller than in the parent diagrams, so that as the decomposition level (L) increases, the decomposition coefficient d decreases: d = N / L

Thus, a decrease in this coefficient indicates that as the model is decomposed, the functions are simplified and, consequently, the number of blocks decreases.
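This rule can be checked programmatically. In the sketch below, the (level, block-count) pairs are hypothetical values, not taken from the model in the text:

```python
# Decomposition coefficient d = N / L per diagram: a sketch with
# hypothetical (level, block-count) pairs. A well-structured model
# should show a decreasing trend as the decomposition level grows.

def decomposition_coefficient(n_blocks, level):
    return n_blocks / level

# (level L, number of blocks N) for a hypothetical model
diagrams = [(1, 4), (2, 6), (3, 6)]
coeffs = [decomposition_coefficient(n, l) for l, n in diagrams]
print(coeffs)  # [4.0, 3.0, 2.0]

decreasing = all(a >= b for a, b in zip(coeffs, coeffs[1:]))
print(decreasing)  # True: functions simplify with depth
```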

Diagrams must be balanced: the numbers of arrows entering and leaving a block should be distributed evenly, that is, the arrow counts should not vary greatly. Note that this recommendation may not hold for processes that assemble a finished product from a large number of components (production of a machine assembly, production of a food product, and so on). The diagram's balance coefficient is calculated by the following formula:

It is desirable that the balance coefficient be minimal for each diagram and constant across the model.

In addition to assessing the quality of the diagrams and of the model as a whole using the balance and decomposition coefficients, the described processes can be analyzed and optimized. The physical meaning of the balance coefficient is determined by the number of arrows connected to a block, so it can be interpreted as a measure of the amount of information processed and received. Thus, on graphs of the balance coefficient versus the decomposition level, peaks relative to the average value reveal overloaded and underloaded subsystems of the enterprise information system, since different decomposition levels describe the activities of different subsystems. Accordingly, if such peaks exist, recommendations can be made for optimizing the processes automated by the information system.

Analysis of the context diagram "A-0 Information System of a Construction Organization"

Number of blocks: 1

Chart decomposition level: 3

Balance factor: 3

Number of arrows connecting to the block: 11

Analysis of process detail "A2 Module "Suppliers""

Number of blocks: 4

Analysis of process detail “A3 Module “Objects”

Number of blocks: 3

Chart decomposition level: 2

Balance factor: 5.75

Analysis of process detail “A1 Module “Workers”

Number of blocks: 3

Chart decomposition level: 2

Balance factor: 5.75

Analysis of process detail "A4.1 Module "Reports""

Number of blocks: 3

Chart decomposition level: 2

Balance factor: 5.75

Analysis of process detail "A5 Module "Contractors""

Number of blocks: 3

Chart decomposition level: 2

Balance factor: 5.75

The balance coefficient at the child decomposition levels of the information system process indicates that the diagrams are balanced. Since the balance coefficient is not equal to zero, some levels can be decomposed further, after which the names of the activities in this model can be analyzed.

When conducting the quantitative analysis of the model, a graph of the decomposition coefficient was constructed, which shows that the decomposition coefficient decreases as the decomposition level increases. This decrease indicates that as the model is decomposed, the functions are simplified and, consequently, the number of blocks decreases. The decomposition coefficient graph is shown in Figure 10.

Figure 10 – Decomposition coefficient graph

On the graph of the balance coefficient versus the decomposition level, peaks relative to the average value indicate overloaded subsystems of the enterprise information system; at these points the balance coefficient for the diagram is at its maximum. The balance coefficient graph is shown in Figure 11.

Figure 11 - Balance coefficient graph

Fundamentals of Quantitative Analysis

Quantitative analysis of the financial market is the forecasting of prices and returns of financial assets and the assessment of the risks of investing in them, using mathematical and statistical methods of time series analysis.

At first glance, quantitative analysis resembles technical analysis, since both use historical data on an asset's price and on its other characteristics. But there is a significant difference between the two.

Technical analysis is based on empirically found patterns, and these patterns have no strict scientific basis.

Quantitative analysis methods, by contrast, have a strict mathematical basis. Many of them are successfully used in sciences such as physics, biology and astronomy.

Basic ideology of quantitative analysis

The basic ideology of quantitative analysis is very similar to the approach practiced in the natural sciences.

In quantitative analysis, a hypothesis about the functioning of the financial market is first put forward, and a mathematical model is built on its basis. This model should capture the most important idea of the hypothesis and discard unimportant random details.

The model is then studied using mathematical methods. The most important output of such a study is a forecast of financial asset prices, which can be made both for the current moment and for historical points in time. The forecast is then compared with the real price chart.

Basic Quantitative Analysis Model

The most important model of quantitative analysis is the Efficient Financial Market model, which is formed on the basis of the Efficient Market Hypothesis.

In quantitative analysis, an efficient market is a situation in which all financial market participants have access to all market-related information at any given time. This means that all participants not only always have all the information, but have the same information. It never happens that one participant holds additional insider information inaccessible to the others.

Under such conditions, all prices of all financial assets are always at their equilibrium values. That is, the price of any financial asset in an efficient market is always equal to the price at which supply and demand are equal to each other. In an efficient market, there is no such thing as any financial asset being overvalued or undervalued.

In an efficient market, as soon as traders receive new information, prices change instantly in response. Thus, however prices change, they are always in an equilibrium state.

Therefore, from the quantitative point of view, it is impossible to make money in an efficient market the way investors do in a real market, by buying undervalued assets and selling overvalued ones. Nor do market bubbles, in which the price moves away from its equilibrium value, ever arise in an efficient market.

Quantitative analysis states that in an efficient market the price of a financial asset changes randomly in such a way that the most likely price at the next point in time is the current price, and prices further from the current price are less likely. Such a random process is called a martingale. (Do not confuse the martingale process with the Martingale money-management strategy; in French both words are spelled "martingale" but have different meanings.)
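The martingale property can be illustrated with a minimal simulation, assuming Gaussian zero-mean price shocks (an assumption for the sketch, not a claim about real markets):

```python
# Martingale property of an efficient-market price: a simulation sketch.
# A random walk with zero-mean shocks satisfies E[P_{t+1} | P_t] = P_t.

import random

def simulate_next_prices(p0, n_paths=100_000, seed=1):
    """Average next-step price over many independent zero-mean shocks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        total += p0 + rng.gauss(0.0, 1.0)   # next price = current + noise
    return total / n_paths

current_price = 100.0
expected_next = simulate_next_prices(current_price)
# The sample mean of the next price stays close to the current price:
print(abs(expected_next - current_price) < 0.05)  # True
```

The averaged next-step price converges to the current price, which is exactly why the most likely forecast in an efficient market is "no change".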

This means that short-term speculation in financial assets in an efficient market is impossible. The only way to make money in such a market is to buy securities for long-term ownership.

This is a "buy and hold" strategy.

If the efficient market hypothesis is violated, the prices of financial assets deviate from their equilibrium values. Depending on the particular hypothesis about how efficiency is violated, quantitative analysis can therefore construct mathematical models that make money on the difference between real and equilibrium prices.

Specific hypotheses about deviations from the basic model often lack a strict scientific basis in quantitative analysis. Different deviation hypotheses lead to different mathematical models of the financial market, and these models can in turn produce completely different forecasts of financial asset prices.

Therefore, depending on which deviation hypothesis financial market participants accept, they begin to follow one or another model of behavior in the market. This makes the task of testing the market for efficiency, that is, of measuring how far the market differs from an efficient one, very urgent.

In quantitative analysis this problem is solved by statistical testing of the hypotheses that underlie an efficient market. Such testing is possible when there is an adequate model that determines the returns of financial assets under market equilibrium.

Quantitative Analysis and Psychology

From the above it is clear that in financial markets there is also a connection between quantitative analysis and the psychology of traders and investors, as was the case for technical and fundamental analysis. Market prices of a financial asset can move in one direction or another depending on which deviation hypothesis is accepted by the proponents of quantitative analysis who command the largest share of the financial resources in the market.

Quantitative Time Series Analysis

Quantitative analysis of time series involves great mathematical difficulties, associated with the statistical nonstationarity of the price behavior of many exchange-traded assets.

When studying time series, the series of price changes of a financial asset is usually considered the sum of a dynamic component and a random component. The dynamic component depends on the fundamental economic laws according to which the price should change; the random term is associated with non-economic factors, for example the emotional behavior of traders or the release of force majeure news.

The task of quantitative analysis is to identify this dynamic component and filter out the random noise. The identified dynamic component can be extrapolated into the future; this extrapolation gives the mean of the predicted price. The filtered random noise makes it possible to estimate higher-order statistical moments, above all the second-order moment, i.e. the variance, which is associated with volatility. Knowing the variance and volatility allows the risks to be assessed.
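A minimal sketch of this separation, assuming synthetic data (a linear trend plus Gaussian noise) and a simple moving-average filter; real quantitative work uses far more sophisticated filters:

```python
# Separating a dynamic component from random noise: a moving-average sketch
# on synthetic data. The smoothed series tracks the underlying trend far
# more closely than the raw noisy series does.

import random

def moving_average(xs, window):
    half = window // 2
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

rng = random.Random(7)
trend = [0.1 * t for t in range(200)]               # dynamic component
prices = [m + rng.gauss(0.0, 1.0) for m in trend]   # trend + random noise
smooth = moving_average(prices, window=21)

# mean squared deviation from the true trend, raw vs. smoothed
mse_raw = sum((p - m) ** 2 for p, m in zip(prices, trend)) / len(trend)
mse_smooth = sum((s - m) ** 2 for s, m in zip(smooth, trend)) / len(trend)
print(mse_smooth < mse_raw)  # True: filtering recovers the dynamic part
```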

This time series analysis scheme is used, for example, when searching for signals from extraterrestrial civilizations amid cosmic radio noise, exactly the task in which the dynamic signal being sought is completely unknown to us.

But for the quantitative analysis of a time series of stock prices the task is much harder. After all, extraterrestrial civilizations, knowing the statistical and spectral characteristics of cosmic radio noise, would try to send signals that differ statistically and spectrally from that noise as much as possible, deliberately making it easier for other civilizations to find and recognize them.

The financial market, however, is no such intelligent agent. For price time series there is therefore no clean separation into dynamic and random components, and many mathematical signal-filtering methods simply do not work in quantitative analysis.

In fact, a time series of stock prices is the sum of several series. The first of these is a purely dynamic series; the last is a purely random series with a zero autocorrelation function. The intermediate terms are series whose autocorrelation functions vanish after some time, so that there is a whole spectrum of times at which the autocorrelation function vanishes.
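The behavior of the autocorrelation function can be illustrated with a small sketch; the white-noise and AR(1)-style persistent series below are synthetic assumptions, not market data:

```python
# Sample autocorrelation function: a sketch distinguishing a persistent
# series from white noise, whose ACF is near zero at all nonzero lags.

import random

def acf(xs, lag):
    """Sample autocorrelation of xs at the given lag."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[t] - mean) * (xs[t + lag] - mean) for t in range(n - lag)) / n
    return cov / var

rng = random.Random(3)
noise = [rng.gauss(0.0, 1.0) for _ in range(5000)]

# AR(1)-style persistent series: each value remembers the previous one
persistent = [0.0]
for _ in range(4999):
    persistent.append(0.9 * persistent[-1] + rng.gauss(0.0, 1.0))

print(abs(acf(noise, 1)) < 0.1)   # True: white noise decorrelates at once
print(acf(persistent, 1) > 0.5)   # True: persistence survives at lag 1
```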

Conclusion

In economics and finance, statistical models and methods are called econometrics. On the one hand, quantitative analysis of the financial market based on econometric models and methods is a development of traditional fundamental analysis under market uncertainty. On the other hand, quantitative analysis attempts to justify methods of studying historical data more rigorously. In time, this may lead to a closer connection between quantitative and technical analysis.

The abstraction stage in the study of physical phenomena or technical objects consists of identifying their most essential properties and features and presenting them in the simplified form needed for subsequent theoretical and experimental research. Such a simplified representation of a real object or phenomenon is called a model.

When using models, some data and properties of the real object are deliberately discarded in order to obtain a solution more easily, provided these simplifications have only an insignificant effect on the results.

Depending on the purpose of the research, various models can be used for the same technical device: physical, mathematical, simulation.

A model of a complex system can be represented as a block structure, that is, as a connection of links, each of which performs a specific technical function (a functional diagram). As an example, consider the generalized model of an information transmission system shown in Figure 1.2.


Figure 1.2 – Generalized model of an information transmission system

Here, the transmitter is a device that converts a message from source A into signals S best matched to the characteristics of the given channel. Operations performed by the transmitter may include primary signal conditioning, modulation, encoding, data compression, etc. The receiver processes the signals X(t) = S(t) + x(t) at the channel output (taking into account the influence of additive and multiplicative noise x) in order to best reproduce (restore) the transmitted message A at the receiving end. The channel (in the narrow sense) is the medium used to transmit signals from the transmitter to the receiver.

Another example of a complex system model is a phase-locked loop (PLL), used to stabilize the intermediate frequency (IF) in radio receivers (Figure 1.3).





Figure 1.3 – PLL system model

The system is designed to stabilize the intermediate frequency f_if = f_c - f_g by correspondingly changing the frequency f_g of the tunable oscillator (local oscillator) when the signal frequency f_c changes. The frequency f_g is in turn changed by a controlled element in proportion to the output voltage of the phase discriminator, which depends on the phase difference between the intermediate frequency f_if and the reference oscillator frequency f_0.

These models make it possible to obtain a qualitative description of processes, highlight the features of the functioning and performance of the system as a whole, and formulate research objectives. But for a technical specialist this is usually insufficient. It is necessary to find out exactly (preferably in figures and graphs) how well the system or device works, identify quantitative indicators of effectiveness, and compare proposed technical solutions with existing analogues in order to make an informed decision.

For theoretical research, and to obtain not only qualitative but also quantitative indicators and characteristics, a mathematical description of the system must be produced, that is, its mathematical model created.

Mathematical models can be represented by various mathematical means: graphs, matrices, differential or difference equations, transfer functions, graphical connection of elementary dynamic links or elements, probabilistic characteristics, etc.

Thus, the first main question arising in the quantitative analysis and calculation of electronic devices is compiling, to the required degree of approximation, a mathematical model that describes changes in the state of the system over time.

A graphical representation of a system as a connection of links, where each link is associated with a mathematical operation (a differential equation, transfer function, or complex transfer coefficient), is called a block diagram. Here the main role is played not by the physical structure of a link but by the relationship between its input and output variables. Thus, quite different systems can be dynamically equivalent, and once the functional diagram is replaced by the structural one, general methods of system analysis can be applied regardless of the field of application, physical implementation, and operating principle of the system under study.

Contradictory requirements are placed on a mathematical model: on the one hand, it must reflect the properties of the original as fully as possible; on the other, it must be simple enough not to complicate the study. Strictly speaking, every technical system (or device) is nonlinear and nonstationary and contains both lumped and distributed parameters. Obviously, an exact mathematical description of such systems is very difficult and rarely of practical necessity. The success of system analysis depends on how correctly the degree of idealization or simplification is chosen for the mathematical model.

For example, any active resistance R may depend on temperature and exhibit reactive properties at high frequencies. At high currents and operating temperatures its characteristics become significantly nonlinear. At the same time, at normal temperature, low frequencies, and in small-signal mode, these properties can be ignored and the resistance treated as an inertia-free linear element.

Thus, in a number of cases, with a limited range of parameter variation, the model can be significantly simplified by neglecting the nonlinearity of the characteristics and the nonstationarity of the parameters of the device under study; this allows, for example, its analysis with the well-developed mathematical apparatus of linear systems with constant parameters.

As an example, Figure 1.4 shows a block diagram (a graphical representation of the mathematical model) of the PLL system. If the frequency instability of the input signal is slight, the nonlinearities of the phase discriminator and the controlled element can be neglected. In this case the functional elements shown in Figure 1.3 can be modeled as linear links described by corresponding transfer functions.
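As a rough illustration of linearized loop behavior, the sketch below simulates a first-order frequency-tracking approximation of such a loop; the loop gain, frequencies, and step count are illustrative assumptions, not values from the text:

```python
# Linearized first-order tracking loop: a discrete-time sketch. After a
# step change in the input frequency, the loop retunes the local
# oscillator so that the frequency error decays toward zero.

def simulate_pll(f_signal, f_lo_initial, gain=0.2, steps=200):
    """Track the input frequency with a proportional frequency correction."""
    f_lo = f_lo_initial
    errors = []
    for _ in range(steps):
        error = f_signal - f_lo   # detuning seen by the loop
        f_lo += gain * error      # controlled element retunes the oscillator
        errors.append(abs(error))
    return errors

errors = simulate_pll(f_signal=10.7, f_lo_initial=10.0)
print(errors[0] > 0.5 and errors[-1] < 1e-6)  # True: detuning decays
```

In this linear regime the error decays geometrically (by a factor of 1 - gain per step), which is the kind of behavior a transfer-function description of the loop captures.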



Figure 1.4 – Block diagram (graphical representation of the mathematical model) of the PLL system

Designing electronic circuits using computer analysis and optimization programs, as noted above, has a number of advantages over the traditional method of designing "by hand" with subsequent refinement on a breadboard. First, computer analysis programs make it far easier to observe the effect of varying circuit parameters than experimental studies do. Second, critical operating modes of a circuit can be analyzed without physically destroying its components. Third, analysis programs make it possible to evaluate circuit operation under the worst combination of parameters, which is difficult and not always possible experimentally. Fourth, the programs allow measurements on a model of an electronic circuit that are difficult to perform experimentally in the laboratory.

The use of a computer does not exclude experimental research (and indeed presupposes subsequent testing on a prototype), but it gives the designer a powerful tool that can significantly reduce design time and development cost. The effect is especially significant when designing complex devices (for example, integrated circuits), when a large number of factors affecting circuit operation must be taken into account and experimental rework is too expensive and time-consuming.

Despite the obvious advantages, the use of computers has raised great difficulties: mathematical models of electronic circuit components must be developed and libraries of their parameters created, mathematical methods for analyzing the diverse operating modes of various devices and systems must be improved, high-performance computing systems must be built, and so on. In addition, many tasks have proved beyond the reach of computers. For most devices, the structure and circuit diagram largely depend on the application area and initial design data, which makes computer synthesis of circuit diagrams very difficult. In such cases the initial version of the circuit is drawn up by an engineer "manually", followed by modeling and optimization on a computer. The greatest achievements in programs for structural synthesis and synthesis of circuit diagrams are in the design of matching circuits, analog and digital filters, and devices based on programmable logic arrays (PLA).

When developing a mathematical model, a complex system is divided into subsystems, and for many subsystems the mathematical models can be unified and collected in appropriate libraries. Thus, when studying electronic devices with computer modeling programs, a schematic or block diagram is a graphical representation of components, each associated with a selected mathematical model.

To study circuit diagrams, models of typical independent sources, transistors, passive components, integrated circuits, and logic elements are used.

To study systems specified by structural diagrams, it is important to define the relationship between input and output variables. In this case the output of any structural component is represented as a dependent source. Typically this relationship is specified either by a polynomial function or by a fractional-rational transfer function in the Laplace operator. By choosing the function coefficients, models of such structural components as an adder, subtractor, multiplier, integrator, differentiator, filter and amplifier can be obtained.
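This input-output view of structural components can be sketched in code. The integrator step size and test signals below are illustrative assumptions, and the discrete rectangular-rule integrator only approximates its continuous counterpart:

```python
# Structural components as input-output blocks: a discrete-time sketch of
# an integrator and an adder, the kind of links a block diagram composes.

def integrator(xs, dt=0.1):
    """Discrete (rectangular-rule) integration of a sampled signal."""
    total, out = 0.0, []
    for x in xs:
        total += x * dt
        out.append(total)
    return out

def adder(xs, ys):
    """Sum two sampled signals element by element."""
    return [x + y for x, y in zip(xs, ys)]

# Integrating a constant input of 1.0 produces a linear ramp
ramp = integrator([1.0] * 10, dt=0.1)
print(round(ramp[-1], 3))  # 1.0

print(adder([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
```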

Modern computer modeling programs contain dozens of libraries of various models, and each library contains dozens or hundreds of models of modern transistors and microcircuits from leading manufacturers. These libraries often make up the bulk of the software. At the same time, during modeling it is possible to quickly adjust the parameters of existing models or create new ones.
