Vadim Shmal "Effective Methods and Transportation Processes Management Models at the Railway Transport. Textbook"

Igor Shapkin, Doctor of Technical Sciences, Professor, Russian University of Transport (MIIT). Pavel Minakov, Ph.D., Associate Professor, Russian University of Transport (MIIT). Vadim Shmal, Ph.D., Associate Professor, Russian University of Transport (MIIT).

Publisher: Издательские решения

ISBN: 9785006076716

Age restriction: 18+

Updated: 03.11.2023

Sometimes (relatively rarely) a study yields a single, strictly optimal solution. Much more often it is only possible to identify a region of nearly equivalent optimal solutions, within which the final choice can be made. The decision-making itself goes beyond the scope of operations research and falls within the competence of the responsible person, or more often a group of persons, who are given the right of final choice.

In making this choice, they can take into account, along with the recommendations arising from the mathematical calculation, a number of considerations (quantitative and qualitative) that were not covered by the calculation.

The indispensable presence of a person (as the final instance of decision-making) is not eliminated even by a fully automated control system, which, it would seem, makes the optimal decision for each situation without human intervention. We must not forget that the very creation of the control algorithm, the choice of one of its possible variants, is itself a decision, and a very responsible one. As automated control systems (ACS) and intelligent transport systems (ITS) develop, human functions are not abolished but simply move from one level to another, higher one.

The parameters that combine to form a solution are called solution elements. For example, if you plan to transport goods, the elements of the solution will be numbers that indicate how much cargo will be sent from each point of origin to each destination, the routes of the goods and the time of delivery.

In the simplest problems of operations research, the number of solution elements can be relatively small. However, in most tasks of practical importance, the number of elements of the solution is very large, which, of course, makes it difficult to analyze the situation and make recommendations. As a rule, any task of operations research results in a whole scientific study performed collectively, which takes a lot of time and requires the mandatory use of computer technology.

In addition to the elements of the solution, which we can dispose of within certain limits, any problem of operations research also contains given, «disciplining» conditions that are fixed from the outset and cannot be violated. Such conditions include, in particular, the means (material, technical, technological, human) that we have the right to dispose of, and various kinds of restrictions imposed on the solution.

2.2 Mathematical modeling of operations

For the application of quantitative research methods in any field, some kind of mathematical model is always required. When constructing a mathematical model, a real phenomenon (in our case, an operation) is always simplified, schematized, and the resulting scheme is described using one or another mathematical apparatus. The more successfully the mathematical model is chosen, the better it will reflect the characteristic features of the phenomenon, the more successful the study will be and the more useful the recommendations arising from it.

There are no general ways to construct mathematical models. In each case, the model is selected based on the target orientation of the operation and the research task, taking into account the required accuracy of the solution and the accuracy with which we can know the initial data. If the initial data is known inaccurately, then, obviously, there is no point in building a very detailed, subtle and accurate model of the phenomenon and wasting time (your own and machine) on subtle and accurate optimization of the solution. Unfortunately, this principle is often neglected in practice and excessively detailed models are chosen to describe phenomena.

The model should reflect the most important features of the phenomenon, i.e. it should take into account all the essential factors on which the success of the operation most depends. At the same time, the model should be as simple as possible, not «clogged» with a mass of small, secondary factors, since taking them into account complicates mathematical analysis and makes the results of the study difficult to interpret. In a word, the making of mathematical models is precisely an art, and experience in this matter is acquired gradually. Two dangers always lie in wait for the compiler of the model: the first is to drown in detail («you can’t see the forest for the trees»); the second is to coarsen the phenomenon too much («throw out the baby with the bathwater»). Therefore, when solving problems of operations research, it is always useful to compare the results obtained by different models, to arrange a kind of «model dispute». The same problem is solved not once but several times, using different systems of assumptions, different apparatus, different models.

If scientific conclusions change little from model to model, this is a serious argument in favor of the objectivity of the study. If they differ significantly, it is necessary to revise the concepts underlying the various models, to see which of them is most adequate to reality. It is also characteristic of the operations study to re-refer to the model (after the study in the first approximation has already been performed) to make the necessary adjustments to this model.

The construction of a mathematical model is the most important and responsible part of the study, which requires deep knowledge not only and not so much of mathematics, but of the essence of the phenomena being modeled. As a rule, «pure» mathematicians do not cope with this task well without the help of specialists in this field. They focus on the mathematical apparatus with its subtleties, and not the correspondence of the model to the real phenomenon. Experience shows that the most successful models are created by specialists in this field of practice, who have received deep mathematical training in addition to the main one, or by groups that unite specialists and mathematicians.

The mathematical training of a specialist wishing to engage in operations research in his field of practice should be quite broad. Along with classical methods of analysis, it should include a number of modern branches of mathematics, such as optimization methods, including linear, nonlinear and dynamic programming, methods of machine search for extrema, etc. Special requirements for probabilistic training are related to the fact that most operations are carried out in conditions of incomplete certainty; their course and outcome depend on random factors, such as meteorological conditions, fluctuations in supply and demand, failures of technical devices, etc. Therefore, creative work in the field of operations research requires a good command of probability theory, including its newest sections: the theory of stochastic processes, information theory, the theory of games and statistical decisions, and queuing theory.

When constructing a mathematical model, a mathematical apparatus of varying complexity can be used (depending on the type of operation and the research tasks). In the simplest cases, the model is described by simple algebraic equations. In more complex ones, when it is necessary to consider the phenomenon in dynamics, the apparatus of differential equations, both ordinary and partial, is used. In the most difficult cases, when the development of the operation in time depends on a large number of intricately intertwined random factors, the method of statistical modeling is used. As a first approximation, the idea of the method can be described as follows: the process of development of the operation is, as it were, «copied», reproduced on a computer with all its attendant randomness. Thus, one instance (one realization) of the random process (operation) is built, with a random course and outcome. By itself, one such realization gives no grounds for choosing a solution, but, having obtained a set of such realizations, we process them as ordinary statistical material (hence the term «statistical modeling»), derive the average characteristics over the set of realizations and get an idea of how, on average, the conditions of the problem and the elements of the solution affect the course and outcome of the operation.
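The statistical-modeling idea can be sketched in a few lines of Python. The «operation» below (a batch of departures, each independently delayed with some probability) and all its numbers are invented purely for illustration: one realization is simulated many times, and the results are then processed as ordinary statistical material.

```python
import random

def simulate_once(rng, trains=10, p_delay=0.2, delay_cost=3.0):
    """One realization of a hypothetical operation: each of `trains`
    departures is independently delayed with probability p_delay,
    and every delay adds delay_cost to the total cost."""
    return sum(delay_cost for _ in range(trains) if rng.random() < p_delay)

def estimate_mean_cost(n_runs=10_000, seed=1):
    """Average the outcomes of many realizations (statistical modeling)."""
    rng = random.Random(seed)
    total = sum(simulate_once(rng) for _ in range(n_runs))
    return total / n_runs

# The average over many realizations approaches the expectation
# 10 * 0.2 * 3.0 = 6.0.
print(round(estimate_mean_cost(), 2))
```

The single call to `simulate_once` is one «instance» of the random process; only the averaging over `n_runs` realizations yields a usable characteristic of the operation.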

In the study of operations whose course is influenced by random factors (the so-called «stochastic problems of operations research»), both analytical and statistical models are used. Each of these types of models has its advantages and disadvantages. Analytical models are coarser than statistical ones, take into account fewer factors, and inevitably require some assumptions and simplifications. These models can describe the phenomenon only approximately, schematically, but the results of such modeling are more visual and more clearly reflect the patterns inherent in the phenomenon. And most importantly, analytical models are more suitable for finding optimal solutions, which can also be carried out by analytical methods, using all the means of modern mathematics.

Statistical models, in comparison with analytical ones, are more accurate and detailed, do not require such crude assumptions, and allow us to take into account a large (in theory, infinitely large) number of factors. It would seem that they are closer to reality and should be preferred. However, they also have their drawbacks: comparative bulkiness, high consumption of computer time; poor visibility of the results obtained and the difficulty of comprehending them. And most importantly, the extreme difficulty of finding the optimal solutions that have to be sought «by touch», by guesses and trials.

Young professionals with limited experience in operations research, having modern computers at their disposal, often needlessly begin a study with the construction of its statistical model, trying to take into account in this model a huge number of factors (the more, the better). As a result, many of these models remain «stillborn», since no methodology has been developed for applying them and comprehending their results, translating them into the rank of recommendations.

The best results are obtained when analytical and statistical models are used together. A simple analytical model allows you to understand the basic laws of the phenomenon, outline, as it were, its «contour», and indicate a reasonable solution in the first approximation. After that, any refinement can be obtained by statistical modeling. If the results of statistical modeling do not diverge too much from the results of analytical modeling, this gives us reason not only in this case, but also in many similar ones, to apply an analytical model. If the statistical model gives significantly different results compared to the analytical one, a system of corrections to the analytical solution can be developed such as «empirical formulas» that are widely used in technology.

When optimizing solutions, it can also be very useful to optimize them in advance on an analytical model. This will allow, when using a more accurate statistical model, to search for the optimal solution not quite at random, but in a limited area containing solutions that are close to the optimal ones in the analytical model. Given that in practice we are rarely interested in a single, exactly optimal solution, more often it is necessary to indicate the area in which it lies, analytical optimization methods, tested and supported by statistical modeling, can be a valuable tool for making recommendations.

The construction of a mathematical model of an operation is not important in itself; it is aimed at identifying optimal solutions. It is advisable to choose the solution that ensures the maximum efficiency of the operation. By the effectiveness of an operation we mean, of course, the measure of its success: the degree to which it is suited to achieving the goal set before it.

In order to compare various solutions in terms of effectiveness, it is necessary to have some kind of quantitative criterion, an indicator of effectiveness (it is often called the «objective function»). This indicator is selected so that it best reflects the target orientation of the operation. To choose a performance indicator, you must first ask yourself: what do we want, what do we strive for when undertaking the operation? When choosing a solution, we prefer the one that turns the performance indicator into a maximum (or minimum).

Very often the cost of performing the operation serves as the performance indicator, which, of course, needs to be minimized. For example, if the operation aims to change the production technology so as to reduce the cost of production as much as possible, then it is natural to take the average cost as the indicator of efficiency and prefer the solution that turns this indicator into a minimum.

In some cases, it happens that the operation pursues a well-defined goal A, which alone can be achieved or not achieved (we are not interested in any intermediate results). Then the probability of achieving this goal is chosen as an indicator of effectiveness. For example, if you are shooting at an object with the sine qua non condition of destroying it, the probability of destroying the object will be an indicator of effectiveness.

Choosing the wrong performance indicator is very dangerous, as it can lead to incorrect recommendations. Operations organized from the point of view of an unsuccessfully chosen indicator can lead to large unjustified costs and losses (recall at least the notorious «gross output» as the main criterion for the economic activity of enterprises).

2.3 Different types of operations research problems and methods for solving them

The problems of operations research are divided into two categories: a) direct and b) inverse. Direct problems answer the question: what will happen if, under given conditions, we make such and such a decision? In particular, what value will the selected performance indicator W take for this decision?

Inverse problems answer the question: how should the elements of the solution be selected in order for the efficiency indicator W to turn to the maximum?

Naturally, direct problems are simpler than inverse ones. It is also obvious that in order to solve an inverse problem, one must first of all be able to solve the direct one. This purpose is served by the mathematical model of the operation, which makes it possible to calculate the efficiency indicator W (and, if necessary, other characteristics) for any given conditions, for any solution.

If the number of possible solutions is small, then by calculating the W value for each of them and comparing the values obtained with each other, you can directly specify one or more optimal options for which the efficiency indicator reaches a maximum. However, when the number of possible solutions is large, the search for the optimal one among them «blindly», by simple search, is difficult, in some cases it is almost impossible. For this purpose, special methods of targeted search are used (we will get acquainted with some of them later). Now we will limit ourselves to the formulation of the problem of optimizing the solution in the most general form.
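When the candidate solutions are few, the search for the optimum really does reduce to computing W for each and comparing. A minimal Python sketch, with a toy indicator W invented purely for illustration:

```python
# Hypothetical direct problem: W(x) is computed by a model for each
# candidate solution; with few candidates we simply enumerate them all.
def W(x):
    # toy efficiency indicator: revenue grows linearly, cost quadratically
    return 10 * x - x ** 2

candidates = [0, 1, 2, 3, 4, 5, 6, 7, 8]
# direct problem solved for every candidate, then compared
best = max(candidates, key=W)
print(best, W(best))  # the maximum of 10x - x^2 on this grid is at x = 5
```

With many solution elements the candidate set grows combinatorially, which is exactly why the targeted search methods mentioned in the text become necessary.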

Let there be an operation «O», the success of which we can influence to some extent by choosing in one way or another the parameters (elements of the solution) that depend on us. The efficiency of the operation is characterized by the efficiency indicator W, which is required to be turned to the maximum.

Suppose that the direct problem is solved and the mathematical model allows you to calculate the value of W for any chosen solution, for any set of conditions.

Let us first consider the simplest (so-called «deterministic») case, when the conditions for performing the operation are fully known, i.e. do not contain an element of uncertainty. Then all the factors on which the success of the operation depends are divided into two groups:

1) Predetermined factors (conditions) α1, α2, … over which we have no influence (in particular, restrictions imposed on the decision);

2) Factors depending on us (elements of the solution) x1, x2,… which we, within certain limits, can choose at our discretion.

The W performance indicator depends on both groups of factors. We will write this in the form of a formula:

W = W(α1, α2, …; x1, x2, …). (1)

It is assumed that the form of dependence (1) is known to us and that, with the help of the mathematical model, we can calculate the value of W for any given α1, α2, …, x1, x2, … (i.e., the direct problem is solved). Then the inverse problem can be formulated as follows:

Under given conditions α1, α2, …, find such elements of the solution x1, x2, … as turn the indicator W to the maximum.

Before us is a typically mathematical problem belonging to the class of so-called variational problems. Methods for solving such problems are analyzed in detail in mathematics. The simplest of these methods (the well-known «maximum and minimum problems») are familiar to every engineer: to find the maximum or minimum (in short, the «extremum») of a function, differentiate it with respect to its arguments, equate the derivatives to zero and solve the resulting system of equations. However, this classical method has only limited application in operations research. First, when there are many arguments, solving the system of equations is often not easier but harder than the direct search for an extremum. Second, the extremum is often reached not at the point where the derivatives vanish (such a point may not exist at all), but somewhere on the boundary of the region of change of the arguments. All the specific difficulties of the so-called «multidimensional variational problem in the presence of constraints» arise, sometimes unbearable in complexity even for modern computers. In addition, we must not forget that the function W may not have derivatives at all, for example, if it is defined only for integer values of the arguments. All this makes the task of finding an extremum far from being as easy as it seems at first glance. The optimization method should always be chosen based on the features of the function W and the type of constraints imposed on the elements of the solution. For example, if the function W depends linearly on the elements of the solution x1, x2, …, and the constraints imposed on x1, x2, … have the form of linear equalities or inequalities, the problem of linear programming arises, which is solved by relatively simple methods (we will get acquainted with some of them later).
If the function W is convex, special methods of «convex programming» are used, with their special case of «quadratic programming». To optimize the management of multi-stage operations, the method of «dynamic programming» can be applied. Finally, there is a whole set of numerical methods for finding the extrema of functions of many arguments, specially adapted for implementation on computers. Thus, the problem of optimizing the solution in the considered deterministic case reduces to the mathematical problem of finding the extremum of a function, which can present computational, but not fundamental, difficulties.
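As a sketch of the linear-programming case, the following Python fragment solves a tiny two-variable problem using the textbook fact that the optimum of a linear function over a polygonal feasible region is attained at a vertex. The coefficients are invented; a real problem would of course use a proper simplex-method implementation rather than vertex enumeration.

```python
from itertools import combinations

# Maximize W = 3*x1 + 2*x2 subject to constraints of the form
# a*x1 + b*x2 <= c (including sign constraints x1 >= 0, x2 >= 0).
constraints = [
    (1, 1, 4),    # x1 + x2 <= 4
    (1, 0, 3),    # x1 <= 3
    (0, 1, 2),    # x2 <= 2
    (-1, 0, 0),   # x1 >= 0
    (0, -1, 0),   # x2 >= 0
]

def vertices(cons):
    """Intersect every pair of boundary lines; keep the feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries never intersect
        x = (c1 * b2 - c2 * b1) / det   # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

best = max(vertices(constraints), key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # (3.0, 1.0), giving W = 11
```

Enumerating vertex candidates is exponential in the number of constraints, which is precisely why the «relatively simple methods» mentioned in the text (the simplex method and its relatives) matter in practice.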

Let’s not forget, however, that we have considered so far the simplest case, when only two groups of factors appear in the problem: the given conditions α1, α2, … and the solution elements x1, x2, … The real problems of operations research are often reduced to a scheme where, in addition to the two groups of factors α1, α2, …, x1, x2, …, there is a third: unknown factors ξ1, ξ2, …, whose values cannot be predicted in advance.

In this case, the W performance indicator depends on all three groups of factors:

W = W(α1, α2, …; x1, x2, …; ξ1, ξ2, …).

And the problem of solution optimization can be formulated as follows:

Under given conditions α1, α2, …, taking into account the presence of unknown factors ξ1, ξ2, …, find such elements of the solution x1, x2, … as, if possible, ensure the maximum value of the efficiency indicator W.

This is another, not purely mathematical problem (it is not for nothing that the reservation «if possible» is made in its formulation). The presence of unknown factors translates the problem into a new quality: it turns into a problem of choosing a solution under conditions of uncertainty.

However, uncertainty is uncertainty. If the conditions for the operation are unknown, we cannot optimize the solution as successfully as we could with more information. Therefore, any decision made under conditions of uncertainty is, generally speaking, worse than one made under known conditions. Our task is to impart to our decision, as far as possible, the features of reasonableness. It is not for nothing that one of the prominent foreign experts in operations research, T. L. Saaty, defining his subject, writes that «operations research is the art of giving bad answers to those practical questions to which even worse answers are given by other methods.»

The task of making a decision under conditions of uncertainty is encountered at every step in life. Suppose, for example, that we are going on a trip and are packing a suitcase. The size of the suitcase is limited (conditions α1, α2, …), and the weather in the travel areas is not known in advance (ξ1, ξ2, …). Which items of clothing (x1, x2, …) should we take with us? This problem of operations research is, of course, solved by us without any mathematical apparatus, although based on some statistical data, say, about the weather in different areas, as well as our own tendency to catch colds; something like an optimization of the decision, consciously or unconsciously, we do perform. Curiously, different people seem to use different performance indicators. A young person is likely to seek to maximize the number of pleasant impressions from the trip, while an elderly traveler perhaps wants to minimize the likelihood of illness.

And now let’s take a more serious task. A system of protective structures is being designed to protect the area from floods. Neither the moments of the onset of floods, nor their size are known in advance. And you still need to design.

In order to make such decisions not at random, by inspiration, but soberly, with open eyes, modern science has a number of methodological techniques. The use of one or the other of them depends on the nature of the unknown factors, where they come from and by whom they are controlled.

The simplest case of uncertainty is when the unknown factors ξ1, ξ2, … are random variables (or random functions) whose statistical characteristics (say, distribution laws) are known to us or can in principle be obtained. We will call such problems of operations research stochastic problems, and the inherent uncertainty stochastic uncertainty.

Here is an example of a stochastic problem of operations research. Suppose the work of a catering enterprise is being organized. We do not know in advance exactly how many visitors will come in a working day, how long the service of each of them will take, and so on. However, the characteristics of these random variables, if not already at our disposal, can be obtained statistically.

Let us now assume that we have before us a stochastic problem of operations research, and the unknown factors ξ1, ξ2, … are ordinary random variables with some (in principle known) probabilistic characteristics. Then the efficiency indicator W, depending on these factors, will also be a random variable.

The first thing that comes to mind is to take as an indicator of efficiency not the random variable W itself, but its average value (mathematical expectation)

W̄ = M[W(α1, α2, …; x1, x2, …; ξ1, ξ2, …)]

and choose such a solution x1, x2,.., in which this average value turns into a maximum.

Note that this is exactly what we did, choosing in a number of examples of operations, the outcome of which depends on random factors, as an indicator of efficiency, the average value of the value that we wanted to turn into a maximum (minimum). This is the «average income» per unit of time, «average relative downtime», etc. In most cases, this approach to solving stochastic problems of operations research is fully justified. If we choose a solution based on the requirement that the average value of the performance indicator is maximized, then, of course, we will do better than if we chose a solution at random.
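Choosing, among several candidate solutions, the one with the greatest average value of W can be sketched as follows. The «operation» (stocking x units of a good against a random demand ξ) and all its figures are invented for illustration; the mean of W is estimated by the statistical modeling described earlier.

```python
import random

# Hedged newsvendor-style sketch: pick the stock level x that maximizes
# the average of W(x, xi) over a random demand xi (all numbers invented).
PRICE, COST = 5.0, 3.0

def W(x, demand):
    # revenue on the units actually sold, minus the cost of all units stocked
    return PRICE * min(x, demand) - COST * x

def mean_W(x, n=20_000, seed=7):
    """Estimate M[W(x, xi)] by averaging over simulated demands.
    The fixed seed gives every candidate x the same demand sample,
    which makes the comparison between candidates fair."""
    rng = random.Random(seed)
    return sum(W(x, rng.randint(0, 10)) for _ in range(n)) / n

best = max(range(11), key=mean_W)
print(best)
```

For a uniform demand on 0..10 with these prices, the expectation M[W] is maximized at x = 4; the simulated averages reproduce that choice.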

But what about the element of uncertainty? Of course, to some extent it remains. The success of each individual operation carried out with random values of the parameters ξ1, ξ2, … can be very different from the expected average, both upwards and, unfortunately, downwards. We should be comforted by the following: by organizing the operation so that the average value of W is maximized and repeating the same (or similar) operations many times, we will ultimately gain more than if we did not use the calculation at all.

Thus, the choice of a solution that maximizes the average value of the efficiency indicator W is fully justified when it comes to operations with repeatability. A loss in one case is compensated by a gain in another, and in the end our solution will be profitable.

But what if we are talking about an operation that is not repeatable, but unique, carried out only once? Here, a solution that simply maximizes the average value of W would be imprudent. It would be more cautious to guard against unnecessary risk by demanding, for example, that the probability of obtaining an unacceptably small value of W, say W ≤ w0, be sufficiently small:

P(W ≤ w0) ≤ δ,

where δ is some small number, so small that an event with probability δ can be considered practically impossible. This constraint can be taken into account when solving the optimization problem along with the others: then we will look for a solution that maximizes the average value of W under an additional, «reinsurance» condition.
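The «reinsurance» condition P(W ≤ w0) ≤ δ can be combined with maximization of the mean as in the following sketch. The candidate solutions, the loss mechanism and the thresholds are all invented; the probability is estimated by simulation.

```python
import random

# Among candidate solutions, first discard those that violate the
# risk constraint P(W <= W0) <= DELTA, then maximize the mean of W.
W0, DELTA = -5.0, 0.05

def outcomes(x, n=20_000, seed=3):
    """Simulated outcomes of a hypothetical operation at decision level x:
    gain 2x with probability 0.9, lose x with probability 0.1."""
    rng = random.Random(seed)
    return [2 * x if rng.random() < 0.9 else -x for _ in range(n)]

def admissible(x):
    ws = outcomes(x)
    return sum(w <= W0 for w in ws) / len(ws) <= DELTA

feasible = [x for x in range(11) if admissible(x)]
best = max(feasible, key=lambda x: sum(outcomes(x)) / len(outcomes(x)))
print(best, feasible)
```

Without the constraint the mean alone would push x to its maximum; the «reinsurance» condition cuts off every x whose possible loss reaches w0 with probability above δ, so the cautious optimum stops at x = 4.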

The case of stochastic uncertainty of conditions considered above is relatively benign. The situation is much worse when the unknown factors ξ1, ξ2, … cannot be described by statistical methods. This happens in two cases: either the probability distribution for the parameters ξ1, ξ2, … exists in principle, but the corresponding statistical data cannot be obtained, or such a probability distribution does not exist at all.

Let us give an example related to the last, most «harmful» category of uncertainty. Suppose that some commercial and industrial operation is planned, the success of which depends on the length of skirts ξ that women will wear in the coming year. The probability distribution for the parameter ξ cannot, in principle, be obtained from any statistical data. One can only try to guess its plausible values in a purely speculative way.

Let us consider just such a case of «bad uncertainty»: the effectiveness of the operation depends on unknown parameters ξ1, ξ2, …, about which we have no information and can only make suppositions. Let us try to solve the problem.

The first thing that comes to mind is to assign some (more or less plausible) values to the parameters ξ1, ξ2, … and find a conditionally optimal solution for them. Suppose that, having spent a lot of effort and time (our own and the machine’s), we have done so. So what? Will the conditionally optimal solution found be good under other conditions? As a rule, no. Therefore, its value is limited. In such a case it is reasonable to seek not a solution that is optimal for some particular conditions, but a compromise solution which, while not optimal for any specific conditions, is still acceptable over their whole range. At present, a full-fledged scientific «theory of compromise» does not yet exist (although there are some attempts in this direction in decision theory). Usually the final choice of a compromise solution is made by a person. Based on preliminary calculations, in the course of which a large number of direct problems are solved for different conditions and different solutions, he can assess the strengths and weaknesses of each option and make a choice based on these estimates. To do this, it is not necessary (although sometimes curious) to know the exact conditional optimum for each set of conditions. Mathematical variational methods recede into the background in this case.

When considering problems of operations research with «bad uncertainty», it is always useful to confront different approaches and different points of view in a dispute. Among the latter, one should note a position often used because of its mathematical definiteness, which can be called the «position of extreme pessimism». It boils down to this: always count on the worst conditions and choose the solution that gives the maximum effect under these worst conditions. If, under these conditions, it gives a value of the efficiency indicator equal to W*, then under no circumstances will the efficiency of the operation be less than W* (the «guaranteed payoff»). This approach is tempting because it gives a clear formulation of the optimization problem and the possibility of solving it by correct mathematical methods. But, using it, we must not forget that this point of view is extreme, and on its basis one can only obtain an extremely cautious, «reinsurance» decision, which is unlikely to be reasonable. Calculations based on the point of view of «extreme pessimism» should always be adjusted with a reasonable dose of optimism. It is hardly advisable to take the opposite point of view, that of extreme or «dashing» optimism, always counting on the most favorable conditions, but a certain amount of risk in making a decision should still be present.
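The «position of extreme pessimism» amounts to a maximin rule: for each candidate solution take the worst case over the unknown conditions, then pick the solution with the best worst case. A minimal sketch with an invented payoff table:

```python
# Rows: candidate solutions; columns: possible (unknown) conditions.
# All payoff values are invented for illustration.
W = {
    "A": [3, 5, 9],
    "B": [4, 4, 4],
    "C": [1, 8, 8],
}

# worst-case payoff of each solution over all conditions
guaranteed = {x: min(row) for x, row in W.items()}
# maximin choice: best guaranteed payoff W*
best = max(guaranteed, key=guaranteed.get)
print(best, guaranteed[best])  # B 4: no worse than 4 under any conditions
```

Note how the cautious rule rejects solutions A and C even though each of them can pay far more than 4 under favorable conditions: exactly the «reinsurance» bias the text warns about.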

Let us mention one rather original method used when choosing a solution under conditions of «bad uncertainty»: the so-called method of expert assessments. It is often used in other fields, such as futurology. Roughly speaking, it consists in gathering a team of competent people («experts»), each of whom is asked to answer a question (for example, to name the date when this or that discovery will be made); the answers obtained are then processed like statistical material, making it possible (to paraphrase T. L. Saaty) «to give a bad answer to a question that cannot be answered in any other way». Such expert assessments of unknown conditions can also be applied to problems of operations research under «bad uncertainty». Each expert evaluates the degree of plausibility of various variants of conditions, attributing to them some subjective probabilities. Despite the subjective nature of each expert’s estimates, by averaging the estimates of the whole team one can obtain something more objective and useful. Incidentally, the subjective assessments of different experts do not differ as much as one might expect. In this way, the problem of operations research with «bad uncertainty» is seemingly reduced to a relatively benign stochastic problem. Of course, the result obtained cannot be treated too trustingly, forgetting its dubious origin, but along with results arising from other points of view, it can still help in choosing a solution.
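A crude sketch of the expert-assessment procedure: the subjective probabilities assigned by each expert to the possible conditions are averaged, and the resulting distribution is used in a stochastic formulation. All figures are invented.

```python
# Each row: one expert's subjective probabilities for three possible
# variants of conditions (each row sums to 1).
experts = [
    [0.5, 0.3, 0.2],
    [0.6, 0.2, 0.2],
    [0.4, 0.4, 0.2],
]
# average the panel's estimates column by column
avg = [sum(col) / len(experts) for col in zip(*experts)]

# payoff of two candidate solutions under each variant of conditions
W = {"x1": [10, 2, 0], "x2": [6, 5, 4]}
# expected payoff under the averaged ("panel") distribution
expected = {x: sum(p * w for p, w in zip(avg, row)) for x, row in W.items()}
best = max(expected, key=expected.get)
print(best, round(expected[best], 2))
```

The averaging step is what turns a set of individual, subjective opinions into a working distribution; the result should be treated with the same caution the text prescribes.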

Let’s name another approach to choosing a solution under conditions of uncertainty: so-called «adaptive algorithms» of control. Suppose the operation O in question belongs to the category of repeatedly performed operations, and some of its conditions ξ1, ξ2, … are unknown in advance, random. However, we have no statistics on the probability distribution of these conditions and no time to collect such data (for example, collecting the statistics takes considerable time, and the operation must be performed now). Then it is possible to build and apply an adaptive (self-adjusting) control algorithm, which is gradually improved in the course of its application. At first, some (probably not the best) algorithm is taken, but as it is applied, it improves from time to time, since the experience of its application indicates how it should be changed. The result is something like the activity of a person who, as is well known, «learns from mistakes». Such adaptive control algorithms seem to have a great future.
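An adaptive control algorithm of the kind just described can be sketched, under many simplifying assumptions, as an «epsilon-greedy» rule choosing between two hypothetical control variants with unknown success rates: most of the time it exploits the variant that has worked best so far, and occasionally it explores the alternative.

```python
import random

def run(p_success=(0.3, 0.7), steps=5_000, eps=0.1, seed=11):
    """Epsilon-greedy adaptive choice between two control variants.
    The true success probabilities p_success are unknown to the
    algorithm; it learns them from accumulated outcomes."""
    rng = random.Random(seed)
    counts, wins = [0, 0], [0, 0]
    for _ in range(steps):
        if rng.random() < eps or 0 in counts:
            a = rng.randrange(2)  # explore: try a variant at random
        else:
            # exploit: pick the variant with the best observed success rate
            a = max((0, 1), key=lambda i: wins[i] / counts[i])
        counts[a] += 1
        wins[a] += rng.random() < p_success[a]
    return counts

counts = run()
# with experience, the better variant (index 1) is chosen far more often
print(counts)
```

The algorithm starts with no statistics at all, exactly as in the situation described above, and «learns from mistakes» as outcomes accumulate.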

Finally, let us consider a special case of uncertainty – not just «bad» but «hostile.» This kind of uncertainty arises in so-called «conflict situations», in which the interests of two (or more) parties pursuing different goals collide. Conflict situations are characteristic of military operations and, in part, of sports competitions; in a capitalist society, of competition. Such situations are studied by a special branch of mathematics – game theory (often presented as part of the discipline «operations research»). The most pronounced case of a conflict situation is direct antagonism, when two sides A and B clash in a conflict pursuing directly opposite goals («we» and the «adversary»).

The theory of antagonistic games rests on the premise that we are dealing with a reasonable and far-sighted adversary, who always chooses his behavior so as to prevent us from achieving our goal. Under these assumptions, game theory makes it possible to choose a solution that is optimal in a certain sense, i.e. the least risky in the fight against a cunning and malicious opponent.
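
The «least risky» choice can be illustrated by the maximin rule in pure strategies: side A picks the row of a payoff matrix whose worst case (over all of B's replies) is best. The payoff matrix below is an invented example, not taken from the text.

```python
# Maximin in pure strategies for a two-player zero-sum game:
# side A chooses the row that maximizes its guaranteed payoff,
# assuming a far-sighted adversary B will answer with the worst column.

def maximin_row(payoff):
    """Return (row index, guaranteed payoff) of the maximin pure strategy."""
    worst = [min(row) for row in payoff]           # B minimizes within each row
    best = max(range(len(worst)), key=lambda i: worst[i])
    return best, worst[best]

A_payoff = [
    [3, 1, 4],   # worst case for A: 1
    [2, 2, 2],   # worst case for A: 2  <- guaranteed whatever B does
    [5, 0, 6],   # worst case for A: 0
]
print(maximin_row(A_payoff))  # (1, 2)
```

Note how the cautious rule rejects the row containing the largest payoffs (5 and 6) because its worst case is the poorest – exactly the «reinsurance» flavor criticized in the next paragraph.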

However, such a view of the conflict situation must not be absolutized either. Life experience suggests that in conflict situations (for example, in hostilities) it is not the most cautious who wins, but the most inventive – the one who knows how to exploit the enemy’s weakness, deceive him, and go beyond the modes of behavior known to him. Thus, in conflict situations game theory provides an extreme solution, arising from a pessimistic, «reinsurance» position. Yet, if treated with due criticism, it can, along with other considerations, help in the final choice.

Closely related to game theory is the so-called «theory of statistical decisions». It is engaged in the preliminary mathematical justification of rational decisions under uncertainty, i.e. the development of reasonable «strategies of behavior» in such conditions. One possible approach is to treat an uncertain situation as a kind of «game» – not against a consciously opposing, reasonable adversary, but against «nature». By «nature» the theory of statistical decisions understands a certain third party, indifferent to the outcome of the game, whose behavior is not known in advance.
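
For a «game with nature», several classical choice criteria exist; a hedged sketch of three of them follows. Rows are our decisions, columns are states of nature; the payoff table is invented for illustration.

```python
# Three classical criteria for decisions under uncertainty
# ("games with nature"): rows = decisions, columns = states of nature.

def wald(payoffs):
    """Pessimistic (maximin) criterion: best guaranteed payoff."""
    return max(range(len(payoffs)), key=lambda i: min(payoffs[i]))

def laplace(payoffs):
    """Treat all states of nature as equally likely; maximize the mean."""
    return max(range(len(payoffs)), key=lambda i: sum(payoffs[i]) / len(payoffs[i]))

def savage(payoffs):
    """Minimize the maximum regret relative to the best decision per state."""
    n_states = len(payoffs[0])
    col_best = [max(row[j] for row in payoffs) for j in range(n_states)]
    regret = [[col_best[j] - row[j] for j in range(n_states)] for row in payoffs]
    return min(range(len(payoffs)), key=lambda i: max(regret[i]))

table = [
    [12, 30, 18],
    [25, 25, 25],
    [10, 40, 5],
]
print(wald(table), laplace(table), savage(table))  # 1 1 0
```

Different criteria can recommend different decisions for the same table, which is precisely why, as the next paragraph says, one should not demand too much accuracy from such problems.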

Finally, let us make one general remark. When justifying a decision under conditions of uncertainty, no matter what we do, an element of uncertainty remains. It is therefore wrong to place excessively high demands on the accuracy of solving such problems. Instead of unambiguously indicating a single, exactly «optimal» (from some point of view) solution, it is better to single out a whole area of acceptable solutions that prove insignificantly worse than the others, from whatever point of view we judge them. Within this area, the persons bearing responsibility should make their final choice.

2.4 Multi-criteria Operations Research Tasks

Despite the significant difficulties associated with uncertainty in the conditions of an operation, we have so far considered only the simplest problems of operations research, in which the criterion of effectiveness is clear and a single efficiency indicator W is to be turned into a maximum (or minimum). This indicator serves as the criterion by which the effectiveness of the operation and of the decisions made is judged.

Unfortunately, in practice such tasks – where the evaluation criterion is clearly dictated by the target orientation of the operation – are relatively rare, arising mainly in small-scale activities of modest value. As a rule, the effectiveness of large-scale, complex operations affecting the diverse interests of the participants cannot be exhaustively characterized by a single performance indicator W; other, additional indicators have to be brought in to supplement it. Such operations research tasks are called «multi-criteria».

Such a multiplicity of criteria (indicators), some of which it is desirable to turn into a maximum and others into a minimum, is characteristic of any reasonably complex operation. As an exercise, we suggest that the reader formulate a set of criteria (performance indicators) for the operation of organizing the work of urban transport. The fleet of vehicles (trams, buses, trolleybuses) is considered given; the elements of the solution are the routes and the placement of stops. When choosing a system of indicators, think about which of them is the main one (most closely related to the target orientation of the operation), and arrange the rest (the additional ones) in descending order of importance. Using this example, you will see that (a) none of the indicators can be chosen as the only one, and (b) formulating a system of indicators is not as easy a task as it may seem at first glance.

So, typical of a large-scale operations research task is multi-criteria evaluation – the presence of a number of quantitative indicators W1, W2, …, some of which it is desirable to turn into a maximum and others into a minimum («so that the wolves are fed and the sheep are safe»).

The question arises: is it possible to find a solution that satisfies all these requirements at once? In all frankness we answer: no. A solution that turns one indicator into a maximum does not, as a rule, maximize or minimize the others. Therefore, the phrase «achieving the maximum effect at minimum cost», widely used in everyday speech, is nothing more than a phrase and must be discarded in scientific analysis. Legitimate wordings would be «achieving a given effect at minimum cost» or «achieving the maximum effect at a given cost» (unfortunately, these correct formulations sound somehow not «elegant» enough in speech).

What if you still have to evaluate the effectiveness of the operation by several indicators?

In practice, the following technique is often used: a single «generalized» indicator is composed out of several, and this indicator is used when choosing a solution. Often it takes the form of a fraction, with the values whose increase is desirable in the numerator and those whose increase is undesirable in the denominator: for example, the enemy’s losses in the numerator, and our own losses and the cost of the resources expended in the denominator.

Another, slightly more intricate method of composing a «generalized» performance indicator is also common. Individual particular indicators are taken, «weights» (a1, a2, …) are attributed to them, each indicator is multiplied by its weight, and the products are summed; the indicators that are to be minimized enter with a minus sign, so that the generalized indicator as a whole is to be maximized.
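
Both forms of the «generalized» indicator are trivial to compute; a sketch follows. All weights and indicator values are arbitrary illustrative numbers – precisely the arbitrariness the next paragraphs criticize.

```python
# Sketch of the two "generalized indicator" techniques described above.
# Weights and indicator values are invented for illustration.

def ratio_indicator(benefits, costs):
    """Fraction form: desirable values in the numerator, undesirable in the denominator."""
    return sum(benefits) / sum(costs)

def weighted_indicator(values, weights, maximize_flags):
    """Weighted-sum form: indicators to be minimized enter with a minus sign."""
    return sum((w if up else -w) * v
               for v, w, up in zip(values, weights, maximize_flags))

# Two indicators to maximize (effect, reliability), one to minimize (cost).
W = [120.0, 0.9, 40.0]
weights = [1.0, 50.0, 2.0]
flags = [True, True, False]
print(weighted_indicator(W, weights, flags))  # 120 + 45 - 80 = 85.0
```

Changing the weights reorders the candidate solutions at will, which is exactly why the text calls this a «transfer of arbitrariness from one instance to another».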

With weights assigned arbitrarily to the particular indicators, this method is no better than the first. Its proponents point out that a person making a compromise decision also mentally weighs the pros and cons, attaching greater importance to the factors that matter more to him. This may be true, but, apparently, the «weighting coefficients» with which different indicators enter this mental calculation are not constant: they change depending on the situation.

Here we meet an extremely typical device for such situations – «the transfer of arbitrariness from one instance to another.» A simple choice of a compromise solution in a multi-criteria problem, based on a mental comparison of the advantages and disadvantages of each option, seems too arbitrary, not «scientific» enough. But manipulating a formula – even one with equally arbitrarily assigned coefficients – lends the solution an air of «scientificity». In fact, there is no science here at all, only a pouring from one empty vessel into another.

Does it follow that the mathematical apparatus cannot help us in solving multi-criteria problems? Far from it: it can help, and very significantly. First, it allows us to successfully solve direct problems of operations research and establish what advantages and disadvantages each solution has under the different criteria. The mathematical model makes it possible to calculate not only the value of the main performance indicator, but all the additional ones as well, while the complexity of the calculation increases little. Comparing the results of a set of such direct problems provides the decision maker with a kind of «accumulated scientific experience». Knowing what he wins and what he sacrifices, a person can evaluate each of the solutions and choose the one most acceptable to him.

A perplexed question may arise: what about the mathematical methods of optimization, of which the reader has heard so much and from which so much was hoped? The trouble is that each of these methods makes it possible to find an optimal solution only for a single, scalar criterion W. How to optimize with respect to a vector criterion (W1, W2, …) modern mathematics does not yet know. Indeed, not every «better» or «worse» reduces directly to «more» or «less», and mathematical methods so far speak only the language of «more or less». Of all the devices known to us, only man is so far able to make reasonable decisions not by a scalar but by a vector criterion. How he does this is not clear. Perhaps each time he reduces the vector to a scalar by forming some function (linear or nonlinear) of its components? Possible, but implausible. Most likely, when choosing a solution, he reasons not formally but in general terms, instinctively assessing the situation as a whole, discarding insignificant details and subconsciously drawing on all his experience of similar, if not identical, situations. At the same time, the mathematical apparatus can significantly help a person in the (informal) choice of a compromise solution. In any case, it helps to discard in advance the obviously unsuccessful solutions, which are not worth thinking about.

Let us demonstrate one such method of preliminary «rejection» of unpromising options. Suppose we must choose between several solutions X1, X2, …, Xn (each option is a vector whose components are the elements of the solution). The effectiveness of the operation is evaluated by two indicators: the productivity P and the cost S. The first indicator is to be maximized, the second minimized.
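
The rejection rule itself is simple: option Xi is discarded if some other option Xj is at least as good on both indicators (Pj ≥ Pi, Sj ≤ Si) and strictly better on at least one; the survivors form the set of non-dominated (Pareto-optimal) options. A sketch under invented option data:

```python
# Preliminary "rejection" of dominated options: productivity P is to be
# maximized, cost S minimized. Option a is discarded if some option b is
# at least as good on both indicators and strictly better on one.

def pareto_front(options):
    """Keep only options not dominated in the (P max, S min) sense."""
    def dominated(a, b):  # does b dominate a?
        return (b[1] >= a[1] and b[2] <= a[2]) and (b[1] > a[1] or b[2] < a[2])
    return [a for a in options
            if not any(dominated(a, b) for b in options)]

# (name, productivity P, cost S) -- invented illustrative data.
options = [
    ("X1", 10, 5),
    ("X2", 12, 7),
    ("X3",  9, 6),   # dominated by X1 (lower P, higher S)
    ("X4", 12, 9),   # dominated by X2 (same P, higher S)
    ("X5", 15, 12),
]
print([name for name, _, _ in pareto_front(options)])  # ['X1', 'X2', 'X5']
```

Only the non-dominated options remain for the final, human choice of compromise; the dominated ones are «not worth thinking about».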

Unsuitable options are discarded in a similar way when there are not two but more criteria. (With more than three, the geometric interpretation loses its clarity, but the essence remains the same: the number of competitive solutions decreases sharply.) As for the final choice of a solution, it still remains the prerogative of man – that unsurpassed «master of compromise».

However, the procedure of choosing the final solution, repeated many times in different situations, can itself serve as a basis for a convenient formalization. We are referring to the construction of so-called «heuristic methods» of decision-making. Such methods are widely used in attempts to automate the solution of informal tasks. For example, to make an automaton solve tasks that are difficult to formalize (such as reading handwritten text, or recognizing images or the sounds of live speech), so-called learning automata are created. The program by which such a machine works is not laid down in it in advance but is formed gradually, in the process of familiarization with an ever wider range of situations. The initial model for the machine is an experienced person who knows how to perform the informal task – say, to make a decision by a vector criterion. Subsequently the program may be improved further (already by way of «self-learning»).

End of the preview fragment.
