A test model is a logical structure that describes the functionality of a system and/or user behavior, and from which test cases are generated. Building a test model begins with building its structure; the approved structure is then filled with test cases.

Models are usually built from the requirements and/or the expected behavior of the system. Building and managing a test model suits large systems with complex business logic and is difficult to apply on agile projects, because the cost of maintaining the test model management and quality assurance process would be too high.

Test model management is a process that controls the coverage of the test model, the quality of the scenarios that make it up, and how up to date it is.

Test model management is a continuous process throughout the product life cycle.

Test model coverage

To control coverage of all requirements, you can use traceability matrices that map the requirements to the test scenarios covering them (see the sketch below).
Before test cases are written, the structure of the test model must be approved with the customer.
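
A traceability matrix can be kept even as a simple mapping from requirements to the test cases that cover them. Below is a minimal sketch in Python; all requirement and test IDs are invented examples:

    # Minimal sketch of a traceability matrix: requirement -> covering tests.
    requirements = ["REQ-1", "REQ-2", "REQ-3"]
    trace_matrix = {
        "REQ-1": ["TC-101", "TC-102"],
        "REQ-2": ["TC-103"],
        # REQ-3 has no tests yet
    }

    for req in requirements:
        tests = trace_matrix.get(req, [])
        print(f"{req}: {', '.join(tests) if tests else 'NOT COVERED'}")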

Scenario quality

To manage scenario quality, it is necessary to control not only the level of detail of the test case descriptions but also their quality.

Before test case descriptions begin, it is necessary to define the requirements for each level of description and the quality criteria for test case descriptions.

Possible levels of description of test cases:

At the 4th level, approval with the customer can be replaced by internal approval within the team.

The quality criteria for describing test cases can be as follows:

  • Test cases must be written according to the requirements

Testing is the process of verifying that a product meets its requirements. Therefore, the general description of a test case (test-tracking systems usually call this field “Summary”) must reference a specific requirement together with fragments of the requirement text. That way, all project participants can see what a given test case is based on.

  • Use detailed preconditions

How to save time on test cases?

Set formatting rules for all test cases, so that any project participant can easily read and understand them. For example, a project might adopt the following rules:

  • All input parameters are marked in red.
  • All scripts are highlighted in blue.
  • All names of buttons, fields, and blocks are in bold italics.
  • Important passages are underlined.
  • Each step performed must have an expected result.
  • Each step describes exactly one action and its expected result, so that when a test case fails at a particular step, it is unambiguously clear which action triggered the error (see the sketch after this list).
  • The expected result must be unambiguous.
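
To illustrate the last two rules, a test case can be stored so that every step pairs exactly one action with exactly one expected result. A minimal sketch; the step texts and names are invented:

    # Hypothetical test case: one action and one expected result per step,
    # so a failed step points unambiguously at a single action.
    test_case = {
        "summary": "REQ-12: saving a document preserves its contents",
        "preconditions": ["The editor is open", "Document 'draft.txt' exists"],
        "steps": [
            ("Open 'draft.txt'", "The document contents are displayed"),
            ("Type 'hello' at the end", "The text 'hello' appears"),
            ("Press Save", "A 'saved' notification is shown"),
        ],
    }

    for number, (action, expected) in enumerate(test_case["steps"], start=1):
        print(f"Step {number}: {action} -> expected: {expected}")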

Test cases must be unambiguous, i.e. drafted and formulated in such a way that they allow no double interpretation and are understood identically by all participants.

If writing test cases drags on for a long time, a specialist may stop seeing his own mistakes. What helps here is an outside view: cross-review. This stage is recommended when development of the test model is stretched out over a long period, for example when developing the test scenarios takes more than a month.

The scenario quality control process can be run using Test Model Control, a specially prepared template.

Test model update

It is necessary to regularly check the test model and the test cases themselves against the requirements, and to review the priorities of the test cases.

For updating, you can maintain a Requirements Traceability Matrix: after each change to a requirement, all test scenarios related to that requirement are selected from the test-tracking system and updated, as in the sketch below.
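
A sketch of that selection step, assuming the same kind of requirement-to-test mapping as in the earlier example: when a requirement changes, every linked scenario is pulled out for review.

    # Minimal sketch: select all test scenarios linked to a changed requirement.
    trace_matrix = {
        "REQ-1": ["TC-101", "TC-102"],
        "REQ-2": ["TC-103", "TC-104"],
    }

    def tests_to_update(changed_requirement):
        """Scenarios that must be reviewed after the requirement changes."""
        return trace_matrix.get(changed_requirement, [])

    print(tests_to_update("REQ-2"))  # ['TC-103', 'TC-104']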

Test model management tools:

  • TestRail
  • TestLink
  • Jira + Zephyr
  • Microsoft Test Manager (MTM)
  • Excel

Testing is a process that allows you to evaluate the quality of the product being built. A high-quality software product must meet its requirements, both functional and non-functional. The software system must implement all the required use cases and contain no defects, i.e. differences between its actual properties or behavior and the required ones. In addition, the system must be reliable (no hangs, crashes, and so on) and secure, provide the required performance, and be easy to use, extensible, etc. Thus, testing is a process of analyzing a software system aimed at finding defects and assessing the system's properties.

Goals of the testing process

The purpose of testing is to evaluate the quality of a software product through:

  • checking the interaction of components;
  • checking the correct integration of components;
  • checking that all requirements are implemented accurately, and identifying defects.

Features of the testing process in RUP

Testing is an iterative process carried out in all phases of the life cycle. It starts in the inception phase, with the identification of requirements for the future product, and is closely integrated with the current tasks. For each iteration, the goal of testing and the methods of achieving it are determined. At the end of each iteration, it is determined to what extent this goal has been achieved, whether additional tests are needed, and whether the testing principles and tools should be changed.

Each defect found is recorded in the project database with a description of the situation in which it was detected. An analyst determines whether it is a real defect and whether it repeats a previously discovered one. The defect is assigned a priority indicating the importance of fixing it. The designer responsible for developing the subsystem, component, or class, or another person appointed by the manager, proceeds to fix the defect; the order of fixes is governed by their priorities. The tester then repeats the tests and verifies (or fails to verify) that the defect has been fixed.

The test developer is responsible for planning, developing, and implementing tests. He creates the test plan and test model, the test procedures (see below), and evaluates the test results.

The tester is responsible for performing system testing. His responsibilities include setting up and executing tests, evaluating test runs, recovering from errors, and recording the detected defects.

Artifacts

During testing, the following documents are created:

Test Plan – a document that defines the testing strategy in each iteration. It contains a description of the goals and objectives of testing in the current iteration, as well as the strategies that will be used. The plan indicates what resources will be required and provides a list of tests.

Test Model – a representation of what will be tested and how. The model includes a set of control tasks, test procedures, test scenarios with expected results (test cases), test scripts, and descriptions of test interactions.

  • Control task – a set of test data, test execution conditions, and expected results.
  • Test procedure – a document containing instructions for setting up and performing control tasks, and for evaluating the results obtained.
  • Test scenario – a simplified description of a test, including the initial data, the conditions and sequence of actions, and the expected results.
  • Test script – a program executed during automated testing by test tools (a minimal example follows this list).
  • Test interaction description – a sequence or collaboration diagram that reflects the time-ordered flow of messages between the test components and the test object.
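
For example, a test script in the sense above can be as small as the following sketch, which uses Python's built-in unittest module; the add function merely stands in for real functionality under test:

    import unittest

    def add(a, b):
        # Stand-in for the real functionality under test.
        return a + b

    class AddTests(unittest.TestCase):
        def test_adds_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()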

Test results – the data obtained during the execution of the tests.

The Workload Model is used to model the external functions performed by end users, the scope of those functions, and the workload they generate. The model is intended for load and/or stress testing that simulates the operation of the system under real conditions.
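
As one possible illustration, a workload model can be expressed directly in a load-testing tool. Below is a minimal sketch for Locust (pip install locust); the endpoints, weights, and think times are invented assumptions:

    from locust import HttpUser, task, between

    class TypicalUser(HttpUser):
        wait_time = between(1, 3)  # think time between user actions, seconds

        @task(3)  # browsing assumed three times more frequent than ordering
        def view_catalog(self):
            self.client.get("/catalog")

        @task(1)
        def place_order(self):
            self.client.post("/orders", json={"item": 42})

Running it with locust -f workload.py --host http://localhost:8000 and raising the number of simulated users turns the same model into a load or a stress scenario.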

Defects – descriptions of facts of non-compliance of the system with the requirements, found during testing. They are a type of change request.

Testing work is carried out in each iteration of every phase, but the goals and objectives differ significantly from phase to phase.

Inception phase. In this phase, preparation for testing is carried out. It includes:

  • Creation of a test plan containing the test requirements and testing strategies. A single plan can be created for all types of testing (functional, load, etc.), or separate plans for each type.
  • Analysis of the scope of testing.
  • Formulation of the quality criteria and the test completion criteria.
  • Installation and preparation of the testing tools.
  • Formulation of the requirements that the needs of testing impose on the software development project.

Elaboration phase. In the iterations of this phase, construction of the test model and related artifacts begins. Since the use-case model already exists in this phase, the design of test scenarios can begin. Running tests, however, is not yet advisable, since this phase usually contains no completed fragments of the system. The following activities are carried out:

  • Development of test scenarios.
  • Creation of test scripts.
  • Development of control tasks.
  • Development of test methods.
  • Development of a workload model.

Construction phase. In this phase, completed system fragments and prototypes appear and must be tested. In almost every iteration all modules are checked, both those developed and tested earlier and those added in the current iteration. Tests applied in earlier iterations are reused in later ones for regression testing, that is, to check that previously implemented functionality still works in the new iteration. The following activities are carried out:

  • Create a test plan for each iteration.
  • Refinement and addition of the testing model.
  • Execution of tests.
  • Description of the defects found.
  • Description of test results.
  • Evaluation of test results.

Based on the results of testing, changes are made to the program code in order to eliminate the identified defects, after which the testing is repeated.

Transition phase. In the iterations of this phase, the entire system is tested as a software product. The activities are similar to those of the previous phase. Detected defects drive further changes and retesting; the iterative process repeats until the test termination criteria are met.

Test results are evaluated on the basis of testing metrics that make it possible to determine the quality of the system under test and of the testing process itself.

Tool support

Since the iterative testing process involves repeating tests many times, manual testing becomes inefficient and does not allow a thorough assessment of the quality of the software product. This is especially true of load and stress testing, where the workload has to be simulated and a significant amount of data accumulated. The solution is to use tools that automate the creation and execution of tests.

Like the development process, the software testing process follows a specific methodology. By methodology we here mean the various combinations of principles, ideas, methods, and concepts that you resort to while working on a project.

There are currently a fairly large number of different approaches to testing, each with its own starting points, duration of execution, and methods used at each stage. And choosing one or the other can be quite a challenge. In this article, we will look at different approaches to software testing and talk about their main features to help you navigate the existing variety.

Waterfall model (Linear sequential software life cycle model)

The Waterfall Model is one of the oldest models and can be applied not only to software development and testing but to almost any other project. Its basic principle is the sequential order in which tasks are performed: we may proceed to the next development or testing step only after the previous one has been successfully completed. The model suits small projects and is applicable only when all requirements are clearly defined. The main advantages of this methodology are cost efficiency, ease of use, and simple documentation management.

In this model, the software testing process begins only after development is complete. Testing then passes from the unit level to system testing, in order to check the operation of the components both individually and as a whole.

In addition to the advantages mentioned above, this approach to testing also has its drawbacks. There is always a possibility of finding critical errors during testing, which may require completely reworking one of the system components or even the entire logic of the project. But such a task is impossible under the waterfall model, since going back to a previous step is prohibited in this methodology.

Learn more about the waterfall model in the previous article.

V-Model (Verification and Validation Model)

Like the Waterfall Model, the V-Model is based on a direct sequence of steps. The main difference between the two methodologies is that here testing is planned in parallel with the corresponding development stage. According to this software testing methodology, the process starts as soon as the requirements are defined, when static testing, i.e. verification and review, becomes possible, which helps avoid software defects at later stages. For each level of software development, a test plan is created that defines the expected results along with the entry and exit criteria for that product.

The scheme of this model shows the principle of dividing tasks into two parts. Those related to design and development are placed on the left. Tasks related to software testing are located on the right:

The main steps of this methodology may vary, but typically include the following:

  • The requirements definition stage, to which acceptance testing corresponds. Its main task is to assess the readiness of the system for final use.
  • The high-level design (HLD) stage, which corresponds to system testing and includes assessing compliance with the requirements for the integrated system.
  • The detailed design stage, which runs parallel to the integration testing phase, during which the interactions between the various components of the system are tested.
  • After the coding stage another important step begins: unit testing. It is essential to make sure that the behavior of the individual parts and components of the software is correct and meets the requirements.

The only drawback of this methodology is the lack of ready-made solutions for getting rid of the software defects found during the testing phase.

Incremental model

This methodology can be described as a multi-waterfall software testing model. The workflow is divided into a number of cycles, each of which is in turn divided into modules. Each iteration adds specific functionality to the software. An increment consists of three cycles:

  1. Design and development
  2. Testing
  3. Implementation

In this model, simultaneous development of different versions of the product is possible. For example, the first version may be in the testing phase while the second version is in development. The third version can go through the design phase at the same time. This process can continue until the end of the project.

Obviously, this methodology demands that the maximum possible number of errors be found in the software under test as quickly as possible. The same goes for the implementation phase, which requires confirming that the product is ready for delivery to the end user. All these factors significantly raise the demands placed on testing.

Compared to the previous methodologies, the incremental model has several important advantages. It is more flexible, changes in requirements cost less, and the software testing process is more efficient, since testing and debugging are much easier in small iterations. It is worth noting, however, that the total cost is still higher than in the case of the waterfall model.

Spiral model

The Spiral Model is a software testing methodology that is based on an incremental approach and prototyping. It consists of four stages:

  1. Planning
  2. Risk Analysis
  3. Development
  4. Evaluation

The second cycle begins immediately after the first is completed. Software testing begins at the planning stage and continues until the evaluation stage. The main advantage of the spiral model is that the first test results appear right after the third stage of each cycle, which helps ensure an accurate quality assessment. However, it is important to keep in mind that this model can be quite expensive and is not suitable for small projects.

Although this model is quite old, it remains useful for both testing and development. Moreover, the main objective of many software testing methodologies, including the spiral model, has changed recently: we use them not only to find defects in applications but also to find out the reasons behind them. This approach helps developers work more efficiently and fix bugs quickly.

Read more about the spiral model in the previous blog post.

Agile

The Agile methodology of software development and testing can be described as a set of approaches oriented toward iterative development, dynamic formation of requirements, and their implementation through constant interaction within self-organizing working groups. Most agile methodologies aim to minimize risk by developing in short iterations. One of the main principles of this flexible strategy is the ability to respond quickly to possible changes rather than relying on long-term planning.

Learn more about Agile.

Extreme Programming (XP)

Extreme Programming is one example of agile software development. A distinctive feature of this methodology is pair programming: one developer works on the code while a colleague continuously reviews what is written. The testing process is quite important here, because it starts even before the first line of code is written: each application module should have a unit test, so most bugs can be fixed at the coding stage. Another distinctive property is that the test determines the code, not vice versa. A piece of code is considered complete only when all its tests pass; otherwise the code is rejected.
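
A tiny sketch of the "test determines the code" idea: the test below is written first and defines when the invented apply_discount function is done.

    import unittest

    # Written BEFORE the implementation: the test defines what "done" means.
    class DiscountTests(unittest.TestCase):
        def test_ten_percent_discount(self):
            self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

    # The code is considered complete only once the test passes.
    def apply_discount(price, percent):
        return price * (1 - percent / 100)

    if __name__ == "__main__":
        unittest.main()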

The main advantages of this methodology are constant testing and short releases, which helps to ensure high quality code.

Scrum

Scrum is part of the Agile methodology: an iterative, incremental framework created to manage the software development process. According to Scrum principles, the test team should be involved in the following steps:

  • Participation in Scrum planning
  • Support of unit testing
  • User story testing
  • Collaboration with the customer and product owner to define acceptance criteria
  • Provision of automated testing

Moreover, QA members should attend all daily meetings like the other team members, to discuss what was tested and done yesterday, what will be tested today, and the overall progress of testing.

At the same time, the Agile principles within Scrum lead to some specific features:

  • Estimating the effort required for each user story is a must
  • The tester must stay attentive to the requirements, as they can change at any time
  • The risk of regression grows with frequent code changes
  • Planning and execution of tests happen simultaneously
  • Misunderstandings can arise between team members when the customer's requirements are not completely clear

Learn more about the Scrum methodology in a previous article.

Conclusion

In conclusion, it is important to note that applying software testing methodologies today calls for a versatile approach. In other words, you should not expect any single methodology to suit all types of projects. The choice depends on many aspects, such as the type of project, customer requirements, and deadlines. From a testing perspective, some methodologies start testing early in development, while others customarily wait until the system is complete.


    The best way to evaluate whether we have tested a product well is to analyze the missed defects: the ones our users, implementers, and business run into. They tell you a lot: what we did not check thoroughly enough, which areas of the product deserve more attention, what the overall share of misses is, and how it changes over time. This metric (perhaps the most common in testing) is fine as far as it goes, but... by the time we release the product and learn about the missed errors, it may be too late: an angry article about us has appeared on Habr, competitors are busily spreading the criticism, customers have lost trust in us, and management is displeased.

    To prevent this from happening, we usually try to evaluate the quality of testing in advance, before the release: how well and how thoroughly are we checking the product? Which areas lack attention, where are the main risks, what is the progress? To answer all these questions, we evaluate test coverage.

    Why evaluate?

    Any evaluation metric is a waste of time that could be spent testing, filing bugs, or preparing autotests. What magical benefit do test coverage metrics bring to justify sacrificing testing time?
    1. Finding our weak areas. Naturally, we need this not in order to grieve, but to know where improvements are needed: which functional areas are not covered by tests? What have we not checked? Where are the greatest risks of missing errors?
    2. We rarely get 100% from the evaluation results. What should we improve? Where should we go? What is the percentage now? How do we raise it within a given task? How fast can we get to 100? All these questions bring transparency and clarity to our process, and the answers come from the coverage estimate (a toy calculation follows this list).
    3. Focus of attention. Say our product has about 50 functional areas. A new version comes out, we start testing the first of them and find typos, buttons shifted by a couple of pixels, and other trifles... And now the testing time is over, and only this one area has been tested in detail... What about the remaining 49? Coverage assessment lets us prioritize the tasks based on current realities and deadlines.
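
    As a toy illustration, both numbers mentioned above (the share of missed defects and the coverage percentage) are trivial to compute once the raw counts are collected; all the figures below are invented:

        # Invented example figures for the two metrics discussed above.
        found_internally = 92   # defects caught before release
        reported_by_users = 8   # defects missed (found after release)
        covered_areas, total_areas = 37, 50

        escape_rate = reported_by_users / (found_internally + reported_by_users)
        coverage = covered_areas / total_areas

        print(f"Defect escape rate: {escape_rate:.0%}")  # 8%
        print(f"Test coverage:      {coverage:.0%}")     # 74%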

    How to evaluate?

    Before implementing any metric, it is important to decide how you will use it. Start by answering exactly that question; most likely, you will immediately see how best to calculate it. In this article I will only share some examples and my own experience of how this can be done: not for you to copy these solutions blindly, but so that your imagination can lean on this experience while you think through a solution that is ideal for you.

    Assessing requirements coverage by tests

    Suppose you have analysts on the team, and they do not spend their working time in vain. Their work results in requirements kept in an RMS (Requirements Management System): HP QC, MS TFS, IBM Doors, Jira (with extra plugins), etc. Into this system they enter requirements that meet the requirements for requirements (sorry for the tautology): atomic, traceable, specific... In short, ideal conditions for testing. What can we do in such a case? With a scripted approach, link requirements and tests: keep the tests in the same system, create requirement-test links, and at any moment pull a report showing which requirements have tests, which do not, when those tests were last run, and with what result.
    We get a coverage map, we cover all the uncovered requirements, everyone is happy and satisfied, and we do not miss mistakes...

    Okay, let's come back down to earth. Most likely, your requirements are not detailed and not atomic, some of them are simply lost, and there is no time to document every test, or even every second one. You can despair and cry, or you can accept that testing is a compensatory process: the worse things are with analysis and development on a project, the harder we ourselves must try to compensate for the other participants' problems. Let's take the problems apart one by one.

    Problem: Requirements are not atomic.

    Analysts sometimes sin with a muddle in their heads too, and that usually spells problems for the entire project. For example, you are developing a text editor, and among others you have two requirements in your system: “html formatting must be supported” and “when opening a file of an unsupported format, a pop-up window with a question must appear”. How many tests does basic verification of the first requirement need? And the second? The answers most likely differ by a factor of about a hundred!!! We cannot say that a single test is enough for the first requirement, while for the second it most likely is.

    Thus, the mere existence of a test for a requirement guarantees nothing at all! What does our coverage statistic mean in that case? Almost nothing! We will have to deal with it!

    1. In this case, automatic calculation of requirements coverage by tests can be dropped: it carries no meaning anyway.
    2. For each requirement, starting with the highest-priority ones, we prepare tests. While preparing them, we analyze which tests this requirement needs and how many will be enough, performing a full test analysis instead of waving it off with “there is one test, good enough”.
    3. Depending on the system used, we export or upload the tests per requirement and... we test these tests! Are they enough? Ideally, such testing should be done together with the analyst and the developer of that functionality. Print the tests, lock your colleagues in the meeting room, and do not let them out until they say “yes, these tests are enough” (that happens only with a written sign-off, when the words are said as a formality without even analyzing the tests; in a live discussion your colleagues will pour a tubful of criticism on you: missed tests, misunderstood requirements, and so on. That is not always pleasant, but it is very useful for testing!)
    4. After finalizing the tests for a requirement and agreeing that they are complete, the requirement can be marked in the system with the status “covered by tests”. This information means much more than “there is at least 1 test here”.

    Of course, such an agreement process demands a lot of resources and time, especially at first, until the practice settles in. So run only high-priority requirements and new features through it. Over time, pull in the rest of the requirements, and everyone will be happy! But... what if there are no requirements at all?

    Problem: There are no requirements at all.

    There are none on the project: they are discussed orally, and everyone does what he wants or can, however he understands it. We test the same way. As a result we get a huge number of problems, not only in testing and development but also in features implemented incorrectly from the very start: we wanted something completely different! Here I could advise “define and document the requirements yourself”, and I have even used that strategy a couple of times in my practice, but in 99% of cases the testing team has no resources for it. So let's take a far less resource-hungry path:
    1. Create a feature list. Ourselves! As a Google spreadsheet, as PBIs in TFS: choose any form, as long as it is not plain text. We still need to track statuses! Include all the functional areas of the product in this list, and try to pick one common level of decomposition (software objects, or user scenarios, or modules, or web pages, or API methods, or screen forms...), but not all of these at once! ONE decomposition format that makes it easier and clearer for you not to miss the important things.
    2. We agree on the COMPLETENESS of this list with the analysts, the developers, the business, and within our own team... Do everything you can not to lose important parts of the product! How deep to go is up to you: in my practice there have been only a few products for which we created more than 100 lines in the table, and those were giant products. Most often 30-50 lines is an achievable result for further careful processing. In a small team without dedicated test analysts, a larger feature list would be too difficult to maintain.
    3. Then we go through it by priority and process each line of the feature list just as in the requirements section above: we write tests, discuss them, and agree that they are sufficient. We mark the statuses showing which features already have enough tests. We get statuses, progress, and growing tests through communication with the team, as in the sketch below. Everyone is happy!
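
    Such a feature list is easy to keep even as a tiny table. A sketch with invented feature names that also prints the overall progress:

        # Hypothetical feature list with a per-feature test status.
        feature_list = [
            {"feature": "Login",         "priority": 1, "status": "covered"},
            {"feature": "Search",        "priority": 1, "status": "in progress"},
            {"feature": "Export to PDF", "priority": 2, "status": "not started"},
        ]

        covered = sum(1 for f in feature_list if f["status"] == "covered")
        print(f"Covered: {covered}/{len(feature_list)} features")
        for f in sorted(feature_list, key=lambda f: f["priority"]):
            print(f'  [{f["status"]:>11}] {f["feature"]}')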

    But... What if the requirements are maintained, but not in a traceable format?

    Problem: Requirements are not traceable.

    The project has a huge amount of documentation, the analysts type at 400 characters a minute, and you have specifications, terms of reference, instructions, and reference materials (most often this happens at the customer's request). All of it serves as requirements, and it has been piling up for as long as the project has existed. Confused about where to look for which information?
    We repeat the previous section, helping the whole team clean things up!
    1. We create a feature list (see above), but without detailed requirement descriptions.
    2. For each feature, we gather links to the terms of reference, specifications, instructions, and other documents.
    3. We go by priority, prepare tests, and agree on their completeness. Everything is the same, except that collecting all the documents in one table gives us easier access to them, transparent statuses, and consistent tests. In the end everything is super, and everyone is happy!

    But... not for long... It seems that over the last week the analysts have updated 4 different specifications at the customer's request!!!

    Problem: Requirements change all the time.

    Of course, it would be nice to test some fixed system, but our products usually live: the customer asked for something, something changed in legislation outside our product, and somewhere the analysts found an analysis error from the year before last... Requirements live their own lives! What to do?
    1. Suppose you have already collected links to the terms of reference and specifications in the form of a feature list, PBIs, requirements, Wiki notes, and the like. Suppose you already have tests for these requirements. And now a requirement changes! That may be a change in the RMS, a task in the TMS (Task Management System), or a letter in the mail. Either way it leads to the same result: your tests are out of date, or may be out of date. That means they need updating (test coverage of an old version of the product does not really count, does it?)
    2. In the feature list, in the RMS, or in the TMS (Test Management System: TestRail, Sitechco, etc.) the tests must be marked as outdated immediately and without fail! In HP QC or MS TFS this can be done automatically when a requirement is updated; in a Google spreadsheet or a wiki you will have to set the marks by hand. Either way, you should see at once that the tests are out of date. That means a full pass awaits them: update, repeat the test analysis, rewrite the tests, agree on the changes, and only then mark the feature or requirement as “covered by tests” again.

    In this case we get all the benefits of the test coverage assessment, and even in dynamics! Everyone is happy!!! But...
    But you have been concentrating so hard on the work with requirements that now there is not enough time either to test or to document the tests. In my opinion (and there is room for a religious argument here!), requirements are more important than tests, and it is better this way! At least they are in order, the whole team is informed, and the developers are doing exactly what is needed. BUT THERE IS NO TIME TO DOCUMENT THE TESTS!

    Problem: Not enough time to document tests.

    In fact, the source of this problem may be not only a lack of time but also your fully conscious choice not to document them (you do not like doing it, you avoid the pesticide effect, the product changes too often, and so on). But how do you evaluate test coverage in this case?
    1. You still need the requirements, either as full requirements or as a feature list, so one of the sections above, depending on how the analysts work on your project, will still be necessary. Got a requirements list or a feature list?
    2. We describe, and agree on verbally, a short testing strategy, without documenting the individual tests! The strategy can be recorded in a table column, on a wiki page, or in a requirement in the RMS, and it must likewise be agreed upon. The checks under this strategy will be performed in different ways, but you will know when each item was last tested and with what strategy. And that, you must admit, is not bad either! And everyone will be happy.

    But... What other “but” can there be? Which one???

    Let us say that we will get around them all, and may high-quality products be with us!


    Lesson 2
    Main stages of modeling

    By studying this topic, you will learn:

    - what modeling is;
    - what can serve as a prototype for modeling;
    - what place modeling holds in human activity;
    - what the main stages of modeling are;
    - what a computer model is;
    - what a computer experiment is.

    Computer experiment

    To give life to new design developments, introduce new technical solutions into production, or test new ideas, an experiment is needed. An experiment is a trial performed with an object or with a model. It consists of performing certain actions and determining how the experimental sample reacts to them.

    At school, you conduct experiments in biology, chemistry, physics, and geography lessons.

    Experiments are carried out when new product samples are tested at enterprises. Usually a specially built installation is used, which makes it possible to conduct the experiment under laboratory conditions, or the real product itself is subjected to all kinds of tests (a full-scale experiment). To study, for example, the performance properties of a unit or assembly, it is placed in a thermostat, frozen in special chambers, tested on vibration stands, dropped, and so on. That is fine if it is a new watch or a vacuum cleaner: the loss from destroying one is not great. But what if it is an airplane or a rocket?

    Laboratory and full-scale experiments require large outlays of money and time, but their value is nevertheless very great.

    With the development of computer technology, a new, unique research method appeared: the computer experiment. In many cases, computer simulation has come to assist, and sometimes to replace, experimental samples and test benches. The computer experiment stage includes two steps: drawing up an experiment plan and conducting the study.

    Experiment plan

    The experiment plan should clearly reflect the sequence of work with the model. The first step in such a plan is always to test the model.

    Testing is the process of checking the correctness of the constructed model.

    A test is a set of initial data that makes it possible to determine whether the model has been built correctly.

    To be sure that the modeling results obtained are correct, it is necessary:

    - to check the developed algorithm for building the model;
    - to make sure that the constructed model correctly reflects those properties of the original that were taken into account in the simulation.

    To check the correctness of the model construction algorithm, a test set of initial data is used, for which the final result is known in advance or predetermined in other ways.

    For example, if you use calculation formulas in the model, you need to pick several sets of initial data and calculate them “manually”. These become your tests. Once the model is built, you run it with the same initial data and compare the simulation results with the conclusions obtained by calculation. If the results match, the algorithm has been developed correctly; if not, you must find and eliminate the cause of the discrepancy. Test data may not reflect a real situation at all and may carry no semantic content. However, the results obtained during testing may prompt you to change the original information or sign model, above all in the part where its semantic content is embedded.

    To make sure that the constructed model reflects the properties of the original that were taken into account in the simulation, you need to pick a test example with real source data.
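
    A tiny sketch in Python of the check described above: the model function below is compared against test data calculated “manually” in advance (the free-fall formula is just an example):

        # Example model: distance travelled by a free-falling body, s = g*t^2/2.
        G = 9.8  # acceleration of gravity, m/s^2

        def model_distance(t):
            return G * t ** 2 / 2

        # Test set: inputs whose results were calculated by hand beforehand.
        hand_calculated = {1.0: 4.9, 2.0: 19.6, 3.0: 44.1}

        for t, expected in hand_calculated.items():
            result = model_distance(t)
            verdict = "OK" if abs(result - expected) < 1e-9 else "MISMATCH"
            print(f"t = {t} s: model {result}, expected {expected} -> {verdict}")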

    Conducting research

    After testing, when you have confidence in the correctness of the constructed model, you can proceed directly to the study.

    The plan should include an experiment or series of experiments that meet the objectives of the simulation. Each experiment must be accompanied by an understanding of the results, which serves as the basis for analyzing the results of modeling and making decisions.

    The scheme for preparing and conducting a computer experiment is shown in Figure 11.7.

    Fig. 11.7. Scheme of a computer experiment

    Analysis of simulation results

    The ultimate goal of modeling is a decision, which should be made on the basis of a comprehensive analysis of the simulation results. This stage is decisive: either you continue the study or you finish it. Figure 11.2 shows that the results-analysis stage cannot exist autonomously. The conclusions obtained often prompt an additional series of experiments, and sometimes a change of the problem itself.

    The results of testing and experiments serve as the basis for developing the decision. If the results do not correspond to the goals of the task, it means that mistakes were made at the earlier stages: an incorrect statement of the problem, an overly simplified information model, an unfortunate choice of modeling method or environment, or a violation of technique while building the model. If such errors are found, the model must be corrected, that is, you return to one of the earlier stages, and the process repeats until the results of the experiment meet the goals of the modeling.

    The main thing to remember is that a detected error is also a result. As the proverb says, we learn from our mistakes. The great Russian poet A. S. Pushkin wrote about this too:

    Oh, how many wondrous discoveries
    Are prepared for us by the spirit of enlightenment,
    And experience, son of difficult mistakes,
    And genius, friend of paradoxes,
    And chance, the inventor god...

    Control questions and tasks

    1. Name the two main types of modeling problem statements.

    2. In the well-known "Problem Book" by G. Oster, there is the following problem:

    The evil witch, working tirelessly, turns 30 princesses into caterpillars a day. How many days will it take her to turn 810 princesses into caterpillars? How many princesses a day would have to be turned into caterpillars to get the job done in 15 days?
    Which question can be attributed to the type of "what will happen if ...", and which - to the type of "how to do so that ..."?

    3. List the most well-known goals of modeling.

    4. Formalize the playful problem from G. Oster's "Problem Book":

    From two booths located 27 km apart, two pugnacious dogs jumped out toward each other at the same time. The first runs at 4 km/h, the second at 5 km/h.
    How soon will the fight begin?

    5. Name as many characteristics of the "pair of shoes" object as you can. Compose an information model of an object for different purposes:
    ■ choice of footwear for hiking;
    ■ selection of a suitable shoe box;
    ■ purchase of shoe care cream.

    6. What characteristics of a teenager are essential for a recommendation on choosing a profession?

    7. Why is the computer widely used in simulation?

    8. Name the tools of computer modeling known to you.

    9. What is a computer experiment? Give an example.

    10. What is model testing?

    11. What errors are encountered in the modeling process? What should be done when an error is found?

    12. What is the analysis of simulation results? What conclusions are usually drawn?
