How we do World of Warships: export automation and content verification

In this publication, we continue our series of interviews with game developers. This time, the questions are answered by Mikhail Zhivets, technical director of the World of Tanks project at Wargaming.net.

NVIDIA WORLD: What graphics engine is used in your project? What are the minimum and recommended system requirements? Are they driven by particular engine features, or did you target a certain level of market penetration of graphics cards? Do you use capabilities available only on the latest generations of graphics cards, or do you stick to the standard feature set?

The World of Tanks project uses the BigWorld graphics engine with our own modifications, which were required by the specifics of the project. A distinctive feature of the engine is its focus on open worlds with dynamically loaded parts of the map, as is done, for example, in World of Warcraft. We try to use technologies that bring the picture on the monitor close to our artists' sketches, and adapt them to the system requirements.

NVIDIA WORLD: Are video cards from NVIDIA and ATI significantly different in terms of programming? Is it possible to just write through DirectX or OpenGL and get efficient code for both vendors, or do you need to make your own versions of functions for each vendor? What approach did you use?

M. J.: Video cards from different manufacturers have their own specific features. This could be support for special texture formats or access to textures from a vertex shader. At the same time, when using the common functionality provided by the DirectX or OpenGL APIs, the differences between the GPUs of the two vendors are minimal; the remaining differences between ATI and NVIDIA cards amount to extra capabilities rather than limitations. As for "special treatment" on the part of the engine, BigWorld gives no decisive advantage to either vendor.

NVIDIA WORLD: What are the reasons for the differences in the speed of games on NVIDIA and ATI cards? Is this a property of the game engine algorithms, or does it depend more on game scenes? That is, for example, in one game there are more high-poly scenes, and in another there is a large overdraw indicator or a lot of translucent textures with complex anti-aliasing, and therefore video cards that have a higher fillrate do better? (“Did your program like any architecture?”).

M. J.: The differences mainly come down to hardware features (differences in the width of memory buses, the number of rasterization units, shader units, etc.), as well as to the implementation of drivers. The more balanced the load on the CPU and GPU is and the better the scene is organized, the less the features of the video card should affect the overall performance. This includes a careful attitude to the number and resolution of textures, low overdraw from particles and transparent objects, the use of LODs, grouping objects into batches, and so on - all this equally falls on the shoulders of engine developers and game designers.

NVIDIA WORLD: When testing video cards, as a rule, roughly the same sets of games are used. How appropriate is such a generalization when presenting test results for genres as different as 3D shooters, third-person RPGs and strategy games? Are popular benchmarks indicative of your game's performance?

M. J.: I would not divide applications by genre. Everything depends on the developers and the algorithms they use. In any case, developers try to load the user's system as evenly as possible while achieving the maximum attractiveness of the product.

Looking at FPS numbers in games like Crysis or the latest Need for Speed, you can roughly imagine the performance in a number of other modern games, so here we can say with all confidence that these tests are quite revealing. Games of this level can be used as benchmarks for video cards, since at high quality settings they load the graphics subsystem to the maximum.

NVIDIA WORLD: What is the reason for the increase in CPU load with increasing resolution? Is it true that increasing the detail of models to minimize their "angularity" requires more CPU power for animation, cutting off invisible primitives, building shadow volumes, etc.? Was there such an effect in your game?

M. J.: An increase in resolution should not directly affect the CPU load, unless, of course, the engine provides for automatic increase in the polygonality of models, additional tessellation and more frames of model animation. If the CPU load increases for no reason as the resolution increases, the problem lies either in the game code or in the video card driver.

NVIDIA WORLD: Can CPU load increase when anisotropic filtering is enabled? The same goes for Full Scene Antialiasing. If this requires the entire scene to be drawn at twice the resolution, then theoretically the number of triangles can also increase.

M. J.: The answer to both questions is no. If anisotropic filtering is enabled, the load on the texturing units will increase. With FSAA enabled, more work will be done by the pixel pipeline. As for higher resolution, doubling the number of pixels will increase the number of operations in the pixel shader, rasterizers and TMUs.

NVIDIA WORLD: The process of moving calculations to GPU shaders has been going on for a very long time; first it was T&L and clipping. Is model animation now done on the GPU? What, from the point of view of a 3D engine, remains for the CPU to compute? (What did you compute on the GPU besides T&L?)

M. J.: The engine used in our project does not perform any general-purpose calculations on the GPU. Theoretically, if we had to calculate the physics of many objects in real time, we could take, say, PhysX, which performs such calculations very well on NVIDIA GPUs, but due to the specifics of the project, the CPU's capabilities are enough for us.

NVIDIA WORLD: In recent years, video accelerators have become "intelligent": they themselves use methods of cutting off invisible primitives, such as a hierarchical z-buffer. How effective are they? Is it possible now to simply shove all the triangles into the video accelerator and let it draw everything itself? In the days of the first 3D games (the Quake and Unreal series), sophisticated methods were used to reduce the number of rendered triangles: BSP trees, etc. How relevant is this now?

M. J.: Of course, you should not count on the video accelerator to determine on its own which objects are worth rendering and which will not be visible. We have to cull the unnecessary early, since transferring redundant information to the GPU leads to a drop in performance. So the problem is still relevant, albeit to a much lesser extent, especially when a large number of different objects must be drawn. The fact is that modern cards are not only busy culling the unnecessary; they are also loaded with calculations of more complex kinds of lighting, etc. Accordingly, if there is an opportunity and resources to help the video card, this should be done.

For example, the BigWorld engine uses the Umbra library to cull invisible objects before the drawing stage. The scene is organized as a BSP tree, which also allows scene fragments that are obviously outside the view to be discarded quickly and efficiently.
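As a toy illustration of the early culling idea (a sketch only, not BigWorld or Umbra code; the axis-aligned "view box" stands in for a real view frustum):

```python
# Minimal CPU-side culling sketch: discard objects whose axis-aligned
# bounding boxes do not intersect the visible region before they are
# ever submitted to the GPU.

def aabb_intersects(a_min, a_max, b_min, b_max):
    """True if two axis-aligned boxes overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def cull(objects, view_min, view_max):
    """Keep only objects whose bounds touch the view box."""
    return [name for name, (lo, hi) in objects.items()
            if aabb_intersects(lo, hi, view_min, view_max)]

scene = {
    "tank":  ((0, 0, 0),   (4, 3, 7)),
    "house": ((50, 0, 50), (60, 10, 60)),   # far outside the view
}
visible = cull(scene, (-10, -10, -10), (10, 10, 10))   # -> ["tank"]
```

A real engine tests against the six planes of the camera frustum and walks a spatial hierarchy instead of a flat list, but the payoff is the same: fewer draw calls reach the GPU.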

NVIDIA WORLD: At one time there was such a situation, around the time of the P4 crisis, that the bottleneck of the system was the processor, which could not “load” the video accelerator, and to increase FPS in games, first of all, a top-end CPU model was required. Is there a similar relationship between the CPU and GPU now, or have the processors crossed a certain critical performance level and can you take an inexpensive processor and a powerful video card? This refers to games with an emphasis on graphics, in style like Quake and Doom.

M. J.: A weak processor with a powerful video card is a bottleneck of the system, since many operations, both low-level (formation of the stream of control commands by the driver) and high-level (application logic: the game loop, particle updates, character animation, physics, sound, etc.), depend directly on CPU performance. The size of the processor's cache also plays an important role, and it is quite small on cheap models.

For the most efficient operation of the video card, system resources are also needed: a high-speed system bus and a sufficient amount of reasonably fast RAM. Often the video subsystem cannot show what it is capable of simply because the CPU does not have time to feed it the necessary data. So, if it comes to saving money, it is enough to buy a mid-range processor and a slightly above average motherboard, but do not skimp on the video card and RAM.

NVIDIA WORLD: DirectX from version to version is becoming more and more like a graphics engine. Is it true that nowadays a 3D program consists practically of Direct3D API calls and most of the calculations are done in it? How are you enjoying the latest version of DirectX? Has it become almost a full-fledged engine, and how many more versions will be required for this?

M. J.: Direct3D cannot be considered as a standalone graphics engine and it is unlikely that Microsoft will take the step of creating its own game engine in the foreseeable future. This is impractical for a number of reasons, including the many differences in engine requirements for different game projects. For example, a car simulator, a real-time strategy game, and a 3D shooter have distinctive features that do not allow the same engine to be used equally effectively in all three cases.

In the tenth version of Direct3D, new abstraction levels appeared when working with resources and additional opportunities for developing shaders, such tricks as Stream Output became available. At the same time, D3D has remained the same as it was - a low-level API, using which a developer can design a game engine for a specific task.

NVIDIA WORLD: In general, recently there has been a “globalization” of game and 3D engines. There are some of the most popular platforms on which many games are made. Is it an objective process? Does it make sense now for a developer to write their own engine, when you can license a ready-made full-featured one. For example, the next version of Unreal Engine was recently released, which has already been downloaded tens of thousands of times. How long is it until the moment when all games will use one or two 3D engines?

M. J.: We use BigWorld and are happy with it so far. As for "globalization", it is unlikely that a miracle will happen and CryTek, Epic and a number of others will decide to give each other a piece of the game engine market. Do not forget that, as a rule, games created on the same engine are very similar, and this is not always good. Most likely, the number of engines will only grow, with more and more narrow specialization. In any case, there will always be unique engines.

NVIDIA WORLD: Is it true that the latest video cards have become very powerful in terms of raw power and, not differing from previous models in terms of effects, can reveal their potential, first of all, on systems with large monitors (from 1920x1200) in anti-aliasing and full anisotropic filtering modes? Does it make sense for a person with a monitor, for example, 1280x1024, not being a fan of filtering and AA, to buy a new video card, such as a GTX285 and Radeon on a new process?

M. J.: I agree, that is the case. But nevertheless, do not forget about a decent monitor with good color reproduction, high resolution and contrast: you will get much more pleasure from the game. But even if you decide not to change your favorite monitor, it is still worth replacing the card, as this will definitely increase the performance of your system, because some of the old calculation algorithms are implemented in hardware on new cards.

NVIDIA WORLD: Previously, game developers were conservative in using the features and effects of the newest graphics cards, as they focused on the most common cards at the moment. That is, for example, conditional DirectX n comes out, and games are still written under DirectX n-2. Has the situation changed recently? Is it easy to use the new effects capabilities for recently released video accelerators in the game?

M. J.: If Windows 7 had come out a couple of years earlier, or Microsoft had abandoned the idea of tying DirectX 10 to Windows Vista, game developers would have moved to at least the tenth version of the API long ago. However, now we have what we have: all the hits of recent years use DX9, and DX10 support is often a marketing ploy.

NVIDIA WORLD: How intensively do games usually use the power of the newest, at the time of release, video accelerators? For example, are the full capabilities of the GT200 architecture being used now? Is it a typical situation when, at the time of release, a new GPU simply performs existing games a little better, but over time, as games are optimized for the new architecture and new features are applied, its value seems to grow? How much does your game use new features?

M. J.: In the case of the new architecture and its unified shader model, almost everyone has won. The workload of the GPU vertex and pixel pipelines evened out where there were large distortions in one direction or another. As for new features, the greater the percentage of video cards with support for a particular feature, the more willingly game developers will use them in their projects. The BigWorld we use is based on the capabilities of DirectX 9 and SM 3.0.

NVIDIA WORLD: Now online games have become very popular. Has this left its mark on the video accelerator industry? Since the engines of such games are focused on the basic set of features that a large number of users have, and the FPS is still limited by the Internet connection, it seems pointless to buy a top-end video accelerator for online games. Is this a brake on the development of game graphics? Browser games in "full 2D" that do not need an accelerator have increased in popularity.

M. J.: If you look at the number of subscribers of World of Warcraft and compare it with, say, Aion or Age of Conan, the answer is obvious: for online games, what matters most is the gameplay, the development of the game world, the exciting PvP component, and other aspects not directly related to graphics. This is not a brake on the development of graphics, because players do not live in MMORPGs alone. New shooters, driving simulators and RPGs will inevitably continue to raise the bar for 3D graphics quality in games. MMO games are slowly but surely moving in the same direction, aiming to attract a larger gaming audience. By the way, the same Blizzard is now rumored to be working on a new MMOG, and it is unlikely that they will use an 8-year-old engine in their new project.

NVIDIA WORLD: To what extent do SLI and Crossfire technologies require support from the game developer? What is the reason for the different effectiveness of these technologies for different games? With features of the game engine or with game scenes? How much does your game benefit from using SLI and Crossfire?

M. J.: Optimizing an engine for two video cards is a very difficult process. It is necessary to carefully organize compact data transfer to the GPU and balance the engine's load between the GPU and the CPU. If the application, running on one card, is already bottlenecked by the CPU or by data transfer over the bus, you can forget about SLI or Crossfire. This, in fact, is what distinguishes games from one another when running on paired accelerators. Our game so far gains about 10% on SLI, but we are now optimizing a number of modules to improve performance in such modes.

NVIDIA WORLD: For a long time, much has been said about the crisis of the PC market and the gradual transition of games to consoles. But this has not happened yet. Can this be expected in the future? Is it true that because of the standard hardware, programming graphics for set-top boxes is much easier? Or does the eternal lag of set-top boxes in terms of the level of "iron" levels out the unification?

M. J.: Game consoles have a number of advantages for the developer (one platform, one feature set and, as a result, the same performance for all players), but the market for PC games and, in particular, online games is constantly growing, so you should not expect a wholesale transition to consoles. Do not forget that the release of game consoles with more powerful hardware is being held back by the console manufacturers themselves, trying to recoup the costs of the previous console generation through sales of current games. Therefore, a lot of time will pass before the release of the Playstation 4 or Xbox 720, and many projects focused on the current generation of consoles will be released in the meantime. In some cases, such as the Nintendo Wii, advanced hardware is completely unnecessary to achieve excellent results.

NVIDIA WORLD: Is it true that games now almost universally use a combination of the projection method to build shadows from models and the method of shadow volumes with dynamically or statically calculated volumes for shadows on moving objects, models and weapons, and from moving parts of models on themselves?

M. J.: Instead of shadow volumes, the same projection technique is often used, with modifications (cascading shadow maps, a separate shadow map for self-shadowing, etc.). Shadow volumes give a sharp shadow edge, but suffer from high fillrate, additional CPU calculations, and complex implementation of shadow soft edges.

NVIDIA WORLD: Do existing shading methods have a future in terms of building dynamic lighting for the entire scene? That is, with a gradual increase in the power of video accelerators, they can be used to calculate the entire lighting in dynamics? Or will some new methods be required?

M. J.: Projective shadows have come a long way in modern games, and some modifications of them, combined with additional effects like SSAO, will be used for a long time to come. Of course, once ray tracing on the GPU becomes essentially free, neither projective nor volumetric shadows will stand a chance.

NVIDIA WORLD: How do you rate the Precomputed Radiance Transfer technology in terms of application in games? Are you going to use it in the future?

M. J.: PRT requires lengthy calculations, is not compatible with animated models and, in principle, does not provide great visual advantages compared to the same Ambient Occlusion. Take Halo 3 as an example, which uses PRT but doesn't stand out in terms of lighting quality compared to Gears of War or Crysis.

NVIDIA WORLD: DirectX 11 and Fermi architecture, is it worth waiting for DirectX 11 accelerators and games, or is it a passing version of the API? Can the junior model DirectX 11 accelerator, inferior in absolute power, fill rate, be better than the old, but more powerful model with DirectX 10 support? Can we expect some kind of qualitative leap in game graphics with the release of the Fermi architecture, or will it be extensive growth, more triangles, more speed in high resolutions, etc.? If Fermi were available at the time your game was developed, how different would the game be, in terms of graphics?

M. J.: The eleventh version of DirectX offers more options for computing on the GPU, while, compared with DX10, it does not bring any fundamental improvements to 3D graphics. As for Fermi, the most interesting feature in my opinion is fully controlled tessellation for 3D models.

Mikhail Zhivets, Technical Director of the World of Tanks project, Wargaming.net

After the premiere closed screenings of World of Warships at gamescom and IgroMir, the official launch of the game is getting closer and closer. Closed alpha testing is now in full swing, and we, the developers at Lesta Studio, the St. Petersburg division of Wargaming, still have a whole heap of issues to solve. At the same time, we have already managed to leave many obstacles behind. Below is a story about how we adapted our engine's exporter to the needs of Ships and built the content verification process.

Standard delivery of the engine

Any engine includes a toolkit for exporting 3D models from 3D editors into its own data format. Our BigWorld, on which World of Tanks is built, is no exception: it supports export from 3ds Max and Maya. Almost any game project requires adapting the standard exporters to its specifics; in our project, the specifics lie in the ship models.

The first version of the adapted Maya exporter simply "taught" it to recognize the more complex structure of a ship scene. Some Python control code was added to the existing C++ code, along with a Maya plugin with a wxWidgets UI. It looked something like this:


UI of the adapted exporter

The resulting tool had a lot of shortcomings.

Export could only be done with user participation: the user had to "tell" the exporter what the scene contained. Using this tool to automate processes, such as automatic content verification at the distribution build stage, was out of the question.

The exporter also required the user to know far from obvious parameters, worked slowly, provided practically no scene verification, and demanded a huge amount of support resources.

The architecture was the main obstacle to expanding functionality. Export was effectively an atomic operation (a tangle of spaghetti functions) that translated data from one structure (the loaded Maya scene) into another (BigWorld) directly in physical files. When serializers and business logic are implemented as a monolith and a data model is simply absent, it is impossible to add data processing (pre/post-processing) or to reuse the serializers and data model in other tools implementing their own business logic. Building more complex content production processes was impossible.

Over time, adding new functionality to existing code became nearly impossible. It was decided to rewrite the exporter from scratch, putting a new architecture into it.

Harsh everyday life

The level of our project has raised the requirements for the quality, complexity and volume of content. Our studio has grown a lot in the last couple of years, and we can now allocate sufficient resources to content production tasks. Professionals with strong backgrounds in architecture development and C++/C# technologies joined us. At the same time, for the exporter's developers this was their first experience with Python and the Maya API, which introduced additional risks that had to be taken into account.

We estimated the refactoring of the exporter at two or three man-months. You can't do without optimism in game development.

We identified the following risks:

lack of formal requirements;
the level of knowledge of Python;
the complexity of the Maya API;
refactoring of the primitive-processing algorithms.

Much of the actual time was spent collecting requirements from informal sources: developers turned managers, old-timers, torsion fields, and the code of the existing exporter. These bits of knowledge were formalized and written down as requirements, specifications, and UML diagrams in Confluence.

The first prototypes showed the need for namespaces and Python packages (__init__.py). We also worked out a mechanism that allows functionality from C++ libraries (.pyd) to be used "transparently".
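A common pattern for making a compiled extension "transparent" to callers is to re-export either the native class or a pure-Python fallback under one name. This is a sketch of that idea, not the actual Wargaming code; the module name `_nativegeom` and the class `VertexContainer` are hypothetical:

```python
# Package __init__ sketch: prefer the C++ (.pyd) implementation when it
# is present, otherwise fall back to a pure-Python equivalent. Callers
# just import VertexContainer and never care which one they got.

try:
    from _nativegeom import VertexContainer   # hypothetical C++ .pyd
except ImportError:
    class VertexContainer:                    # pure-Python fallback
        def __init__(self):
            self._verts = []

        def append(self, x, y, z):
            self._verts.append((x, y, z))

        def __len__(self):
            return len(self._verts)
```

The same try/except trick is used by several well-known Python libraries to swap in accelerated C implementations without changing the public API.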

The complexity and quirks of the Maya API could fill a separate book. Any piece of functionality required prototyping and consultations with 3D artists and with the engine (rendering) developers.

The standard exporter had its own implementations of a large number of algorithms, for example polygon triangulation, calculation of nested node transformation matrices, etc. We abandoned them in favor of the Maya API, which greatly improved the exporter's performance.

It is high time to add a rule to Murphy's laws: any project you conceive will take at least "x3" of the planned time, unless you abandon it.

The result was worth the effort. In the end, even our lead artist responsible for model export pronounced the exporter "almost perfect" after a couple of months of use.

Let's look under the hood

Our studio makes heavy use of Python scripts, and we tried to implement the entire exporter in it. Naturally, Python is not suitable for processing large binary data such as vertex buffers, index buffers, etc. The data model and serializers for such containers were implemented in C++ as a library (.pyd) that fits naturally into the Python data model. All business logic was implemented in Python.
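To see what such a binary serializer does, here is a pure-Python sketch of packing a vertex buffer into a compact blob. The layout (little-endian, a uint32 count header, three float32s per vertex) is an assumption for illustration, not the BigWorld format; in production this loop is exactly the kind of code pushed down into C++:

```python
import struct

def pack_vertices(verts):
    """Serialize a list of (x, y, z) tuples to a compact binary blob."""
    blob = struct.pack("<I", len(verts))        # vertex count header
    for x, y, z in verts:
        blob += struct.pack("<3f", x, y, z)     # position only, float32
    return blob

def unpack_vertices(blob):
    """Inverse of pack_vertices: read the count, then each vertex."""
    (count,) = struct.unpack_from("<I", blob, 0)
    out, offset = [], 4
    for _ in range(count):
        out.append(struct.unpack_from("<3f", blob, offset))
        offset += 12                            # 3 * sizeof(float32)
    return out
```

Doing this per-vertex in Python is orders of magnitude slower than a tight C++ loop over the same memory, which is why the article's data model and serializers live in a .pyd.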

The exporter framework was planned to be used not only for "manual" export from Maya, but for any task where its functionality could be reused, for example automating content verification. From any toolkit we develop, we require a Python API, a command-line interface, and UI tools.
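The "API + command line + UI" rule can be illustrated with a thin argparse front end over the same export API the Maya UI would call. The flag names and the commented-out `export_scene` entry point are hypothetical:

```python
import argparse

def build_parser():
    """Command-line surface over the (hypothetical) export API."""
    p = argparse.ArgumentParser(prog="wows-export")
    p.add_argument("scene", help="path to the source Maya scene")
    p.add_argument("--out", required=True, help="output model path")
    p.add_argument("--verify-only", action="store_true",
                   help="run content verification without writing output")
    return p

def main(argv=None):
    args = build_parser().parse_args(argv)
    # export_scene(args.scene, args.out, verify_only=args.verify_only)
    return args
```

Because the heavy lifting lives in the framework, the CLI, the Maya plugin UI and any build automation all remain thin shells over one code path.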

Architecture

The architecture of the exporter framework is modular and layered. There are physical and logical layers, as well as a domain layer. Each layer contains separate modules: a data model, business logic, serializers, and converters that translate the data model of one layer into the data model of another. The physical and logical layers effectively implement an analogue of an ORM architecture.

The architecture of the domain layer is designed for convenient processing of the data model by business logic. It is completely sandboxed and contains no assumptions about how it will be serialized into physical storage.


Layered architecture of the exporter
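The layering described above can be sketched in a few lines: the domain model knows nothing about storage, a converter maps a (hypothetical) logical-layer dict into domain objects, and business logic operates only on the domain model. All names here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class DomainNode:
    """Domain-layer model: no knowledge of files or serialization."""
    name: str
    triangle_count: int

def logical_to_domain(logical):
    """Converter: one layer's data model into the domain layer's model."""
    return [DomainNode(item["name"], item["tris"]) for item in logical["nodes"]]

def total_triangles(nodes):
    """Business logic works purely on domain objects."""
    return sum(n.triangle_count for n in nodes)
```

Because serializers and converters are separate modules, the same domain model and business logic can be reused by any tool, which is exactly what later enabled the map verifier.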

Export Process

The layered architecture gives the export process a particular shape. In effect, we deserialize two (or more) models from different sources (Maya and the BigWorld engine), merge these models into a single new one, and then serialize the new model into the BigWorld engine format.


Export Process
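The deserialize-merge-serialize flow can be sketched as follows. "Models" are plain dicts here, and "later sources win on key conflicts" is an assumed merge policy, not the engine's actual rules:

```python
def merge_models(*models):
    """Merge per-source models into one; later sources win on conflicts."""
    merged = {}
    for model in models:        # e.g. visual+collision, ballistics, ports
        merged.update(model)
    return merged

# Hypothetical deserialized inputs:
maya_model   = {"visual": "hull.mesh", "collision": "hull.hull"}
engine_model = {"settings": {"lod_count": 4}}

result = merge_models(maya_model, engine_model)
# serialize_bigworld(result, "ship.model")   # final serialization step
```

With four ship scenes (visual/collision, ballistics, effects ports, engine edits), the same call simply takes four arguments; conflict policy is the only design decision that needs care.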

Flexibility in the content production process

The implemented architecture makes it quite easy to build complex content production processes. For example, our primary ship model is technologically composed of three separate Maya scenes, each of which is simultaneously developed by different departments:

The first scene contains a visual model and a collision model.
The second scene contains the ballistic model.
The third contains the effects ports.

In addition to this, the engine toolkit (editors) adds (edits) its own data to the derived engine format model (fourth scene).

The exporter easily solves the non-trivial task of combining all four scenes into one resulting ship model.

Content verification

The content verification system allows us to look for content errors both in each layer (source) separately and in the resulting content. The number of verifiers now reaches several dozen. Automatic content verification is built into the build process of the distribution kit, which makes it possible to eliminate the human factor as much as possible and guarantee the integrity and technical purity of the content.


An example of ship model verification in Maya
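A verification pass with several dozen pluggable checks can be sketched as a small registry. The check names and the content shape are illustrative assumptions, not the actual verifiers:

```python
VERIFIERS = []

def verifier(fn):
    """Decorator: register a check in the verification pass."""
    VERIFIERS.append(fn)
    return fn

@verifier
def check_has_visual_model(content):
    if "visual" not in content:
        yield "missing visual model"

@verifier
def check_lod_count(content):
    if content.get("lod_count", 0) < 1:
        yield "model has no LODs"

def run_verification(content):
    """Run every registered check; an empty list means the content passed."""
    return [msg for check in VERIFIERS for msg in check(content)]
```

Wiring `run_verification` into the distribution build (fail the build on a non-empty list) is what removes the human factor the article mentions.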

Content budgets and a duck in the bathroom

An important part of the content verification process is budget verification, for example checking polygon budgets. The figure below shows, in particular, the number of triangles in the visual model for each LOD:


Maya UI plugin

A vivid illustration of the need for such verification is a story told to me by colleagues about one of the previous projects. On the map there was a plot built up with indestructible houses. As soon as the camera turned its gaze to this area, the FPS immediately dropped wildly. After studying the problem, it turned out that inside one of the houses there was a bath in which a small “plastic” duck swam. All this would have looked like a funny prank of artists, if not for the fact that the duck model contained about a million polygons.

In practice, it is very difficult to stay within a budget value. Many models are objectively exceptions. Setting the budget as a range does not solve the problem either, since over time the polygon counts of models simply drift toward the upper end of the range. In our case, we plan to assign personal budgets to those models that cannot meet the standard budget for their model type.
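A budget check with the per-model override described above might look like this. The numbers, model classes and names are made up for illustration:

```python
# Per-class triangle budgets, one limit per LOD (hypothetical values).
CLASS_BUDGETS = {"ship": [20000, 8000, 3000]}

# Personal overrides for models that legitimately exceed the class budget.
PERSONAL_BUDGETS = {"giant_carrier": [30000, 12000, 5000]}

def check_budget(name, model_class, lod_tris):
    """Return a list of violations; empty means the model fits its budget."""
    budget = PERSONAL_BUDGETS.get(name, CLASS_BUDGETS[model_class])
    return [f"LOD{i}: {tris} > {limit}"
            for i, (tris, limit) in enumerate(zip(lod_tris, budget))
            if tris > limit]
```

The override table makes exceptions explicit and reviewable, instead of widening the budget range for everyone, which is how a million-polygon duck slips through.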

Content processing

Each export stage requires processing (pre- and post-processing). For example, before converting the Maya layer's logical data model into the domain data model for anti-aircraft guns, the gun skeleton must first be rotated 45 degrees around the Y axis and then removed. Our architecture allowed us to transparently embed arbitrary processing at any stage of the export.




An example of a model before and after pre-processing
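One way to embed processing transparently at any stage is a hook registry keyed by stage name. The stage names, hook functions, and model representation below are assumptions for illustration.

```python
# Sketch of embedding pre/post-processing at arbitrary export stages.
# Stage names, hooks, and the model dict are hypothetical.

from collections import defaultdict

class ExportPipeline:
    def __init__(self):
        self._hooks = defaultdict(list)  # stage name -> ordered processors

    def register(self, stage, processor):
        self._hooks[stage].append(processor)

    def run_stage(self, stage, data):
        """Apply every processor registered for a stage, in order."""
        for processor in self._hooks[stage]:
            data = processor(data)
        return data

def rotate_gun_skeleton(model):
    # e.g. rotate the AA-gun skeleton 45 degrees around the Y axis
    model["y_rotation"] = model.get("y_rotation", 0) + 45
    return model

def strip_skeleton(model):
    model.pop("skeleton", None)
    return model

pipeline = ExportPipeline()
pipeline.register("pre_domain_convert", rotate_gun_skeleton)
pipeline.register("pre_domain_convert", strip_skeleton)

model = {"skeleton": ["root", "barrel"], "y_rotation": 0}
print(pipeline.run_stage("pre_domain_convert", model))
```

Because hooks are just registered callables, a new processing step can be added to any stage without changing the exporter's core.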

x64 support

Quite recently, our artists migrated en masse from 32-bit Maya 2012 to 64-bit Maya 2014. Since the exporter is written almost entirely in Python, x64 support caused almost no problems. Only the library (.pyd) implemented in C++ required a little "shamanism".

The exporter can now be used in both 32-bit and 64-bit processes, since it detects the host architecture itself and loads the matching build of the C++ library (.pyd).
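Detecting the host bitness in Python is straightforward. Here is one way to do it; the module layout in `load_native_library` is a hypothetical example, not the project's actual structure.

```python
# Sketch of picking the matching build of a compiled extension (.pyd)
# for the host process. The native_x32/native_x64 layout is hypothetical.

import importlib
import struct

def pointer_size_bits():
    """8 bytes per native pointer means a 64-bit process."""
    return struct.calcsize("P") * 8

def load_native_library():
    # Hypothetical layout: native_x32/ and native_x64/ each hold a .pyd build.
    suffix = "x64" if pointer_size_bits() == 64 else "x32"
    return importlib.import_module(f"native_{suffix}.exporterlib")

print(pointer_size_bits())  # 32 or 64, depending on the interpreter
```

The same exporter package can then ship both builds side by side and work unchanged inside 32-bit Maya 2012 or 64-bit Maya 2014.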

Map verification

When designing the exporter's architecture, we could not predict in advance which other tools and automations would be able to reuse it. Map verification automation is an example of how the "right" exporter architecture found its way into another tool.

Map verification builds and verifies the dependency graph of a map on other objects located on it. In particular, the map uses visual models of the landscape (stones, icebergs), buildings (houses, hangars, marinas), equipment (airplanes, boats), etc.

The peculiarity of the map verifier is that it can verify not only the presence of the visual model files, but also the models themselves, using the exporter framework. This removed the human factor: the map development department (LA) no longer has to take the 3D model development department (3D Art) at its word that technically correct models are being used.

Building a distribution

The exporter framework also found its use in the process of preparing a content pack for the distribution. The distribution should not include models that:

are no longer in use;
are still under development;
are intended for future versions of the product.

Starting from the basic list of game objects (root game objects), we need to build a dependency graph, from which the complete list of required content is generated. Nothing could be easier than deserializing a model with the exporter framework and "finding out" which other models it requires (content references).
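The closure over content references is a plain graph traversal. In the sketch below, `deserialize()` and the reference graph are toy stand-ins for the exporter framework's real deserialization; all model names are invented.

```python
# Sketch of building the content closure from root game objects.
# deserialize() and all model names are hypothetical stand-ins.

REFS = {  # toy content-reference graph: model -> models it depends on
    "ship_yamato": ["hull_a", "turret_460mm"],
    "turret_460mm": ["barrel_460mm"],
    "hull_a": [],
    "barrel_460mm": [],
    "ship_unreleased": ["hull_b"],  # not reachable from the root list
}

def deserialize(name):
    """Stand-in for the exporter framework returning a model's content references."""
    return REFS[name]

def content_closure(roots):
    """Breadth-first walk over content references starting from the root objects."""
    seen, queue = set(), list(roots)
    while queue:
        name = queue.pop(0)
        if name in seen:
            continue
        seen.add(name)
        queue.extend(deserialize(name))
    return seen

print(sorted(content_closure(["ship_yamato"])))
```

Anything outside the closure — unused, in-development, or future content — is simply never reached and therefore never packed into the distribution.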

Results

The development history of our exporter shows how it evolved from a simple, highly specialized tool into a powerful system that solves its immediate tasks and also found application in other content production processes. The basis of its successful development and reuse is a modular architecture that lets its individual "building blocks" be used to construct other systems.

In the near future, the exporter will face another test related to changing the BigWorld Engine file format. We are confident that the underlying architecture will not experience any difficulties and will be able to work with both the existing and the new file format.

  • Genre focus: 3D MMO of any genre;
  • Platform: PC, PS3, Xbox 360, iOS (iPad), Web;
  • Programming language: C++, Python
  • License: indie and commercial;
  • Open source: not included; source access is available for an additional fee;
  • Multiplayer: client-server;
  • Advantages: powerful, support for all the latest technologies, optimized, iOS support, cheap for such features;
  • Drawbacks: not available free of charge;
  • Engine developers: Big World Tech Inc.

    BigWorld Engine is one of the most advanced 3D engines for creating MMO games. Games such as "World of Tanks" and "Realm of the Titans" from Wargaming.net were made with it, along with titles from other global game development companies. More than 15 MMO games have been built on this engine. It is developed by BigWorld Technology.

    The engine's optimization makes it possible to create undemanding games with impressive graphics, and games can be ported to iOS. It is written in C++, while game logic is implemented in the convenient Python scripting language. Powerful tools and a client-server architecture are included. The FMOD library is supported for sound, and other libraries can be connected via a plug-in system. It works with XML files and MySQL databases. The toolkit includes the powerful World Editor, Model Editor, and Particle Editor.

    It is also quite affordable: a BigWorld: Indie Edition license costs only $299; BigWorld: Indie Source Edition, $2,999; BigWorld: Commercial Edition pricing is negotiated individually.

    This advanced engine is not inferior in capabilities to other world-class engines of its kind. It is available in Russian, Korean, English, and Japanese. Documentation is provided, and it can target browsers. In general, you can get to work if you have the knowledge and diligence.

    It is no longer available for third-party licensing, because Wargaming decided to stop distributing their engine.

    Official site: http://www.bigworldtech.com





    The BigWorld Technology tool chain provides a complete, end-to-end MMOG content creation system that will enhance the quality and timelines of your game. All tools are designed for cooperative production of game assets in a large team environment, ensuring effective use of resources and a smooth content pipeline.
