
Why 1C runs slowly. Automation tips

Probably everyone who works with products on the 1C:Enterprise platform has heard the phrase "1C is slow". Some have complained about it themselves; others have had to answer for it. In this article we will look at the most common causes of this problem and their solutions.

Let us start with a metaphor: before asking why a person has not arrived somewhere, it is worth making sure he has legs to walk with. So, let's begin with the hardware and network requirements.

If Windows 7 is installed:

If Windows 8 or 10 is installed:



Also remember that there must be at least 2 GB of free disk space, and the network connection must run at 100 Mbit/s or faster.
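The free-space requirement above is easy to verify from a script. A minimal sketch in Python (the 2 GB threshold is the one named in the text; the path to check is an assumption):

```python
import shutil

def enough_disk_for_1c(path=".", min_free_gb=2.0):
    """Check free space on the disk holding `path` against a threshold (GB)."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb, free_gb >= min_free_gb

free_gb, ok = enough_disk_for_1c(".")
print(f"free: {free_gb:.1f} GB, meets the 2 GB minimum: {ok}")
```

Run it from the folder where the file database lives to check the disk that actually matters.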

Listing exact server specifications for the client-server option does not make much sense, because in that case everything depends on the number of users and the specifics of the tasks they solve in 1C.

  When choosing a configuration for the server, remember the following:

  • One worker process of a 1C server consumes 4 GB on average (not to be confused with a user connection, since one worker process can serve as many connections as you specify in the server settings);
  • Running 1C and the DBMS (especially MS SQL) on the same physical server gives a gain when processing large volumes of data (for example, closing the month or calculating a budget model), but noticeably reduces performance on lightweight operations (for example, creating and posting a sales document);
  • Remember that the 1C server and the DBMS must be linked by a channel with a "thickness" of 1 Gbit/s or more;
  • Use high-performance disks, and do not combine the 1C server and DBMS roles with other roles (file server, Active Directory domain controller, etc.).
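The first bullet translates directly into a back-of-the-envelope RAM estimate. A sketch (the 4 GB per worker figure is from the list above; the OS reserve is an illustrative assumption):

```python
def server_ram_gb(worker_processes, gb_per_worker=4, os_reserve_gb=4):
    """Rough RAM sizing for a 1C application server: each worker process
    averages about 4 GB (figure from the article); the OS reserve of 4 GB
    is an assumption, adjust it for your environment."""
    return worker_processes * gb_per_worker + os_reserve_gb

print(server_ram_gb(4))  # 4 worker processes -> 20 GB
```

Remember that the worker count is set in the 1C server settings, so size the workers first and the RAM second.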

If after checking the equipment 1C still "slows down"

  We are a small company, 7 people, and 1C is slow. We turned to specialists, and they said that only the client-server option would save us. But for us such a solution is unacceptable: it is too expensive!

Carry out routine maintenance in the database *:

1.   Run the database in configurator mode.


2.   Select "Administration" in the main menu, and in it - "Testing and correction".


3.   Set all the checkboxes as in the picture. Click Run.

  * This procedure may take from 15 minutes to an hour, depending on the size of the database and the characteristics of your PC.

If this does not help, then we build a client-server style setup, but without additional investment in hardware and software:

1.   Choose the least loaded desktop computer in the office (not a laptop): it must have at least 4 GB of RAM and a network connection of at least 100 Mbit/s.

2.   Activate IIS (Internet Information Services) on it. To do this:





3.   Publish your database on this computer. Material on this topic is available on ITS; alternatively, contact a support specialist.

4.   On users' computers, configure access to the database through the thin client. To do this:


Open the 1C launch window.


Select your working database (here it is called "Your base"). Click "Edit". Set the switch to "On the web server", enter in the line below it the name or IP address of the server on which IIS was activated and the name under which the database was published. Click "Next".


Set the “Main Launch Mode” switch to “Thin Client” mode. Click Finish.

  We are a fairly big company, though not a huge one, 50 to 60 people. We use the client-server option, but 1C is terribly slow.

In this case it is recommended to split the 1C server and the DBMS server onto two different servers. When splitting, be sure to remember: if they remain on the same physical host and are simply virtualized, the disks of these virtual servers must be different, physically different! Also be sure to configure scheduled maintenance tasks on the DBMS server if it is MS SQL (more on this is described on the ITS website).

  We are a fairly big company, more than 100 users. Everything is configured in accordance with 1C's recommendations for this option, but some documents post very slowly, and sometimes a locking error occurs outright. Maybe we should collapse (prune) the database?

A situation like this usually arises because of the size of one very specific accumulation or accounting register (more often an accumulation register): either the register never "closes" (there are receipt movements but no corresponding expense movements), or the number of dimensions over which the register balance is calculated is very large. There may even be a mix of the two. How do we determine which register is spoiling everything?

  Note the time when documents are posted slowly, or the time at which a user got a locking error.

  Open the event log.



  Find the needed document, at the right time, for the right user, with the event type "Data. Post".



Examine the entire transaction block up to the point where the transaction was cancelled (if there was a locking error), or look for the longest step (one where the time since the previous record exceeds a minute).

  After that, make a decision, bearing in mind that collapsing this particular register is in any case cheaper than collapsing the entire database.
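Hunting for the longest step in the transaction block can be mechanized once the log entries are exported. A sketch in Python, assuming the timestamps were exported in the format shown (the format string and the sample data are assumptions, not the actual 1C export format):

```python
from datetime import datetime

def longest_gap(timestamps):
    """Given event-log timestamps of one transaction (as strings),
    return the largest gap in seconds and the index of the record
    that ends it. A gap over 60 s marks the record worth investigating."""
    ts = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t in timestamps]
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    worst = max(range(len(gaps)), key=gaps.__getitem__)
    return gaps[worst], worst + 1

rows = ["2016-05-10 14:00:01", "2016-05-10 14:00:03",
        "2016-05-10 14:02:40", "2016-05-10 14:02:41"]
gap, idx = longest_gap(rows)
print(gap, idx)  # 157.0 2: record 2 took over two and a half minutes
```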

  We are a very large company: more than 1000 users, thousands of documents per day, our own IT department, a huge fleet of servers, queries optimized several times over, but 1C is still slow. Apparently we have outgrown 1C and need something more powerful.

In the overwhelming majority of such cases the "brakes" are not in 1C but in the architecture of the solution in use. When choosing a new program for a business, remember that implementing your business processes in the program is cheaper and easier than reshaping them to fit some other, far more expensive program, and only 1C provides this opportunity. Therefore it is better to ask: "How do we fix the situation? How do we make 1C 'fly' on such volumes?" Let us briefly consider several options for "treatment":

  • Use the parallel and asynchronous programming technologies that 1C supports (background jobs, parallelized queries).
  • When designing the solution architecture, avoid accumulation registers and accounting registers in the "bottlenecks".
  • When designing the data structure (accumulation and/or information registers), stick to the rule: "The fastest table to write and read is a single-column table". What this means becomes clearer if you look at the standard RAUZ mechanism.
  • To process large volumes of data, use auxiliary clusters attached to the same database (but never do this alongside interactive work!). This lets you bypass the standard 1C locks, which makes it possible to work with the database at almost the same speed as working directly with SQL.
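Background jobs themselves are written in 1C's built-in language; as a language-neutral illustration of the first bullet (split a heavy batch across parallel workers and merge the results), here is a Python sketch. The chunking scheme, the worker count, and the "processing" itself are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for a background job that handles part of the documents."""
    return sum(chunk)  # pretend 'processing' is just summing

documents = list(range(1000))                 # the whole batch
chunks = [documents[i::4] for i in range(4)]  # split into 4 interleaved parts

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(process_chunk, chunks))

print(total)  # 499500, the same answer as processing the batch serially
```

The design point carries over: each worker gets an independent slice, so no worker waits on another's locks, and the merge step is cheap.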

It is worth noting that 1C optimization for holdings and large companies is a topic for a separate, large article, so stay tuned for updates on our website.

Photo by Alena Tulyakova, IA “Clerk.Ru”

This article covers the main mistakes made by novice 1C administrators and shows how to solve them, using the Gilev test as an example.

The main goal of this article is to lay out, for administrators (and programmers) who have not yet gained experience with 1C, nuances that are otherwise treated as obvious.

A secondary goal: if I have made any mistakes, Infostart will point them out to me faster than anywhere else.

V. Gilev's test has already become something of a de facto standard. The author gave quite clear recommendations on his site, so I will simply present some results and comment on the most likely mistakes. Naturally, the test results on your equipment may differ; they are just a guide to what the numbers should be and what you can strive for. I want to note right away that changes must be made step by step, checking after each step what result it gave.

There are similar articles on Infostart; I will put links to them in the corresponding sections (if I have missed something, please suggest it in the comments and I will add it). So, suppose your 1C is slow. How do you diagnose the problem, and how do you understand who is to blame, the administrator or the programmer?

Initial data:

The tested computer, the main guinea pig: HP DL180 G6, configured with 2 x Xeon 5650, 32 GB, Intel 362i, Win 2008 R2. For comparison, a Core i3-2100 shows comparable results in the single-threaded test. The equipment was deliberately not the newest; on modern hardware the results are noticeably better.

For testing 1C and SQL on separate servers, the SQL server: IBM System x3650 M4, 2 x Xeon E5-2630, 32 GB, Intel 350, Win 2008 R2.

To test the 10 Gbit network, Intel 520-DA2 adapters were used.

File version (the database lies on the server in a shared folder; clients connect over the network via the CIFS/SMB protocol). Step-by-step algorithm:

0. Put the Gilev test database in the same folder as the main databases on the file server. Connect from a client computer and run the test. Note the result.

It is understood that even on old computers from about 10 years ago (a Pentium on socket 775), the time from clicking the 1C:Enterprise shortcut to the database window appearing should be under a minute (a Celeron means slow work).

If your computer is worse than a Pentium on socket 775 with 1 GB of RAM, then I sympathize with you: comfortable work in 1C 8.2 in the file version will be difficult. Think about either an upgrade (it is high time) or switching to a terminal server (or a web server, in the case of thin clients and managed forms).

If the computer is no worse than that, then you can start prodding the administrator. At a minimum, check the operation of the network, the antivirus, and the HASP protection driver.

If the Gilev test at this stage shows 30 "parrots" or more, but the working 1C database is still slow, the questions go to the programmer.

1. As a guide to how much the client computer itself can "squeeze out", we check the operation of this computer alone, without the network. Put the test database on the local computer (on a very fast disk). If the client computer has no decent SSD, create a ramdisk. For now, the simplest free option is Ramdisk Enterprise.

For testing version 8.2, a 256 MB ramdisk is enough, and one thing is most important: after rebooting the computer with the ramdisk running, it should have 100-200 MB of memory free. Accordingly, without a ramdisk, normal operation needs 300-400 MB of free memory.

For testing version 8.3, a 256 MB ramdisk is also enough, but more RAM is needed.

During testing, watch the processor load. In the case close to ideal (a ramdisk), the local file-mode 1C loads one processor core while it runs. Accordingly, if during testing your processor core is not fully loaded, look for weak points. The influence of the processor on 1C performance has been described elsewhere, a little emotionally but generally correctly. Just for reference: even on a modern Core i3 with a high clock frequency, figures of 70-80 are quite realistic.

The most common errors at this stage.

  • An incorrectly configured antivirus. There are many antiviruses and the settings differ for each; I can only say that with proper configuration neither Dr.Web nor Kaspersky interferes with 1C. With the default settings, about 3-5 parrots (10-15%) can be lost.
  • The performance mode. For some reason few people pay attention to it, yet its effect is the most significant. If you need speed, you must set it on both client and server computers. (Gilev has a good description. The only caveat: on some motherboards, if you turn off Intel SpeedStep, you cannot turn on Turbo Boost.)
   In short: while 1C runs, there is a lot of waiting for responses from other devices (disk, network, etc.). While waiting for a response, if the performance mode is set to Balanced, the processor lowers its frequency. A response arrives from the device, and 1C (that is, the processor) needs to work, but the first cycles run at the reduced frequency; then the frequency rises, and then 1C is again waiting for a device. And so on, many hundreds of times per second.

You can (and should) enable the performance mode in two places:

  • Through the BIOS. Disable the C1, C1E and Intel C-state (C2, C3, C4) modes. They are named differently in different BIOSes, but the meaning is the same. They take a while to find and changing them requires a reboot, but once done you can forget about them. If everything is done correctly in the BIOS, speed will be added. On some motherboards, the BIOS settings can make the Windows performance mode irrelevant. (Examples of BIOS settings are given by Gilev.) These settings mainly concern server processors or "advanced" BIOSes; if you have not found them and you do not have a Xeon, that is okay.

  • Control Panel - Power Options - High Performance. The minus: if the computer has not been serviced for a long time, the fans will hum louder, it will heat up more and consume more energy. That is the price of performance.
   How to check that the mode is on: start Task Manager - Performance - Resource Monitor - CPU, and wait until the processor is busy with nothing.
   These are the default settings.

BIOS C-state enabled,

balanced power mode


   BIOS C-state enabled, high performance mode

For Pentium and Core you can stop there;
from a Xeon you can still squeeze out a few more "parrots"


   BIOS C-state off, high performance mode.

If you do not use Turbo Boost, this is how a
performance-tuned server should look


And now the numbers. Let me remind you: Intel Xeon 5650, ramdisk. In the first case the test shows 23.26, in the last one 49.5. The difference is almost twofold. The figures may vary, but the ratio stays roughly the same for Intel Core as well.

Dear administrators, you can scold 1C all you like, but if end users need speed, you must enable the high-performance mode.

Turbo Boost. First find out whether your processor supports this function. If it does, then you can quite legitimately get some more performance. (I do not want to discuss frequency overclocking, especially of servers; do it at your own risk. But I agree that increasing the bus speed from 133 to 166 gives a very noticeable increase in both speed and heat dissipation.)

How to enable Turbo Boost has been written up elsewhere. But! For 1C there are nuances (not the most obvious ones). The difficulty is that the maximum effect of Turbo Boost shows up when C-state is turned on. And you get something like this:

Note that the multiplier is at maximum, the core speed is beautiful, the performance is high. But what will this actually mean for 1C?

In the end it turns out that by CPU performance tests the option with multiplier 23 is ahead; in Gilev's tests in the file version the performance with multipliers 22 and 23 is the same; but in the client-server version the multiplier-23 option is a horror show (even with C-state set to level 7, it is still slower than with C-state turned off). Therefore the recommendation: check both options for yourself and choose the better one. In any case, the difference between 49.5 and 53 parrots is quite significant, especially since it takes little effort.

Conclusion: Turbo Boost must be enabled. Let me remind you that it is not enough to enable the Turbo Boost item in the BIOS; you also need to look at the other settings (BIOS: QPI L0s, L1 - disable; demand scrubbing - disable; Intel SpeedStep - enable; Turbo Boost - enable. Control Panel - Power Options - High Performance). And I would still (even for the file version) settle on the option with C-state turned off, even though the multiplier is lower there. It comes out something like this...

A rather controversial point is the memory frequency. For example, some sources present the memory frequency as very influential. My tests did not reveal such a dependence. I will not compare DDR2/3/4; I will show the results of changing the frequency within the same line. The memory modules are the same, but in the BIOS we force lower frequencies.




And the test results. 1C 8.2.19.83; for the file version, a local ramdisk; for client-server, 1C and SQL on the same computer with Shared Memory. Turbo Boost is turned off in both cases. 8.3 shows comparable results.

The difference is within the measurement error. I specifically pulled out the CPU-Z screenshots to show that other parameters change along with the frequency (the same CAS Latency and RAS to CAS Delay), which cancels out the frequency change. A difference appears when the memory modules are physically replaced, from slower to faster ones, but even there the numbers are not very significant.

2. Having dealt with the processor and memory of the client computer, we move on to the next very important place: the network. Volumes of books have been written about network tuning and there are articles on Infostart, so I will not focus on this topic here. Before starting 1C testing, please make sure that iperf between the two computers shows the full bandwidth (for 1-gigabit cards, at least 850 Mbit/s, better 950-980) and that Gilev's advice has been followed. Then, oddly enough, copying one large file (5-10 GB) over the network is the simplest test. An indirect sign of normal operation on a 1-gigabit network is an average copy speed of about 100 MB/s; good operation is 120 MB/s. Note that the weak point may also be the processor: the SMB protocol on Linux parallelizes rather poorly, and during operation it can easily "eat" one processor core and go no further.
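The 100-120 MB/s figures follow from simple arithmetic on the link speed. A sketch (the efficiency factors are assumptions covering protocol and disk overhead, not measured values):

```python
def expected_copy_mb_s(link_mbit, efficiency=0.85):
    """Nominal link speed to expected file-copy rate:
    1000 Mbit/s is 125 MB/s of raw payload; protocol and disk
    overhead leave some fraction of that."""
    return link_mbit / 8 * efficiency

print(round(expected_copy_mb_s(1000)))        # 106 MB/s: normal operation
print(round(expected_copy_mb_s(1000, 0.96)))  # 120 MB/s: a well-tuned network
```

If your measured copy speed is far below the first figure, fix the network before blaming 1C.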

And one more thing. With default settings, a Windows client works best with a Windows server (or even a Windows workstation) over SMB/CIFS; a Linux client (Debian and Ubuntu; I did not look at the rest) works better with a Linux server over NFS (it works with SMB too, but the parrots are higher with NFS). The fact that linear copying from a Windows server to Linux over NFS runs faster in a single stream means nothing. Tuning Debian for 1C is a topic for a separate article; I am not ready for it yet, although I can say that in the file version I got even slightly more performance than in the Windows variant on the same hardware, but with Postgres and more than 50 users everything is still very bad for me.

And now the most important thing, which "burnt" administrators know but beginners overlook. There are many ways to set the path to the 1C database. You can use \\server\share, you can use \\192.168.0.1\share, you can do net use z: \\192.168.0.1\share (and in some cases this method will also work, but by no means always) and then point at drive Z. All these paths seem to point to the same place, but for 1C there is only one way that reliably gives normal performance. So, the right thing to do is this:

At the command line (or in group policies, or however is convenient for you), do: net use DriveLetter: \\server\share. Example: net use m: \\server\bases. I specifically emphasize: NOT the IP address, but the server NAME. If the server is not visible by name, add it to DNS on the server, or locally to the hosts file. But the reference must be by name. Accordingly, in the path to the database, go through this drive (see picture).

And now I will show in numbers why this advice matters. Initial data: cards Intel X520-DA2, Intel 362, Intel 350, Realtek 8169; OS Win 2008 R2, Win 7, Debian 8; the latest drivers and updates applied. Before testing I made sure that iperf gives full bandwidth (except for the 10-Gbit cards, which only managed to squeeze out 7.2 Gbit; I will look into why later, the test server is not yet configured properly). The disks differ, but everywhere there is an SSD (a single disk inserted specially for testing and loaded with nothing else) or a RAID of SSDs. The speed of 100 Mbit was obtained by limiting the settings of the Intel 362 adapter. No differences were found between 1-Gbit copper (Intel 350) and 1-Gbit optics (Intel X520-DA2, obtained by limiting the adapter speed). Maximum performance, Turbo Boost off (purely for comparability of results; Turbo Boost adds a little under 10% to good results and may not affect bad ones at all). Versions 1C 8.2.19.86 and 8.3.6.2076. Not all the figures are given, only the most interesting, so that there is something to compare.

   Results in "parrots", three runs per platform (a dash means no measurement for that run):

   Configuration                                  | 1C 8.2 (8.2.19.83)    | 1C 8.3 (8.3.6.2076)
   -----------------------------------------------|-----------------------|----------------------
   100 Mbit CIFS, Win 2008 - Win 2008, IP address | 11.20 / 11.29 / 12.15 | 6.13 / 6.61 / -
   100 Mbit CIFS, Win 2008 - Win 2008, by name    | 26.18 / 26.18 / 25.77 | 34.25 / 33.33 / 33.78
   1 Gbit CIFS, Win 2008 - Win 2008, IP address   | 15.20 / 15.29 / 15.15 | 14.98 / 15.58 / 15.53
   1 Gbit CIFS, Win 2008 - Win 2008, by name      | 43.86 / 43.10 / 43.10 | 43.10 / 43.86 / 43.48
   1 Gbit CIFS, Win 2008 - Win 7, by name         | 40.65 / 40.65 / -     | 39.37 / 40.00 / 39.37
   1 Gbit CIFS, Win 2008 - Debian, by name        | 37.04 / 36.76 / -     | 37.59 / 37.88 / 37.59
   10 Gbit CIFS, Win 2008 - Win 2008, IP address  | 16.23 / 15.11 / 14.97 | 15.53 / 16.23 / -
   10 Gbit CIFS, Win 2008 - Win 2008, by name     | 44.64 / 44.10 / 42.74 | 42.74 / 42.74 / 42.74

Conclusions (from the table and from personal experience; they apply only to the file version):

  • Over the network you can get quite decent numbers for work if the network is configured properly and the path in 1C is entered correctly. Even a first-generation Core i3 can easily give 40+ parrots, which is pretty good, and it is not only about parrots: in real work the difference is noticeable too. But! With several users (more than 10) the limitation is no longer the network (1 Gbit is still enough) but locking during multi-user work (Gilev).
  • The 1C 8.3 platform is many times more demanding of competent network setup. For the basic settings, see Gilev, but keep in mind that anything can have an effect. I have seen speedups from uninstalling (not just disabling) the antivirus, from removing protocols like FCoE, from switching drivers to an older but Microsoft-certified version (especially relevant for cheap cards like ASUS and D-Link), and from removing a second network card from the server. There are many options; configure the network thoughtfully. There may well be a situation where platform 8.2 gives the numbers above while 8.3 gives two or even more times less. Try playing with 8.3 platform versions; sometimes the effect is very large.
  • 1C 8.3.6.2076 (and maybe later; I have not pinned down the exact version) is still easier to set up for network operation than 8.3.7.2008. I managed to get normal network operation (in comparable parrots) from 8.3.7.2008 only a few times, and could not reproduce it in the general case. I did not dig very deep, but judging by the dumps from Process Explorer, writes there do not go as smoothly as in 8.3.6.
  • Even though at 100 Mbit the utilization graph stays low (you could say the network is free), the working speed is still much lower than at 1 gigabit. The reason is network latency.
  • Other things being equal (a well-functioning network), for 1C 8.2 an Intel-Realtek pairing is 10% slower than Intel-Intel, and Realtek-Realtek can produce sharp slumps out of nowhere. Therefore, if there is money, it is better to have Intel network cards everywhere; if not, put Intel at least on the server (thank you, Captain Obvious). Besides, there are many times more instructions for tuning Intel network cards.
  • The default antivirus settings (Dr.Web version 10, for example) take away about 8-10% of parrots. If configured properly (allowing the 1cv8 process to do everything, although this is not safe), the speed is the same as without an antivirus.
  • Linux gurus, please do NOT read this. A server with Samba is great and free, but if you install Win XP or Win 7 (or better, a server OS) on the server, the file version of 1C will work faster. Yes, Samba, the protocol stack, the network settings and much more can be tuned well in Debian/Ubuntu, but that is advice for experts. There is no sense in installing Linux with the default settings and then saying that it is slow.
  • It is quite useful to check the operation of disks connected via net use with fio. At least it will be clear whether the problem is in the 1C platform or in the network/disk.
  • For a single-user scenario I cannot come up with tests (or a situation) where the difference between 1 Gbit and 10 Gbit would be visible. The only place where 10 Gbit gave a better result for the file version was connecting drives via iSCSI, but that is a topic for a separate article. Still, I think 1-Gbit cards are enough for the file version.
  • Why 8.3 works noticeably faster than 8.2 at 100 Mbit, I do not understand, but it is a fact. All other equipment and all other settings were exactly the same; only the platform differed.
  • Untuned NFS (Win-Win or Win-Linux) gives 6 parrots; I did not include it in the table. After tuning I got 25, but unstably (the spread between measurements was more than 2 units). For now I cannot recommend using Windows with the NFS protocol.
   After all the settings and checks, run the test again from the client computer and enjoy the improved result (if there is one). If the result is above 30 parrots (and especially above 40), fewer than 10 users work simultaneously, and the working database still slows down, it is almost certainly the programmer's problem (or you have already hit the ceiling of the file version).
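The latency point from the list above can be made concrete with a toy model: a file-mode client issues thousands of small synchronous requests, and each one pays a full round trip, so the link can look idle while the user waits. All numbers below are illustrative assumptions, not measurements from this article:

```python
def operation_time_s(requests, rtt_ms, payload_kb, bandwidth_mbit):
    """Total time for many small synchronous requests:
    each pays a round trip plus its own transfer time."""
    transfer_s = payload_kb * 8 / 1024 / bandwidth_mbit  # per request
    return requests * (rtt_ms / 1000 + transfer_s)

# 2000 requests of 4 KB each over a LAN with a 0.2 ms round trip:
for mbit in (100, 1000):
    print(mbit, "Mbit/s:", round(operation_time_s(2000, 0.2, 4, mbit), 2), "s")
```

Under these assumptions the 100 Mbit case takes roughly twice as long even though the link is nearly idle: the fixed per-request round trip, not the bandwidth, dominates.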

Terminal server (the database is on the server; clients connect over the network via the RDP protocol). Step-by-step algorithm:

  • Add the Gilev test database to the server in the same folder as the main databases. Connect from the same server and run the test. Note the result.
  • Set up the processor the same way as in the file version. For a terminal server, the processor generally plays the main role (assuming there are no obvious weak points such as a lack of memory or a huge amount of unnecessary software).
  • Tuning the network cards of a terminal server has practically no effect on 1C. For "special" comfort, if your server gives more than 50 parrots, you can play with newer versions of the RDP protocol, purely for user convenience: faster response and scrolling.
  • With many actively working users (and here you can already try to connect 30 people to one database), an SSD is highly desirable. For some reason it is believed that the disk does not particularly affect 1C, but all such tests are run with the controller write cache enabled, which is wrong. The test database is small and fits entirely in the cache, hence the high numbers. On real (large) databases everything will be completely different, so the cache is disabled for these tests.
For example, I ran the Gilev test with different disk options, using whatever disks were at hand, just to show the trend. The difference between 8.3.6.2076 and 8.3.7.2008 is small (on a ramdisk with Turbo Boost, 8.3.6 gives 56.18 and 8.3.7.2008 gives 55.56; in the other tests the difference is even smaller). Power mode: maximum performance; Turbo Boost disabled (unless stated otherwise).
   Results in "parrots", three runs per platform (a dash means no measurement for that run):

   Configuration                            | 1C 8.2 (8.2.19.83)    | 1C 8.3 (8.3.7.2008)
   -----------------------------------------|-----------------------|----------------------
   Raid 10 4x SATA 7200 (ATA ST31500341AS)  | 21.74 / 21.65 / 21.65 | 33.33 / 33.46 / 35.46
   Raid 10 4x SAS 10k                       | 28.09 / 28.57 / 28.41 | 42.74 / 42.02 / 43.01
   Raid 10 4x SAS 15k                       | 32.47 / 32.05 / 31.45 | 45.05 / 45.05 / 44.64
   Single SSD                               | 49.02 / 48.54 / 48.54 | 51.55 / 51.02 / 51.55
   Ramdisk                                  | 50.51 / 49.02 / 49.50 | 52.08 / 52.08 / 52.08
   Ramdisk                                  | 53.76 / 53.19 / 53.19 | 55.56 / 54.95 / 56.18
   Cache enabled on the RAID controller     | 49.02 / - / -         | 51.55 / - / -
  • The enabled RAID controller cache eliminates all difference between the disks; the numbers are the same for SATA and SAS. Testing with it on a small volume of data is useless and not indicative of anything.
  • For platform 8.2 the difference in performance between the SATA and SSD options is more than twofold. This is not a typo. If you watch the performance monitor during the test on SATA disks, you can clearly see "Active disk time (%)" at 80-95. Yes, if you enable the disks' own write cache the speed rises to 35, and with the RAID controller cache enabled, to 49 (regardless of which disks are being tested). But these are synthetic cache parrots; in real work with large databases there will never be a 100% write-cache hit ratio.
  • The speed of even cheap SSDs (I tested an Agility 3) is quite enough for the file version. The write endurance is another matter and has to be assessed case by case; an Intel 3700 will obviously be an order of magnitude higher, but so is the price. And yes, I understand that when testing an SSD I am also largely testing its cache; the real results will be lower.
  • The most correct solution (from my point of view) is to allocate two SSDs in a mirrored RAID for the file database (or several file databases) and put nothing else there. Yes, in a mirror the SSDs wear out equally, and that is a minus, but at least you are somewhat insured against errors in the controller electronics.
  • The main advantages of SSDs for the file version appear when there are many databases, each with several users. If there are one or two databases and around 10 users, SAS disks will be enough (but in any case watch the load on those disks, at least through perfmon).
  • The main advantages of a terminal server: its clients can be very weak, and the network settings affect it much less (again, thank you, Captain Obvious).
Conclusions: if you run the Gilev test on the terminal server (from the same disk where the working databases are) at the moments when the working database slows down, and the Gilev test shows a good result (above 30), then the slow operation of the main working database is most likely the programmer's fault.

If the Gilev test shows small numbers even though your processor frequency is high and your disks are fast, then the administrator needs to take at least perfmon, record all the results somewhere, and watch, observe and draw conclusions. There is no universal advice here.
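One cheap way to see what the disks really do, without the write cache flattering the numbers, is a write probe that forces fsync. A Python sketch (the sizes and the temp-file location are assumptions; run it on the disk that actually holds the databases):

```python
import os, tempfile, time

def fsynced_write_mb_s(size_mb=64, block_kb=64):
    """Sequential write followed by fsync: the data must reach the device,
    so the figure is closer to what a database commit experiences than
    a cached copy would be."""
    block = os.urandom(block_kb * 1024)
    fd, path = tempfile.mkstemp()
    t0 = time.perf_counter()
    try:
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
    finally:
        os.remove(path)
    return size_mb / (time.perf_counter() - t0)

print(f"{fsynced_write_mb_s():.1f} MB/s")
```

A dedicated tool such as fio gives far more detail (random I/O, queue depths), but even this probe separates "the cache is fast" from "the disk is fast".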

Client-server option.

The tests were run only on 8.2, because on 8.3 everything depends quite seriously on the specific version.

For testing, I chose different server options and the network between them to show the main trends.

   Results in "parrots", three runs per configuration (the headers of the last five columns were garbled in the original; only "1C: Xeon 5650 =" survives):

   Configuration                                          | 1C 8.2, three runs
   -------------------------------------------------------|----------------------
   1C: Xeon 5520, SQL: Xeon E5-2630                       | 16.78 / 17.12 / 16.72
   1C: Xeon 5520, SQL: Xeon E5-2630, Fiber Channel - SSD  | 18.23 / 17.06 / 16.89
   1C: Xeon 5520, SQL: Xeon E5-2630, Fiber Channel - SAS  | 16.84 / 14.53 / 13.44
   1C: Xeon 5650, SQL: Xeon E5-2630                       | 28.57 / 29.41 / 29.76
   1C: Xeon 5650, SQL: Xeon E5-2630, Fiber Channel - SSD  | 27.78 / 28.41 / 28.57
   1C: Xeon 5650, SQL: Xeon E5-2630                       | 32.05 / 31.45 / 32.05
   1C: Xeon 5650 = ...                                    | 34.72 / 34.97 / 34.97
   1C: Xeon 5650 = ...                                    | 36.50 / 36.23 / 36.23
   1C: Xeon 5650 = ...                                    | 23.26 / 23.81 / 23.26
   1C: Xeon 5650 = ...                                    | 40.65 / 40.32 / 40.32
   1C: Xeon 5650 = ...                                    | 39.37 / 39.06 / 39.06

It seems I have covered all the interesting options; if something else interests you, write in the comments and I will try to look into it.

  • SAS on a SAN is slower than local disk subsystems, even though the SAN has larger cache sizes. SSDs, both local and on the SAN, run at comparable speeds in the Gilev test. I do not know of any standard multithreaded test (one that exercises all the equipment, not only writes) except the 1C load test from the MCC.
  • Changing the 1C server from 5520 to 5650 gave almost a doubling of performance. Yes, the server configurations do not match completely, but the trend shows (nothing surprising).
  • Increasing the frequency on the SQL server of course gives an effect, but not the same as on the 1C server, the MS SQL server is very good at (if you ask for it) to use multicore and free memory.
  • Changing the network between 1C and SQL from 1 Gbit to 10 Gbit gives about 10% more parrots (Gilev units). I expected more.
  • Enabling Shared Memory still gives an effect, although not the 15% described in the article. It is worth doing in any case, since the gain is quick and free. If during installation someone gave the SQL server a named instance, then for 1C to work the server name must be specified not as an FQDN (tcp/ip would be used then), not via localhost or just ServerName, but as ServerName\InstanceName, for example zz-test\zztest. (Otherwise there will be a DBMS error: Microsoft SQL Server Native Client 10.0: Shared Memory Provider: The shared memory library used to establish a connection to SQL Server 2000 was not found. HRESULT=80004005, HRESULT=80004005, HRESULT=80004005, SQLSrvr: SQLSTATE=08001, state=1, Severity=10, native=126, line=0.)
  • For fewer than 100 users, the only reason to split onto two separate servers is a Windows Server 2008 Std (and older) license, which supports only 32 GB of RAM. In all other cases, 1C and SQL should definitely be installed on one server, and that server should be given more memory (at least 64 GB). Giving MS SQL less than 24-28 GB of RAM is unreasonable greed (if you think it has enough memory and everything works fine, maybe the file version of 1C would have been enough for you?).
  • How much worse the 1C + SQL combination works in a virtual machine is the topic of a separate article (hint: noticeably worse). Even with Hyper-V things are not so clear-cut...
  • The "Balanced" power mode is bad. Its results correlate quite well with those of the file version.
  • Many sources say that debugging mode (ragent.exe -debug) strongly reduces performance. Well, it does reduce it, yes, but I would not call 2-3% a significant effect.
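Whether the Shared Memory transport mentioned above is actually being used (rather than tcp/ip) can be checked on the SQL side. This sqlcmd query is a sketch: the instance name zz-test\zztest is taken from the example above, and the `lpc:` prefix forces a shared-memory connection.

```shell
:: Ask SQL Server which transport the current connection is using;
:: "Shared memory" in the output confirms that lpc: worked
sqlcmd -S lpc:zz-test\zztest -Q "SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID"
```

If this command fails with the Shared Memory Provider error quoted above, the instance name in the 1C connection settings is almost certainly written incorrectly.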
   There are fewer tips here for a particular case, because "brakes" in the client-server version are the most difficult case, and everything is tuned very individually. The easiest answer is that for normal operation you need a separate server ONLY for 1C and MS SQL, processors with the maximum frequency (above 3 GHz), SSD disks for the base, more memory (128 GB+), and no virtualization. If that helped - excellent, you were lucky (there will be many such lucky ones; more than half of the problems are solved by an adequate upgrade). If not, any other options already require separate investigation and tuning.

Very often people ask me questions like:

  • why does the 1C server slow down?
  • the computer with 1C is very slow
  • the 1C client slows down terribly

What to do and how to defeat this - let's take it in order:

Clients work very slowly with the server version of 1C

In addition to 1C working slowly, operations with network files are also slow. The problem occurs both during normal operation and over RDP.

To solve this, after each installation of Windows 7 or Server 2008 I always run:

netsh int tcp set global autotuninglevel=disabled

netsh int tcp set global rss=disabled chimney=disabled

and the network works without problems

Sometimes the optimal setting is:

netsh interface tcp set global autotuninglevel=HighlyRestricted
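After applying either variant, the resulting values are worth verifying, since the accepted parameter names differ slightly between Windows builds:

```shell
:: Show the resulting global TCP parameters to confirm the changes took effect
netsh int tcp show global
```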


Configure Anti-Virus or Windows Firewall

How to configure an anti-virus or the Windows firewall for 1C server operation (a 1C:Enterprise Server + MS SQL 2008 bundle, for example).

Add rules:

  • If the SQL server accepts connections on the standard TCP port 1433, allow it.
  • If the SQL port is dynamic, allow connections for the application %ProgramFiles%\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe.
  • The 1C server works on port 1541, the cluster on 1540, and the range 1560-1591. For completely mystical reasons, sometimes even such a list of open ports does not allow connections to the server. To be safe, open the whole range 1540-1591.
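The rules above can be created from the command line. These `netsh advfirewall` commands are a sketch: the rule names are arbitrary, and the sqlservr.exe path assumes a default MS SQL 2008 R2 instance - adjust it to your installation.

```shell
:: SQL Server on the standard port
netsh advfirewall firewall add rule name="MS SQL 1433" dir=in action=allow protocol=TCP localport=1433

:: Dynamic SQL port: allow the executable instead of a fixed port
netsh advfirewall firewall add rule name="sqlservr" dir=in action=allow program="%ProgramFiles%\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe"

:: 1C server: open the whole 1540-1591 range, as recommended above
netsh advfirewall firewall add rule name="1C Server" dir=in action=allow protocol=TCP localport=1540-1591
```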

Server / Computer Performance Tuning

In order for the computer to work with maximum performance - you need to configure it for this:

1. BIOS settings

  • In the server BIOS, disable all processor power-saving options.
  • If there is a "C1E" option - be sure to turn it OFF!!
  • For some poorly parallelized tasks it is also recommended to disable Hyper-Threading in the BIOS.
  • In some cases (especially for HP!) you need to go into the server BIOS and turn OFF the items whose names contain EIST, Intel SpeedStep and C1E.
  • Instead, find the processor items whose names contain Turbo Boost, and turn them on.
  • If the BIOS has a general energy-saving mode setting - set it to the maximum performance mode (it may also be called "aggressive").

2. Power scheme settings in the operating system - High performance

Servers with the Intel Sandy Bridge architecture can dynamically change processor frequencies, which is why fixing the operating system power scheme at High performance matters.
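On Windows the "High performance" scheme can be activated without the GUI. The GUID below is the standard built-in identifier of the High performance plan on Windows 7 / Server 2008 R2 and later.

```shell
:: List available power schemes (the active one is marked with *)
powercfg /list

:: Activate the built-in "High performance" plan by its well-known GUID
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```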

1C is a program designed to automate the activities of almost any enterprise, and it greatly simplifies many processes within a company. However, users of this product have repeatedly noticed that 1C sometimes slows down. There can be plenty of reasons for this, and the problem is not necessarily in the program itself. It may be that your machine does not meet the system requirements needed for the program to function normally, but sometimes other causes of slow operation appear as well.

What are the minimum system requirements for 1C?

As with any other software product, 1C has minimum system requirements. Let us go through them now.

System requirements for 1C:

  • CPU clock speed: 2.4 GHz (for the client-server version), 3 GHz (for the file version);
  • memory (RAM): 8 GB (file version), 4 GB (client-server version);
  • network connection speed - at least 100 Mb/s;
  • free hard drive space - at least 2 GB.

Over time, any company develops and grows, and the number of employees working in 1C increases accordingly. It is no secret that as more people work with a single accounting system, the performance of 1C drops significantly if appropriate measures are not taken.

It all starts with complaints from employees that 1C is “slowing down”, “freezing” or “crashing” altogether with an error. And at some point it becomes impossible to work with the accounting system.

Figure 1 - Transaction blocking in 1C

How not to miss the very moment when the system needs "help"?

There are several main signs of this:
  • The number of users is already more than 10-20.
  • The size of the base is close to 4 GB.
  • A heavily modified, non-standard configuration.
However, it often happens differently: the number of users and the size of the database have not yet grown, but "braking" has already appeared in the accounting system. In this case, the causes of poor performance must be found and eliminated urgently.

If you have a non-standard (modified) configuration, there is a risk that the program code is not written optimally!

Because of this, operations are carried out inefficiently, and it is very difficult to notice the problem immediately. At first the customized system works quite tolerably, but as the amount of data grows, queries against it start to run for a very long time and working comfort drops significantly.

And here you can’t figure it out without a good 1C programmer.

If you do not want to resort to the services of such a specialist or if it is not possible for any reason, there are several recommendations for identifying a problem:

  • It is worth paying attention to suspiciously long operations.
      For example, a report takes several minutes to build, or posting a single document takes longer than a few seconds. Perhaps something in the program code is written non-optimally - for example, excess data is being selected. This should lead to the idea that the code needs correcting.
  • If a large number of users work in the 1C database, 1C in file mode will stop coping with the load, which affects the speed of work.
  • If a certain group of users constantly accesses the same documents, their processing speed will slow down noticeably. For example, the sales department deals with customers, and each of the department's employees often refers to the same documents. With such repeated use, the speed of posting documents drops. In this case, you need to think about switching to SQL.
  • Another critical factor for 1C operation is base filling.
    The maximum allowable amount of data in one table of a file-mode 1C database is 4 GB. If filling reaches this critical point, the database will stop working: it becomes impossible to enter additional information, and the system reports that there is not enough memory for the new data. This happens even if the base is hosted on a new server whose resources are not yet occupied by anything, because the limit is internal to the file database itself. In this case, the database also needs to be converted to SQL.
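For the file version, the approach of the 4 GB limit can be spotted early by watching the size of the infobase file. This is only a rough proxy, since the limit applies to individual internal tables rather than the file as a whole, and the path below is an example - substitute the folder of your own base.

```shell
:: Show the size of the infobase file (1Cv8.1CD is the standard file name
:: of a file-mode 1C infobase; the folder path here is hypothetical)
dir "C:\1C_Bases\Accounting\1Cv8.1CD"
```

If the file is already in the multi-gigabyte range, planning the migration to SQL before the hard stop occurs is far cheaper than doing it in a panic.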

These are the main problems with 1C, which bring inconvenience to users.

Base 1C can be compared with a car. And, like any car, it requires regular maintenance. Yes, you can "ride" it for a long time without looking "under the hood." However, when it completely ceases to "go", it will require substantial investment.
In this regard, we recommend conducting regular routine operations with the base:

  • Reindexing information base tables.
  • Checking the logical integrity of the infobase.
  • Recounting of totals.
  • Update full-text search indexes.

It is best to carry out these operations once a week - on weekends or at night, when no one is working in the database.
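For a base already converted to SQL, reindexing can also be scheduled on the DBMS side. The sqlcmd call below is a sketch using the undocumented but long-standing sp_MSforeachtable helper; the database name base1c is an example.

```shell
:: Rebuild the indexes of every table in the 1C database (run off-hours);
:: -E uses Windows authentication, -d selects the database
sqlcmd -E -d base1c -Q "EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD'"
```

This covers the reindexing item; integrity checks and total recalculation are still done from the 1C side (configurator or scheduled jobs).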

It is also worth monitoring the load on the server's processor and memory and, most importantly, the average disk queue length. It is desirable that it does not exceed 1, and the maximum permissible value is 3. However, this parameter is also relative, since disks may fail to cope with the load even when the queue is below 1 - especially SATA disks with low random read/write speed, and random access is exactly the pattern any database actively uses.
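The disk queue mentioned above can be watched live with the built-in `typeperf` utility; the counter name assumes an English Windows installation.

```shell
:: Sample the average disk queue length every second, 30 samples;
:: sustained values above 1 (and certainly above 3) indicate a disk bottleneck
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 1 -sc 30
```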

And, only if all this does not help, and the load on the hardware becomes peak, you should think about updating the server or, if there is none, about buying it.

An alternative to substantial investment may be moving 1C to the "cloud", handing these concerns over to a cloud provider. For a small rental fee you can forget about problems with performance and infrastructure maintenance - and it no longer matters how many users you need to add tomorrow or how much the 1C base grows.
