
Which server for 1C 8.3. 1C:Accounting on a separate server

Over time, the 1C product line has grown from a bookkeeping application into a broad suite for accounting and for supporting almost any type of business, competing with the world's "heavyweights" SAP R/3 and Microsoft Dynamics AX (Axapta).

Russian companies increasingly organize their business processes around modern 1C 8.3 configurations such as "Trade Management", "Manufacturing Enterprise Management", "ERP Enterprise Management" and the like. Accounting, marketing, production and sales departments are moved into 1C, and integration with IP telephony and document management systems follows. However, right after the decision "let's work in 1C", questions arise: on what resources will the central 1C database run, and what hardware will give the best result for a reasonable budget? Giant public-sector enterprises have it easier: a clear order is given to numerous in-house IT integrators and architects, and large-budget tenders are launched with a mandatory turnkey concept and further support of the system by certified specialists. But what about companies that want to purchase and install one of the 1C:Enterprise products themselves, spending their budget wisely?

The most basic mistake, leaving aside the use of pirated or unverified software, is skimping on hardware for 1C. This is especially common in startups and small companies. There is a belief that it is not necessary to buy expensive server equipment with Intel Xeon processors, to estimate in advance the required amount of RAM and the load on the CPU and the disk subsystem, to build redundant disk arrays (RAID), or to use professional disk controllers with cache memory, and so on. Errors in sizing the IT architecture for 1C lead to sad consequences, which the company discovers only when its business processes stop. It is therefore very important to pay attention to every hardware node of the server platform for 1C.

Examples of typical problems due to incorrect construction of an IT architecture for 1C:
  • "Braking" of the base and 1C interfaces due to the excess load on key resources (usually RAM or disk subsystem).
  • Errors and "crashes" of the 1C program due to the instability of the incorrectly selected equipment.
  • Downtime of the company due to the failure of the central hardware.
  • Partial or complete loss of 1C data due to random hardware or software failures.

Hardware resources of the 1C server

Below we consider the key hardware resources where a mistake in selection can ruin the entire enterprise automation project when building a server for 1C on your own.

Central processing unit (CPU)

The number of physical CPU cores. An eternal topic of dispute on various 1C forums is what matters more: CPU frequency or the number of cores. The roots of these arguments go back to 1C 8.0 or even 1C 7.7. Indeed, the 1C executable processes of earlier versions were strictly single-threaded: no matter how many cores the central processor provided, the 1C 8.0 enterprise server service or the 1C 7.7 "thick client" always occupied only one core in the operating system. Today the picture has changed: the operating system freely distributes the work of a single 1C:Enterprise (rphost) process across several CPU cores (see Figure 1).




Figure 1 - CPU load during the operation of 1C server processes.


But this does not mean that if you buy a processor with the maximum number of cores, a 1C server paired with a DBMS (most often MS SQL Server) will show fantastic performance and re-posting accounting periods in 1C will take only a few minutes. You have to distinguish between the speed of a single operation and the ability to process a large amount of work simultaneously. The number of physical cores addresses the stability and performance of many different tasks running at the same time on the 1C:Enterprise server and the DBMS. Hence the conclusion: the more 1C users there are, the more the right number of cores matters for their comfortable simultaneous work. The dependence of the number of users on the number of cores for the 1C server is shown in Table 1.


Number of concurrent users on the 1C:Enterprise server | Processor type and model | Number of cores used
Up to 10 users | Desktop Intel Core from 3.1 GHz | No more than 2-4
Up to 20 users | Server Intel Xeon from 2.4 GHz | 4 to 6
Up to 30 users | Server Intel Xeon from 2.6 GHz | 6 to 8
Up to 50 users | Server Intel Xeon from 2.4 GHz, 2 pcs | From 4 per processor

Table 1 - The ratio of the number of users on the 1C server and the recommended number of CPU cores.


CPU frequency. In contrast to the number of cores, the frequency of the central processor affects the speed of processing a single piece of work at a time, which is the criterion most noticeable to 1C end users. It is precisely the parameter whose increase, for a single user, raises the speed at which the 1C server and the DBMS process requests and shortens the time the system needs to deliver the final result. In support of this, the well-known specialist Gilev, in one of his articles based on practical tests, drew an unambiguous conclusion: "the speed of 1C is influenced much more by the frequency of the central processor than by its other parameters, whether on the 1C client or on the 1C:Enterprise server". Such is the architecture of the 1C platform.

Cache, virtualization and Hyper-Threading. In the past, when multi-core processors were not yet common, Intel introduced a special CPU technology that simulates multiple cores, so-called Hyper-Threading. Once enabled, one physical processor (one physical core) is seen by the operating system as two separate processors (two logical cores). We recommend turning Hyper-Threading off for the 1C server: in our experience this technology does not speed up 1C.

When using virtual machines for the 1C:Enterprise server and the DBMS, keep in mind that the cores of virtual machines are "weaker" than real physical cores, even though they carry the same name. There are no exact official coefficients, but articles on Microsoft technical portals suggest counting 4-6 virtual machine processor cores per physical core.

A cache is scratchpad memory used by the processor to reduce the average access time to main memory. It is effectively an integral part of the processor, since it sits on the same die and belongs to its functional blocks. Everything is clear here: the larger the cache, the larger the "chunks" of information the processor can handle. As a rule, cache size grows with the processor model - the more expensive the model, the more cache it usually carries. However, we do not believe the cache size drastically affects the performance of the 1C server and the DBMS; it belongs rather to the realm of "fine tuning".

Processor type. Everyone knows that hardware is divided into server-grade and consumer-grade. Can an inexpensive desktop CPU be used in some cases as an alternative to a professional but expensive server CPU? It turns out that it can. Consider a table comparing the main parameters of two Intel central processors (see Table 2).

Parameter | Desktop Intel® Core™ i7-6700T (8M Cache, up to 3.60 GHz) | Server Intel® Xeon® E5-2680 v2 (25M Cache, 2.80 GHz)
Cache | 8 MB | 25 MB
Bus speed | 8 GT/s DMI3 | 8 GT/s QPI
Instruction set | 64-bit, SSE4.1/4.2, AVX 2.0 | 64-bit, AVX 2.0
Number of cores | 4 | 10
CPU base clock | 2.8 GHz | 2.8 GHz
Max. amount and type of RAM | 64 GB non-ECC | 768 GB ECC
Estimated cost | $354 | $1,280

Table 2 - Comparison of the main parameters of a desktop and a server CPU from Intel.


As we can see, the server processor has much higher values for the number of cores, the cache size and the supported amount of RAM - and, of course, a higher price. However, the server CPU hardly differs from the desktop one in supported processor instructions or in clock frequency. From this we can conclude that for small organizations it is quite acceptable to use a desktop central processor for the 1C:Enterprise server. The only caveats are that a desktop processor cannot be installed in a server motherboard socket and does not support server RAM with parity checking (ECC), and that using consumer components carries risks for the stability of the system as a whole.

Random Access Memory (RAM)

RAM type. RAM modules differ by purpose: for multi-user server systems or for personal devices - PCs, laptops, nettops, thin clients, etc. As with the CPU, the headline parameters of the modules are roughly equivalent - modern PC RAM hardly lags behind server RAM in the capacity of a single module, in clock frequency or in DDR generation. The differences between server RAM and "home" RAM lie in the use cases and purpose of the hardware platform, and this is also where its higher cost comes from:

  • Server RAM has ECC (Error Correction Code) parity - an encoding/decoding technique that allows the RAM module itself to correct errors during information processing.
  • A server motherboard has many more slots for RAM modules than an ordinary PC.
  • Server RAM contains registers (buffers) that buffer data (partially - Registered, fully - Fully Buffered), reducing the load on the memory controller under many simultaneous requests. Buffered FB-DIMM modules are incompatible with unbuffered ones.
  • Registered memory modules also improve memory scalability - the presence of registers makes it possible to install more modules per channel.

We can conclude that server RAM modules make it possible to install large amounts of memory in one system, while ECC parity checking and buffering allow the server operating system to work stably and quickly.

The amount of RAM. One of the key factors for high performance of the 1C server and the DBMS is a sufficient amount of RAM. Of course, the actual RAM requirements depend on many factors: the 1C configuration type, the number of 1C:Enterprise server processes, the size of the database, and so on. Nevertheless, an approximate dependence of the amount of RAM on the number of users can be derived (see Table 3).


RAM requirement for the 1C server and DBMS | Up to 10 users | Up to 20 users | Up to 30 users | Up to 50 users
1C:Enterprise server | 4-6 GB | 6-8 GB | 12-14 GB | 18-24 GB
MS SQL Server | 4-6 GB | 8-10 GB | 16-18 GB | 24-28 GB

Table 3 - Approximate ratio of the number of 1C server users to the recommended RAM for the 1C:Enterprise server and MS SQL Server processes.


Regarding the 1C:Enterprise server processes (rphost.exe): modern 1C platforms do not allow you to set the number of server processes manually. Instead, you specify parameters such as the number of infobases and the number of users per rphost.exe process, after which the system automatically determines the optimal number of 1C:Enterprise server processes. You can also configure a graceful release of RAM by an rphost.exe process when its memory usage exceeds a predefined threshold: the 1C server then spawns a new rphost.exe process that gradually takes over the 1C tasks, allowing the overgrown process to be unloaded.

Note also that the amount of RAM allocated to the SQL service is considered sufficient if the SQL data cache hit ratio is at least 90%. This metric is handy because you cannot simply look at the amount of RAM consumed by SQL Server: recent SQL Server releases consume RAM dynamically, grabbing the maximum available amount and releasing it only as other processes request memory.
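Below is a minimal sketch of how this metric can be checked from Python (assumptions: pyodbc, a login with VIEW SERVER STATE rights, and a hypothetical server name srv-1c); it reads the standard "Buffer cache hit ratio" counter from sys.dm_os_performance_counters:

```python
# A sketch, not an official 1C or Microsoft tool: checks the SQL Server buffer
# cache hit ratio against the ~90% rule of thumb mentioned above.
import pyodbc

QUERY = """
SELECT a.cntr_value * 100.0 / b.cntr_value AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters a
JOIN sys.dm_os_performance_counters b
  ON b.object_name = a.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.object_name LIKE '%Buffer Manager%'
"""

def check_cache_hit(conn_str: str, threshold: float = 90.0) -> None:
    with pyodbc.connect(conn_str) as conn:
        ratio = conn.cursor().execute(QUERY).fetchone()[0]
    verdict = "OK" if ratio >= threshold else "consider adding RAM for SQL Server"
    print(f"Buffer cache hit ratio: {ratio:.1f}% - {verdict}")

if __name__ == "__main__":
    # Hypothetical connection string - replace with your server and credentials.
    check_cache_hit("DRIVER={ODBC Driver 17 for SQL Server};SERVER=srv-1c;Trusted_Connection=yes;")
```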

RAM frequency. In short, this is the bandwidth of the channels through which data travels to the motherboard and from there to the processor. This parameter should match or exceed the frequency supported by the motherboard, otherwise the RAM channel risks becoming a bottleneck. Within a single DDR generation, raising or lowering the frequency does not drastically affect the performance of the 1C server and belongs more to the realm of "fine tuning".

RAM timings. This is the delay, or latency, of the RAM - the time it takes data to move between different parts of the RAM chip. Smaller values mean faster operation. However, the impact on the overall performance of the server system, and all the more so on the 1C:Enterprise server, is small. Usually only gamers and overclockers pay attention to these parameters, for whom every extra drop of performance matters.

Disk subsystem and hard drives (HDD)

Hard drive controllers. The main device for connecting and organizing hard drives in a hardware system is the drive controller. It comes in two types:

1. Built-in - the controller module is integrated into the system and the drive cage is connected directly to the motherboard. It is considered the more economical solution.

2. External - a separate printed circuit board (device) plugged into a motherboard slot. It is considered the more professional solution because it carries dedicated chips for performing and controlling operations with the hard drives. Recommended for important server systems such as the 1C:Enterprise server and the DBMS.

There is also a third type - devices that send and receive block data over iSCSI, Fibre Channel, InfiniBand or SAS channels. In that case, however, the disk subsystem is moved out to a separate storage system connected to the server via optical or copper cable. In this article we are analyzing the requirements for a standalone 1C server, so we will not consider this type.

Types and levels of RAID arrays. RAID is a data virtualization technology that combines multiple drives into a single logical unit for redundancy and performance. Consider the most popular RAID levels:

  • RAID 0 ("striping"). It has no redundancy and spreads information across all disks in the array in small blocks ("stripes"). This greatly improves performance but sacrifices reliability. We do not recommend this array type despite the performance gain.
  • RAID 1 ("mirroring"). It survives the failure of half of the hardware (in the typical case, one of the two hard drives), provides acceptable write speed and a gain in read speed thanks to parallelized queries. This type of array will comfortably "carry" a 1C + DBMS server of up to 25-30 users, especially with SAS 15K or SSD drives.
  • RAID 10. Mirrored pairs of disks are joined into a "chain", so the resulting volume can exceed the capacity of a single drive. In our opinion this is the most successful type of disk array, because it combines the reliability of RAID 1 with the speed of RAID 0. Combined with SAS 15K or SSD drives, it can serve 1C servers from 40-50 users.
  • RAID 5. Known for its economy. By sacrificing the capacity of just one disk in the array for redundancy, we get protection against the failure of any single hard drive in the system (its RAID 6 variant requires two extra drives for checksums but survives the failure of two drives). This array type is economical, reliable and has a quite tangible read speed. Unfortunately, its bottleneck is low write speed, which limits comfortable use to 1C server configurations of up to 15-20 users. It is also well suited for auxiliary purposes: storing file data, document management archives, and so on.

Types of hard drive interfaces. By connection type, hard drives are divided as follows:

  • HDD SATA, desktop. The cheapest option, designed for home PCs or network media centers. It is strongly discouraged to use such drives in 1C servers because of their low fault tolerance and stability: their components are simply not designed for 24/7 operation and fail quickly.
  • HDD SATA, server. This name usually refers to hard drives with a SATA interface and a spindle speed of 7,200 rpm. The "server" label means such drives have been validated for server systems and are designed for stable 24/7 operation. They are usually used in 1C servers to store large volumes of information that do not require high processing speed: archived 1C databases, exchange folders, uploads of office documents, and so on.
  • HDD SAS, server. The SAS interface (the modern successor to SCSI) differs from SATA in several ways: lower average response time, support for shared disk shelves, and higher exchange rates with the HDD controller - up to 6 Gb/s (compared with 3 Gb/s for SATA). But the main advantage is the existence of SAS models with a spindle speed of 15,000 rpm. It is this design feature that lets SAS drives deliver almost three times more IOPS than server SATA HDDs. Such SAS drives are small in capacity and are recommended for the main 1C databases with a constantly high workload.
  • SSD drives. These differ from the previous ones not in the connection interface but in their design: they are solid-state and have no moving parts - essentially large "flash drives". This allows SSDs to deliver an enormous number of I/O operations per second (from 10,000 operations even on the simplest models). The advantage has a downside: the higher price of SSDs and their limited lifespan, which depends on the write limit of the SSD cells. However, these drives become more affordable and durable every year. Since the cost of SSDs rises steeply with capacity, it is most sensible to use them for small but heavily loaded 1C databases that need fast access, as well as for TempDB temporary databases.

IOPS is the number of input/output operations per second - in effect, the number of blocks of information that can be read from or written to the media in one second. In its pure form this is the key measure of how fast a hard drive processes information, and it directly affects 1C server performance. Taking a standard 4 KB block for comparison, we can roughly distinguish the following IOPS figures (see Table 4).


HDD | IOPS | Interface
7,200 rpm SATA drives | ~75-100 IOPS | SATA 3 Gb/s
10,000 rpm SATA drives | ~125-150 IOPS | SATA 3 Gb/s
10,000 rpm SAS drives | ~140 IOPS | SAS
15,000 rpm SAS drives | ~175-210 IOPS | SAS
SSD drives | From 8,000 IOPS | SAS or SATA

Table 4 - IOPS indicators on various types of hard drives when working with a 4kb data block.


Of course, IOPS alone is of limited use for final calculations of the 1C server's disk subsystem requirements. The total performance of the disk subsystem depends on the RAID array type, the disk type and its interface speed, response time (latency), random access time, the ratio of read and write operations, and many other factors. Nevertheless, in our view this parameter is the key indicator of disk subsystem speed, and at the architecture design stage it helps determine which type of hard drive is generally best suited for particular needs (see a RAID calculator).

Practical test

What is the relationship between the number of 1C users and the number of IOPS? Our team ran a practical test (see Table 5) measuring the load a given number of 1C sessions places on the disk subsystem. Since 1C is a programmable environment and every company can have its own set of business processes in it, we needed to tie the test to a reference configuration. We chose the specialized TsUP 1C configuration, developed for testing and debugging. On top of it, our 1C programmers added a number of queries simulating the normal operation of a typical enterprise: accounting queries, postings, reporting and the posting of operational documents.


Iteration | Users | System disk, IOPS write | System disk, IOPS read | Database disk, IOPS write | Database disk, IOPS read
(values are averages)
1 | 12 | 9.1 | 0.1 | 13.1 | 1.5
2 | 20 | 7.9 | 0.1 | 21.8 | 0.4
3 | 32 | 5.2 | 0.006 | 36.1 | 5.2
4 | 40 | 7.7 | 0.013 | 27.52 | 1.3
5 | 52 | 7.7 | 0.006 | 32.04 | 0.94

Table 5 - Results of a practical test on the load on the disk subsystem.


The test results show that the lion's share of the disk load occurs when 1C writes to the DBMS database and to the operating system's system disk (which by default hosts the 1C:Enterprise server cache files).

In parallel, we took practical measurements of already running 1C UPP 8.2 databases over a test period of 5 working days. They show that, on average, a 1C + DBMS server consumes twice as many IOPS for writing as for reading. The difference between the synthetic test and the monitoring statistics of a real 1C server is explained both by periodic retrieval of data from the database during the working day and by regular full reads of the database during backups or DBMS replication.

Other hard drive characteristics worth paying attention to:

  • Physical size (form factor). Today, almost all drives for personal computers and servers come in 3.5-inch or 2.5-inch sizes.
  • Random access time - the time in which the HDD is guaranteed to perform a read/write operation on a specific area of the magnetic disk. As a rule, server drives show better figures. This is a fairly important parameter when building a disk array for the 1C DBMS server.
  • Spindle speed - the number of revolutions of the hard disk spindle per minute. Everything is simple here: the access time and the average transfer rate of the drive depend on the rotation speed of the spindle with its magnetic platters.
  • Hard disk buffer size - the buffer is temporary memory intended to smooth out the difference between the read/write speed of the disk and the transfer speed over the interface.
  • Reliability - defined as the mean time between failures (MTBF). As a rule, reliability depends directly on the manufacturer, the price and the operating environment of the drive. We consider reliability an important hard drive parameter that affects the quality of the 1C server.

The right choice: desktop or server hardware

Cheaper hardware components and the rapid growth in the capabilities of "home computers" lead to another fatal misconception: small businesses actively use workstations as a platform for shared work with 1C databases. They do so without realizing that beyond core frequency, memory size and the option of fitting budget SSDs into a regular PC, there are deeper, more systemic requirements for hardware operating in a commercial environment (see Table 6).

To solve the problem of organizing a 1C server, we offer rental of cloud 1C servers in Tier III data centers. The economic case for renting a server is covered in a separate article.


Criterion | Server | Personal computer
Sufficient computing power | V | V
Guaranteed operation in 24/7 mode | V | X
Reliability and stability of key hardware components | V | X
Remote power and console management (IPMI) | V | X
Budget cost of the hardware platform | X | V

Table 6 - Comparison of desktop and server hardware against the criteria required for high-quality operation of a 1C server.

Fault-tolerant operation of 1C

Naturally, one of the important requirements for the server side of 1C is stable operation and resistance to failures. Both Microsoft and 1C itself have put considerable effort into this, creating quite serious clustering technologies for their services (see Table 7).


Fault tolerance of SQL servers | Based on the concept of a single shared data store. The built-in SQL Server clustering technology combines two SQL servers into one cluster with a single virtual IP address and a single database, so that when the primary SQL server fails, queries are automatically redirected to the standby one. The second option is the more recent AlwaysOn - automatic regular replication of DBMS databases between the primary and standby SQL servers. The duplicate SQL server is physically located on different storage, which improves resistance to risks.
Fault tolerance of the 1C:Enterprise server | 1C:Enterprise servers are combined into an active-active software failover cluster with automatic failover and preservation of current sessions.

Table 7 - Fault tolerance of SQL and 1C servers.


However, each technology has both pros and cons. Besides the key advantages, you need to know certain peculiarities of 1C and SQL clustering so as not to end up with degraded service performance:

  • SQL clustering uses a virtual IP. This means that the 1C:Enterprise server and MS SQL will always communicate over the network interface, even if both services run in the same operating system - which slows 1C down compared with the classic architecture recommended by 1C itself, the use of Shared Memory. In principle, this obstacle can be worked around with, for example, MS SQL Log Shipping; but then switching to the standby SQL server is no longer automatic, and such a setup cannot be considered a full-fledged cluster.
  • A SQL cluster requires a large budget. With classic MS SQL clustering, a single database store is required, connected to both the primary and the standby SQL server. This role is usually played by expensive storage systems, which raises the budget by an order of magnitude. With the newer AlwaysOn, a single database store is not required - the technology works with the local drives of the primary and standby servers over the network - but you need SQL Server Enterprise, whose license costs about four times as much as regular SQL Server Standard.
  • The number of licenses. Even though the second SQL server does not process data and sits in reserve, licenses must be purchased for both servers - the primary and the standby. SQL Server Enterprise licenses for a distributed AlwaysOn Availability Groups cluster are particularly painful for the budget.
  • Do not use cheap consumer hardware for something as important as an enterprise-wide accounting system. In this case price directly determines the quality, stability and longevity of the platform.
  • When choosing a server platform, we recommend paying attention to dual power supplies, a remote management card (IPMI) and the manufacturer's brand. Of course, everyone chooses a solution within their budget - top brands are sometimes too expensive and not entirely appropriate - but you should not skimp on the manufacturer entirely, since this can lead to uncontrollable force majeure in working with 1C. We ourselves use Supermicro server platforms combined with Intel server CPUs.
  • There is an opinion, confirmed in practice, that 1C performance depends more on a higher CPU frequency than on the number of cores provided.
  • Do not skimp on the amount of RAM allocated to the 1C server and the SQL service. RAM is currently a fairly cheap resource, and a shortage of even 10-15 percent will cause a sharp drop in 1C performance, because the much slower swap file comes into play. On top of that, swapping adds extra load on the disk subsystem, which makes the situation even worse.
  • The EFSOL company offers a comprehensive service for selecting a 1C server, which includes 1C server design, purchase, configuration and maintenance.
  • An alternative to building your own 1C server is renting a server for 1C. Cloud technologies make it possible, at low monthly cost, to get a reliable, fault-tolerant service for comfortable work in 1C.



From the above it should already be clear why competent server design for 1C is so important: if the hardware is chosen incorrectly from the start and does not match the load on the system, there is a risk that the system will slow down or work intermittently, and that important data will be lost. On the other hand, building a server for 1C and buying all the hardware and software can cost a company a significant amount, so it makes sense to select equipment in a way that avoids unnecessary costs.

Server selection for 1C

When our specialists need to choose a configuration for a 1C server, the first things they ask are how many users will work with 1C in the company, what set of services is planned, and who will administer the 1C servers and how. We start from this information when building a 1C server.

Requirements for the 1C server

In the hardware structure of the 1C server, the important characteristics for us are the processor, RAM, disk subsystem and network interfaces.

It is necessary that they ensure stable and sufficiently productive operation of the following components:

  • the operating system;
  • the database server (most often MS SQL Server);
  • the 1C server part (not in every case - a small company with 2-10 users can work with 1C in file mode);
  • user sessions in Remote Desktop mode;
  • remote users working through the thin client or web client.

Choosing a processor for a 1C server

The optimal number of processor cores is usually calculated on the basis that you need to reserve 1-2 cores for the OS, 1-2 cores for the SQL database engine, one more for the application server, and roughly one core for every 8-10 simultaneous user sessions (so that users do not complain later that the 1C server is slow).
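As a rough illustration of this rule of thumb (our own sketch, not official 1C sizing guidance), the same arithmetic can be written as a few lines of Python; the per-role figures are the assumptions from the paragraph above:

```python
# Sketch: estimate the recommended number of physical cores for a combined
# OS + SQL + 1C application server host, per the rule of thumb above.
import math

def recommended_cores(concurrent_users: int, users_per_core: int = 9) -> int:
    os_cores = 2          # operating system
    sql_cores = 2         # SQL database engine
    app_cores = 1         # 1C:Enterprise application server
    session_cores = math.ceil(concurrent_users / users_per_core)
    return os_cores + sql_cores + app_cores + session_cores

if __name__ == "__main__":
    for users in (10, 20, 30, 50):
        print(f"{users} users -> ~{recommended_cores(users)} cores")
```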

Note that the speed of request processing depends not so much on the number of cores as on the processor's clock speed, while the number of cores affects stability with a large number of users and simultaneous tasks.

How much memory does a 1C server need

In addition to the above, if you need a 1C server for 100 or more users, we recommend deploying a cluster of at least two 1C physical servers.

We propose to calculate the amount of required RAM based on the following indicators:

  • 2 GB will be required for the operating system;
  • at least 2 GB for the MS SQL Server cache, and preferably 20-30% of the actual database size - this ensures comfortable work with it;
  • 1-4 GB for the 1C application server;
  • 100-250 MB per user terminal session, depending on the set of 1C server functions and the configuration used.

Here is an approximate calculation of the 1C 8.3 server parameters based on these figures:
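As one possible illustration (a sketch with assumed mid-range values from the bullets above, not a vendor formula), the estimate can be expressed as a short script:

```python
# Sketch: bottom-up RAM estimate for a combined 1C + MS SQL + terminal server.
# Assumptions: 2 GB for the OS, SQL cache at ~25% of the database (minimum 2 GB),
# 2 GB for the 1C application server, 200 MB per terminal session.
def estimate_ram_gb(users: int, db_size_gb: float,
                    mb_per_session: int = 200, app_server_gb: float = 2.0) -> float:
    os_gb = 2.0
    sql_cache_gb = max(2.0, 0.25 * db_size_gb)   # 20-30% of the actual database size
    sessions_gb = users * mb_per_session / 1024.0
    return os_gb + sql_cache_gb + app_server_gb + sessions_gb

if __name__ == "__main__":
    print(f"30 users, 20 GB database: ~{estimate_ram_gb(30, 20):.1f} GB RAM")
```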

It is better to buy RAM with a margin: it is one of the most important factors in 1C server performance and at the same time currently one of the cheapest components. If the 1C:Enterprise server runs short of memory, it will be very noticeable in operation, so when deciding which 1C server to choose, always make sure it has enough RAM.

1C server: disk subsystem hardware

When choosing a server for 1C, remember that while users work with it, many data read and write operations will be performed every second. The speed at which the disk subsystem can process this data is therefore one of the keys to 1C server performance.

When designing a 1C server, we recommend the following for the disk subsystem:

  • Whatever server you build for 1C, we never recommend using single disks - organize them into RAID arrays (RAID 10 for large databases or RAID 1 for small ones) where the database tables will reside.
  • Move index files to a separate SSD for faster access to them.
  • Place TempDB on 1-2 SSDs (RAID 1).
  • Place the OS and user data on RAID 1 of SSDs or HDDs.
  • Allocate a separate logical disk from the array, or a separate physical SSD, for log files.
  • If possible, use a hardware RAID controller - we have seen situations where a powerful and expensive server was slowed down by insufficient controller performance.

Server selection for 1C

In this article we have given some tips and approximate calculations on how to choose a server for 1C; we hope they will be useful to you.

In conclusion, one more thing: do not try to save money by using a desktop computer as a 1C server (as is often done in small companies) - consumer hardware is much less reliable and fault-tolerant than server hardware of similar performance. It is not worth risking your enterprise's accounting system. If buying the right hardware is beyond your budget, consider deploying 1C in the cloud.

If it is difficult for you to decide which server to choose for 1C:Enterprise 8.3 or how to build a 1C server because you have not faced this task before, you can always contact a system integrator so that experienced technical specialists help you design, buy, install and set up a suitable server for 1C.

To begin with, I propose to highlight several scenarios of work:

1.) Working with the file base through a shared resource (web server)

2.) Working with the file base in the terminal

3.) Working with server (MSSQL) database

Working with the file base through a shared resource (web server)


Everything is pretty simple here if we are talking about ordinary forms and 1-3 users. Then on the "server" (the machine where the database will reside) choose:

  • fast drives - pay attention to the spindle speed (take 7200 rpm). For example, we do not take WD's green series, we take black or red; also look at Seagate's Constellation series.
  • CPU - cores matter less than their frequency. 1C uses multiple cores rather poorly (if at all), so you will gain nothing from an 8-core processor; a 2-core processor with a higher frequency will do the job. For example, a Core i3-4360 - currently the highest frequency from Intel (4 GHz in turbo mode).
  • RAM - its size will not play a big role here. Considering how modern applications devour memory, put in 8 GB.
  • Network - you will not really benefit from a 1 Gb network either, but still, if 8-wire twisted pair is laid (you can check in the connectors), it makes sense to install a gigabit switch; file sharing will get faster at the same time.
    And the final touch in this scenario: there is no need to host the database on a separate machine - long-running operations will run much faster locally than over the network. Put this machine at the workstation from which, for example, month-end closing or infobase updates are planned.

Another point: if the database is on managed forms. If everything is done as described above, you will get sluggish performance. However, there is a way out:

  • An SSD* instead of a regular hard drive will save us. Take a 120 GB drive - even allowing for exchange-rate growth they are affordable. I recommend looking at the Intel 520/530 series and the Kingston V300; better yet, read reviews of the latest models, because this market develops rapidly and new products keep appearing.
    *Note: if you combine disks into a mirrored RAID, for example RAID 1, keep in mind that most SSDs need TRIM to clean up garbage (mainly fairly old models); the command may not be supported in RAID mode, and the drive will degrade in speed as it works. To avoid this problem there are at least two approaches: ideally, buy an enterprise-level SSD, for example an Intel DC S3500. If that seems expensive, you can use a bundle: a motherboard with a chipset
  • CPU - same as in the previous point: the higher the frequency, the better.
  • RAM - its size will not play a big role. Considering how modern applications devour memory, put in 8 GB.

If one user works with the database locally, this is enough for comfortable work, but network access through a shared resource will still be slow. There is a way out here too: working through a web server. You can find plenty of articles online describing how to organize 1C work this way, so I will not dwell on it here. The only thing I will share is my observation that it is preferable to set users up not through a web browser but through the thin client (when adding a new database to the infobase list, choose "on the web server" on the placement page). In my experience this is faster than the browser. Besides, when working through a browser there are interface glitches (shifted forms and the like) that do not appear in the thin client.

So, using this recipe (SSD, a high-frequency processor, a web server, the thin client), you can dispel the myth that "if there is more than 1 user (by some accounts, more than 0 :)) you need a client-server database"*.

*With the proviso, of course, that this is not UPP and not a database larger than ~4 GB, and the number of users does not exceed 4 (these are the largest database sizes and user counts I have seen; perhaps someone has seen more people working with a file database through a web server? Write in the comments).

Working with the file base in the terminal

Let's move on to the next option: a terminal server and a file database. Here everything is similar to scenario 1, except for the processor:

  • An SSD drive instead of a regular hard drive.*
    *Note: be sure to assemble the disks into a mirrored RAID, for example RAID 1. Keep in mind that most SSDs need TRIM to clean up garbage (mainly fairly old models); in RAID mode the command may not be supported, and the drive will degrade in speed as it works. To avoid this problem there are at least two approaches: ideally, buy an enterprise-level SSD, for example an Intel DC S3500. If that seems expensive, you can use a consumer-class SSD, but then make sure its write endurance is sufficient for your scenario.
  • CPU - here it makes sense to take a Core i5 instead of an i3, because 1C will run on the terminal server and the extra 2 cores will not hurt; but do not forget about frequency.
  • RAM - admins have a standing expression: there is never too much memory :). In my practice, 7 people working in BP 3.0 occupy 8-12 GB on the terminal server (it depends on how many documents each user has open). For ordinary forms, divide the memory figure by 2 :). A rough calculation: 256 MB for the terminal session itself + 1.5 GB for 1C.

Working with server (MSSQL) database


This scenario is the most complex and probably deserves a separate article. Here I propose to cover only the basic principles that affect performance:

  • Placement of the SQL server and the 1C server: on different machines or on one. There is this point: if they are on the same machine, they communicate via the Shared Memory protocol, and in this case we get a performance bonus that we do not get when they are on different machines.
  • CPU. Here both a high clock speed and multiple cores are useful, because we have the SQL Server process (if it is on the same machine) and several 1C rphost server processes loading the processor cores. Separately, I want to mention dual-processor systems (i.e. motherboards with two CPU sockets) - even if you take one with an empty socket "in reserve, to buy a processor later if it is suddenly needed". I have seen a large number of dual-socket servers that lived out their lives with the second socket empty. Although, if the company is paying... why deny yourself the pleasure :)
  • RAM. SQL Server* actively uses RAM in its work; if there is not enough, it will go to the disks, which even in the case of SSDs are slower than RAM. So do not skimp on memory here: budget as much as possible (without forgetting common sense :)), and leave free slots on the motherboard so you can always add another module.
    *Note: do not forget to limit the maximum RAM used by SQL Server so that enough is left for the OS and terminal sessions, and also increase the autogrowth increments for tempdb and the SQL database (the default increment is 1 MB, which is far too small - set 200 MB for the database and 50 MB for the log); see the sketch after this list.
  • Disk subsystem. You might think that if there is more RAM than the size of the database, it will all sit in memory and everything will fly. It might well have been so... until the first write operation :), which goes to the disks - and this is where hard drives will let you down :). Use SSDs, and here do not skimp on desktop-class SSDs any more - get proper enterprise-level drives. An Intel DC S3700 200 GB has an endurance of 3.7 petabytes (10 full overwrites of the drive per day for 5 years) and can be found for about 24,000 rubles apiece; a second one for RAID 1 makes it 48,000. The licenses will cost far more.
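As a sketch of the note marked above (assumptions: Python + pyodbc, sysadmin rights, and hypothetical database and logical file names buh3 / buh3_log - substitute your own), the memory cap and autogrowth increments could be set like this:

```python
# Sketch: cap SQL Server memory and raise autogrowth increments, as suggested above.
# The specific values and file names are examples, not recommendations for every setup.
import pyodbc

STATEMENTS = [
    # Cap SQL Server RAM so the OS and terminal sessions keep enough memory (value is an example).
    "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;",
    "EXEC sp_configure 'max server memory (MB)', 12288; RECONFIGURE;",
    # Raise autogrowth from the tiny default to the increments suggested in the note.
    "ALTER DATABASE [buh3] MODIFY FILE (NAME = N'buh3',     FILEGROWTH = 200MB);",
    "ALTER DATABASE [buh3] MODIFY FILE (NAME = N'buh3_log', FILEGROWTH = 50MB);",
    "ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev',  FILEGROWTH = 200MB);",
]

def apply_settings(conn_str: str) -> None:
    with pyodbc.connect(conn_str, autocommit=True) as conn:
        cur = conn.cursor()
        for stmt in STATEMENTS:
            cur.execute(stmt)

if __name__ == "__main__":
    apply_settings("DRIVER={ODBC Driver 17 for SQL Server};SERVER=srv-1c;Trusted_Connection=yes;")
```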

That seems to be all. Questions/complaints/suggestions - welcome in the comments ;)

1C:Enterprise 8 can be a resource-intensive application even with a small number of users. When choosing a server for 1C, any owner would like to avoid "birth defects" - potential bottlenecks built into it from the start. On the other hand, few people today buy servers with excess capacity, "for growth". It is good if the load profile can be captured in advance - then it is easier to design a server for the company's specific mix of applications.

For definiteness, let us consider the 1C:Enterprise 8.2 platform in its popular standard configurations: "Accounting", "Trade and Warehouse", "Payroll and HR Management", "Trade Enterprise Management" and, in part, "Manufacturing Enterprise Management". We assume that enterprises with 10 or more employees working in 1C use the "1C:Enterprise 8.2 Application Server". We also take into account working in Remote Desktop mode, with up to 100-150 simultaneous database users. The recommendations also apply to "heavier" 1C databases, but severe cases always require an individual approach.

Processors and RAM

If the company is very small (2-7 users in the system), the database is small (up to 1 GB), and 1C:Enterprise 8.2 runs in file mode on the user's computer, then we get a classic file server. In terms of CPU load, even an Intel Core i3, let alone an Intel Xeon E3-12xx, copes with such a task. The required amount of RAM is calculated simply: 2 GB for the operating system and 2 GB for the system file cache.

If the company has 5-25 1C users and the database is up to 4 GB, a 4-core Intel Xeon E3-12xx or AMD Opteron 4xxx should be enough for the 1C:Enterprise 8.2 application. Besides 2 GB of RAM for the OS, allocate 1-4 GB for the 1C:Enterprise 8.2 Application Server and about the same for MS SQL Server as a cache - 8-12 GB of RAM in total. For small databases it is desirable to cache at least 30% of the database in RAM, and ideally all 100%.

A well-known (though not particularly advertised) fact: the 1C:Enterprise 8.2 Application Server very much dislikes being paged out by the operating system into the swap file on the hard drive, and tends to occasionally lose responsiveness. Therefore, the server running the Application Server should always have a reserve of free RAM - especially since it is inexpensive today.

In larger companies, 1C users usually work through remote access to the application (Remote Desktop) - that is, in terminal mode. As a rule, with 10-100 1C users and a database of 1 GB or more, the 1C:Enterprise 8.2 Application Server and the 1C:Enterprise 8.2 user application run on the same server.

To size the processor resources, assume that one physical core can efficiently handle no more than 8 user threads - this follows from the internal architecture of the processors. As practice shows, for 1C + Remote Desktop tasks you should not take server processors from the lower lines, with low core frequencies and a cut-down architecture. If there are few users (up to 15-20), one high-frequency Intel Xeon E3-12xx processor will suffice. Of it, at least one physical core (2 threads) will go to SQL Server, one more (2 threads) to the 1C:Enterprise 8.2 Application Server, and the remaining 2 physical cores (4 threads) to the OS and terminal users. With more than 20 1C users or a database over 4 GB, it is time to move to dual-processor systems on the Intel Xeon E5-26xx or AMD Opteron 62xx.

The required amount of RAM is calculated relatively simply: 2 GB must go to the OS, 2 GB or more to MS SQL Server as a cache (at least 30% of the database), 1-4 GB to the 1C:Enterprise 8.2 Application Server, and the rest of the server's memory should be enough for terminal sessions. Depending on the configuration, one terminal user consumes 100-120 MB in "Accounting" and "Trade and Warehouse", 120-160 MB in "Payroll and HR Management" and "Trade Enterprise Management", and 180-240 MB in "Manufacturing Enterprise Management". If the user additionally runs MS Word, MS Excel or MS Outlook on the server, allocate another 100 MB per application. As a rule, the minimum for a terminal server is 12 GB of RAM.

For example, for a 1C server carrying the whole software stack, 50 terminal users in the "Trade Enterprise Management" configuration and an 8 GB database, the computing power of two Intel Xeon E5-2650 processors (8 cores, 16 threads, 2.0 GHz) will be optimal. RAM: at least 2 (OS) + 4 (SQL) + 4 (1C server) + 8 (160 MB per "Trade Enterprise Management" session × 50 users) = 18 GB, and preferably 24-32 GB (6-8 DIMMs of 4 GB each).
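For what it is worth, the same estimate can be scripted; the per-session figures below are assumed mid-range values taken from the ranges quoted above, so treat this as an illustration rather than a sizing tool:

```python
# Sketch: reproduce the terminal-server memory estimate from the paragraph above.
# Per-session figures are assumed midpoints of the quoted ranges; add ~100 MB per
# extra office application a user runs on the server.
PER_SESSION_MB = {
    "Accounting": 110, "Trade and Warehouse": 110,
    "Payroll and HR Management": 140, "Trade Enterprise Management": 160,
    "Manufacturing Enterprise Management": 210,
}

def terminal_server_ram_gb(users: int, config: str,
                           sql_cache_gb: float = 4.0, app_server_gb: float = 4.0) -> float:
    os_gb = 2.0
    sessions_gb = users * PER_SESSION_MB[config] / 1024.0
    return os_gb + sql_cache_gb + app_server_gb + sessions_gb

if __name__ == "__main__":
    # The example from the text: 50 users, "Trade Enterprise Management", 8 GB database.
    print(f"~{terminal_server_ram_gb(50, 'Trade Enterprise Management'):.0f} GB RAM")
```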

Disk subsystem

Most complaints about the slow operation of 1C:Enterprise 8 servers stem from a misunderstanding of what types of I/O operations are performed on them, on what data, and with what intensity. Often it is the disk subsystem that is the key to sufficient overall server performance - for loaded databases the biggest problem is table locking when many users work simultaneously or during bulk loads, unloads and postings. Monitoring and optimizing the server's disk subsystem therefore deserves separate attention.

1C works with five data streams on the disk subsystem:

  • database tables;
  • index files;
  • tempDB temporary files;
  • the SQL log file;
  • the 1C user application log file.

The data structure in 1C is object-oriented, with many objects and relationships between them. For working with data tables, what matters most is the number of read and write operations the disk subsystem can perform per unit of time (Input/Output Operations Per Second, IOPS); its ability to deliver a high streaming data rate (in MB/s) is much less important. A very modest 200-300 MB database with 3-5 users can generate up to 400-600 IOPS at peaks. A database of 400-800 MB for 10-15 users can produce 1,500-2,500 IOPS, 40-50 users on a 2-4 GB database generate 5,000-7,500 IOPS, and databases for 80-100 users easily reach 12,000-18,000 IOPS.

Of course, the average load on the disk subsystem may be only 10-15% of the peak. In reality, though, it is precisely the performance during peak loads that matters: automatic data loads from other systems, data exchange in a distributed system, or re-closing a period.

Modern drives cope with such loads in random read/write operations (Random Read/Write) roughly as follows:

Drive | Random Read/Write IOPS
Intel 910 400 GB | 2,400 - 8,600 IOPS

It is clearly seen that:

  • the bottleneck for both HDDs and SSDs is writing;
  • traditional HDDs are no competition for SSDs in read IOPS even in theory - the difference exceeds two orders of magnitude;
  • even a far-from-new desktop SSD is 3-40 times faster (depending on configuration) than any HDD in write IOPS, and a server SSD is 12-40 times faster than an HDD;
  • the maximum IOPS performance is delivered by PCIe SSDs of the Intel 910 or LSI WarpDrive class.

Single disks are not used in database servers - only RAID arrays. To calculate the real performance of the disk subsystem, you also have to take into account the write "penalty" in IOPS incurred by a RAID disk group: roughly, RAID 0 has a penalty of 1, RAID 1 and RAID 10 of 2, RAID 5 of 4, and RAID 6 of 6.

For example, if you assemble 6 disks into RAID 10, each 1 IOPS of data written costs 2 IOPS on the physical disks, and in RAID 6 it costs 6 IOPS. Thus, to calculate the write capacity of a disk group, first add up the IOPS of all disks in the RAID group and then divide by the penalty.

Example 1: 2 SATA 7200 HDDs in RAID 1 will provide for writes: (100 IOPS × 2) / 2 = 100 IOPS.

Example 2: 4 SATA 7200 drives in RAID 5 will provide for writes: (100 IOPS × 4) / 4 = 100 IOPS.

Example 3: 4 SATA 7200 drives in RAID 10 will provide for writes: (100 IOPS × 4) / 2 = 200 IOPS.

Examples 2 and 3 show why RAID 10 is preferred for storing databases that have a typical 68/32 read/write distribution.
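A small sketch of this arithmetic (using the standard write penalties listed above and per-disk IOPS figures of the kind shown in Table 4 as assumptions) makes it easy to compare options:

```python
# Sketch: effective IOPS of a RAID group for a given read/write mix,
# using the standard write penalties (RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6).
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def raid_effective_iops(disks: int, iops_per_disk: float, level: str,
                        read_share: float = 0.68) -> float:
    raw = disks * iops_per_disk                     # total raw IOPS of all members
    penalty = WRITE_PENALTY[level]
    write_share = 1.0 - read_share
    # Each host write costs `penalty` disk operations; each read costs one.
    return raw / (read_share + write_share * penalty)

if __name__ == "__main__":
    # Example 3 from the text, as a pure-write load: 4 x SATA 7200 (~100 IOPS each) in RAID 10.
    print(f"{raid_effective_iops(4, 100, 'RAID10', read_share=0.0):.0f} IOPS (write only)")
    # The same group under the typical 68/32 read/write mix mentioned above.
    print(f"{raid_effective_iops(4, 100, 'RAID10'):.0f} IOPS (68/32 mix)")
```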

These figures make it clear why the performance of the typical "gentleman's set" of 2 SATA 7200 HDDs in RAID 1 is not enough for a server: at peak loads the disk queue grows and users wait for a response from the system, sometimes for hours.

How can the write performance of the disk subsystem be increased? Increase the number of disks in the RAID group, move to disks with a higher rotation speed, or choose a RAID level with a lower write penalty. Caching by a RAID controller with Write Back mode enabled helps a lot: data is written not directly to the disks (as in Write Through mode) but to the controller cache, and only then, in batches and in ordered form, to the disks. Depending on the workload, write performance can rise by 30-100%.

For lightly loaded or relatively small databases (up to 20 GB), an inexpensive way to "extract IOPS" is a hybrid SSD/HDD RAID. A branch database for 3-15 users in a distributed structure - a chain of cafes or service stations, say - does not need more than that.

For large databases (200 GB and up) with a long tail of historical data, or for serving several large databases, SSD caching (LSI CacheCade 2.0 or Adaptec MaxCache 3.0) can be effective. In our experience of operating such systems, in 1C workloads specifically they can speed up disk operations by 20-50% relatively cheaply and without major changes to the storage infrastructure.

The champion in IOPS is, predictably, RAID arrays of server SSDs - both traditional ones behind a SAS RAID controller and PCIe SSDs. Their adoption is held back by two things: technology (the performance limits of RAID controllers, or the need to radically restructure the storage layout) and price.

Index files and TempDB deserve a separate mention. Index files are updated very rarely (usually once a day) but are read very, very often (high IOPS) - such data simply begs to be kept on an SSD, with its read speeds. TempDB, used for temporary data, is usually small (1-4-12 GB) but very demanding on write speed. What index and temporary files have in common is that losing them does not mean losing real data, so they can be placed on a separate SSD (better still, on two separate volumes) - even on the motherboard's onboard SATA controller. From the standpoint of reliability and performance, TempDB is best given a mirror (RAID 1) of SSDs, possibly on the onboard controller, but with all write caches disabled. Desktop SSDs can handle this role too - such as the Intel 520 series, whose hardware data compression suits TempDB writes well. Moving these tasks off the shared storage onto a dedicated high-speed subsystem benefits the performance of the whole system, especially at peak loads.
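If it helps, here is a minimal sketch of relocating TempDB to such a dedicated volume (assumptions: pyodbc, sysadmin rights, the default logical file names tempdev/templog, and a hypothetical T: volume); SQL Server must be restarted for the new paths to take effect:

```python
# Sketch: point the tempdb data and log files at a dedicated SSD volume.
# Paths and drive letter are examples; the change applies after a SQL Server restart.
import pyodbc

MOVES = [
    "ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', FILENAME = N'T:\\tempdb\\tempdb.mdf');",
    "ALTER DATABASE tempdb MODIFY FILE (NAME = N'templog', FILENAME = N'T:\\tempdb\\templog.ldf');",
]

def relocate_tempdb(conn_str: str) -> None:
    with pyodbc.connect(conn_str, autocommit=True) as conn:
        for stmt in MOVES:
            conn.cursor().execute(stmt)

if __name__ == "__main__":
    relocate_tempdb("DRIVER={ODBC Driver 17 for SQL Server};SERVER=srv-1c;Trusted_Connection=yes;")
```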

In cases where the fastest possible administrator response to failures can be guaranteed, and where there are heavy computational tasks (warehouse or transport logistics, production in UPP, bulk exchanges in a distributed infobase), TempDB is moved to a RAM drive. This sometimes wins up to 4-12% of overall system performance. The only inconvenience arises when the server is rebooted: if the RAM drive does not start automatically, an administrator has to start it manually, otherwise the whole system stops.

Another important component is the log files. They have a feature unpleasant for any disk subsystem: they generate an almost constant stream of small write requests. This is imperceptible at average loads but noticeably degrades 1C server performance at peaks. It makes sense to move the log file (in particular the SQL log) to a separate physical volume, which does not need high IOPS and will be written almost sequentially. For peace of mind you can build a mirror of inexpensive, high-capacity SATA/NL SAS drives (for Full logging) or of inexpensive desktop SSDs of the same Intel 520 series (Simple logging, or Full logging with daily backup and truncation).

Overall, the arrival of SSDs in servers has opened up new opportunities for raising the performance of mainstream servers through tiered data storage and sensible disk I/O layout.

The disk subsystem of the "ideal server for 1C" looks like this:

1. Database tables reside on RAID 10 (or RAID 1 for small databases) of reliable server SSDs behind a mandatory hardware RAID controller. For the highest IOPS requirements, consider a PCIe SSD. For large databases, SSD caching of HDD arrays is effective. If the 1C configuration and data structure in use are not too demanding on IOPS and the number of users is small, a traditional array of 15K rpm SAS HDDs will suffice.

2. Index files are moved to a fast and inexpensive single SSD; TempDB goes to 1-2 SSDs (RAID 1) or to a RAM drive.

3. The SQL log file goes to a dedicated volume (a single physical disk or RAID 1) on SATA/NL SAS HDDs or an inexpensive SSD, or to a separate logical disk on the RAID array.

4. The operating system and user files/folders are stored on RAID 1 of HDDs or SSDs.

If the IT infrastructure is virtualized, it is highly desirable to install SQL Server not in a virtual machine but directly on a physical server, on bare metal. The price of the issue is 15 to 35% of disk subsystem performance (depending on the hardware, drivers, virtualization tools and volume connection methods). If SQL Server does run in a virtualized environment, the volumes with the database tables, index files and TempDB must be attached to the VM in exclusive mode via direct access (pass-through).

Network interfaces

When building 1C:Enterprise 8 systems for small and medium-sized enterprises (up to 100-150 simultaneously active users), losses on network operations over the Ethernet interface should be minimized. Ideally, a single physical server should host SQL Server, the 1C:Enterprise 8 x64 Application Server, and the users' Remote Desktop sessions. Debatable from the standpoint of fault tolerance, this recommendation squeezes the most out of the hardware and software, and through virtualization also provides a certain level of safety and "environment repeatability" on other equipment.

Why exclude Ethernet from the chain SQL Server -> 1C:Enterprise 8 application server -> 1C:Enterprise 8 user session? The Ethernet interface, which packs data into relatively small blocks for transmission, always adds delay: both for packing and unpacking the traffic and for the transmission itself (latency). Along this whole chain, 1C:Enterprise 8 moves rather large data sets for processing and display, in some situations in both directions. When data is passed directly from one process to another within the server's RAM (on the same server without virtualization), or through a virtual network interface (within one physical server, with good server network adapters that move RAM blocks between VMs), the delays are far lower. A modern dual-processor server with plenty of RAM and an SSD disk subsystem can comfortably serve a 1C database for 100-150 active users.
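
The extra cost of an additional transport hop can be illustrated with a small experiment: passing the same data block inside one process versus sending it round-trip through a TCP socket. The sketch below uses only the loopback interface, so it understates the latency of a real Ethernet link between hosts, but the overhead of packing data into a socket stream is already visible; payload size and round counts are illustrative.

```python
# Sketch: compare handing a data block to code in the same process with sending
# it round-trip over a loopback TCP socket. Loopback is faster than real
# Ethernet, so this understates the gap; the numbers are purely illustrative.
import socket
import threading
import time

PAYLOAD = b"x" * 64 * 1024   # a 64 KB block, a stand-in for a result-set chunk
ROUNDS = 500

def echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(len(PAYLOAD))
            if not data:
                break
            conn.sendall(data)           # echo everything back

# In-process "transfer": just copy the block inside the same address space.
start = time.perf_counter()
for _ in range(ROUNDS):
    result = bytes(PAYLOAD)
in_process = time.perf_counter() - start

# Loopback TCP transfer of the same block, round-trip.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
start = time.perf_counter()
for _ in range(ROUNDS):
    cli.sendall(PAYLOAD)
    received = b""
    while len(received) < len(PAYLOAD):
        received += cli.recv(len(PAYLOAD) - len(received))
over_tcp = time.perf_counter() - start
cli.close()

print(f"in-process copy : {in_process:.3f} s")
print(f"loopback TCP RTT: {over_tcp:.3f} s")
```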

If several physical hosts cannot be avoided for heavily loaded databases, it is desirable to connect all the servers with 10Gb Ethernet, or at least with 2-4 aggregated 1Gb Ethernet links on adapters with hardware TCP/IP offload and hardware virtualization support.

Budget solutions suffer most of all from performance losses on Ethernet ports. It is no secret that the 1Gb network adapters soldered onto most server motherboards are not designed for heavy network traffic. Even if the board carries 2 or 3 GbE ports, they are usually built on desktop-class chips. Sufficient for management tasks, they create extra overhead when servicing network exchanges, especially in a virtualized environment: the entire data transfer is handled by the CPU, RAM and internal buses, the chip provides no acceleration of IP traffic, and every Ethernet packet received or transmitted raises a separate CPU interrupt. In a virtualized environment, losses on such a network interface can reach 25-30%. The most unpleasant part is that monitoring tools may not show the network interface as overloaded: the CPU does the work on its behalf, and when it is not doing that work it simply sits idle waiting for a response from the network card. In virtualized environments it is desirable to exclude ports based on desktop chips from the data path, leaving them for server management tasks, and under heavy network traffic to add a discrete network card built on a server-grade chip.

Fault tolerance or acceptable downtime?

Discussions of server performance are almost always accompanied by arguments about server reliability. Fault tolerance always costs extra, especially when continuous production processes have to be supported. Without belittling the role and place of 1C, it is fair to say that most of its users resolve the "performance versus reliability" dilemma in different planes: they fight for the former by optimizing the hardware, and for the latter by organizing processes and procedures. Where the applications are only moderately critical, the emphasis shifts from protecting an individual server to minimizing downtime of the infrastructure as a whole.

Of course, an enterprise with a relatively large number of simultaneously connected users (25-150) that hosts all applications on a single server must use uninterruptible power supplies, redundant power supplies in the servers themselves, hot-swap drive bays and RAID arrays with hot-spare disks. But no hardware can replace a planned backup of the data itself. With a daily (more precisely, nightly) backup and an up-to-date Full recovery model SQL transaction log, the 1C database can be fully restored within a relatively short time.

For small and medium-sized enterprises, acceptable downtime of the central 1C system is on the order of 1-2 incidents per month lasting 1-4 hours. That is actually a generous margin, provided you have prepared for recovery in advance. A necessary condition for a quick restart is keeping images of all virtual and physical servers as VMs on a separate storage system or volume, so that the infrastructure itself can be brought back up on a standby server. Equally mandatory are a daily backup (plus weekly and end-of-period copies) to another physical device and a Full recovery model SQL log for cases where losing the data "since the start of the working day" is critical and hard to re-enter manually. With replacement equipment on hand, overall operation can usually be restored within 1-2 hours, albeit at reduced performance. Where true 24×7 continuity is required, the priorities shift to choosing the right architecture, equipment with a minimum of single points of failure and full-fledged clustering technologies - but that is another story entirely.
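
Since the whole recovery plan depends on the backups actually existing, a simple scheduled check is worth having. Below is a minimal sketch that verifies the newest backup file in each backup location is recent enough; the folder paths, the *.bak pattern and the 26-hour threshold are assumptions to adjust to your own schedule.

```python
# Sketch: check that the latest backup in each configured folder is fresh
# enough and present on a second physical device. Paths, the *.bak pattern
# and the 26-hour threshold are assumptions for a nightly backup schedule.
from datetime import datetime, timedelta
from pathlib import Path

BACKUP_DIRS = [Path(r"D:\Backups\1C"), Path(r"\\backup-host\1c")]  # assumed locations
MAX_AGE = timedelta(hours=26)   # daily backup plus a small margin

def newest_backup(folder: Path):
    files = sorted(folder.glob("*.bak"), key=lambda p: p.stat().st_mtime)
    return files[-1] if files else None

now = datetime.now()
for folder in BACKUP_DIRS:
    latest = newest_backup(folder)
    if latest is None:
        print(f"ALERT: no backups found in {folder}")
        continue
    age = now - datetime.fromtimestamp(latest.stat().st_mtime)
    status = "OK" if age <= MAX_AGE else "ALERT: backup is stale"
    print(f"{folder}: {latest.name}, age {age} -> {status}")
```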

Original article: http://ko.com.ua/proektirovanie_servera_pod_1s_66779

With the permission of the editor of the journal "Computer Review"

To ensure that programs running on the Enterprise 8 platform work effectively, it is necessary not only to buy 1C but also to choose the right server solution.

Today 1C 8 is deployed in several ways. The most popular option is a dedicated file server: a dedicated PC or a small server with a server OS installed and shared access configured to the folder with the 1C:Enterprise infobase. This option is simple and affordable, but it cannot provide high performance or reliability.

If an organization needs reliability and high performance, it usually deploys 1C 8 on top of an industrial DBMS, Microsoft SQL Server. In this case, Windows Server 2003 is used as the operating system, and the hardware must meet higher requirements.

This solution is more expensive, but it has its advantages: high performance and fault tolerance. It also allows efficient backups, provides a high level of data protection and removes the need for mandatory reindexing after failures.

For the system to work properly, it must be deployed by a qualified 1C specialist: an inexperienced one can negate all the advantages, since a large database on a poorly configured server significantly reduces the performance of the 1C product.

It is also worth noting that this deployment option requires client access licenses to connect to Windows Server 2003/2008. Under high load on the 1C infobase, the performance of Windows SBS 2003/2008 may prove insufficient; in that case a separate server running Microsoft SQL Server 2005/2008 can be allocated.

Another approach often used when deploying 1C is a terminal server. The Terminal Services role built into Windows Server 2003 provides a large performance reserve, safe and full-featured work for users, and a high level of protection.

Software used to deploy 1C:Enterprise programs

As a rule, the following software is used to run programs on the 1C:Enterprise platform: Windows 7, Vista, XP Professional, Windows Server 2003-2008 and Windows Small Business Server.

Windows XP Professional was the baseline OS for a long time and is still installed in many organizations. Windows 7 is a newer desktop operating system with better performance and tighter integration of networking technologies. Computers running Windows Vista, XP Professional or 7 can serve as entry-level servers: these operating systems support up to 10 incoming connections, but their speed and security leave much to be desired.

Windows Server 2003 and 2008 are the most popular server operating systems for deploying 1C:Enterprise solutions; they provide reliability and ease of maintenance.

Windows Small Business Server 2008 is a bundle of server products and additional components. It suits small companies that do not plan serious load on the 1C:Enterprise infobase; its main advantage is the low price.

So, before buying 1C, consider what load the database will be under and choose the type of server accordingly.

The release was prepared by the licensed software store 1cmarket.ru

