6.1. General information
When working with 1C:Enterprise, you may need to perform system administration duties, such as:
- Maintaining the user list
- Assigning user rights
- Creating backups
- Generating a technological log for error analysis
Designer includes a variety of administrative tools designed to perform the above tasks.
1C:Enterprise can maintain a list of users that are allowed to log on. This list is used for user authorization upon logon. Note that the 1C:Enterprise user list is not part of the configuration: each company that uses 1C:Enterprise needs to create its list manually.
A logon password can be set for any user. The password is used to verify the user's right to operate 1C:Enterprise.
Creating backups is another crucial administrative task. To ensure data recovery with minimum losses in case of database damage, backups must be performed on a regular basis. The greater the amount of data changed daily, the more frequently backups are required.
This chapter covers the 1C:Enterprise administration activities that can be performed in Designer.
6.2.1. General information
To display the list of users, click Administration ‑ Users.
The window with the user list includes a toolbar and a field with two columns:
- The Name column contains the list of users allowed to log on to 1C:Enterprise 8.
- The Full name column contains full names of the users.
Users with access passwords are marked with the lock icons (for example, Seller in fig. 41).
Users with no role or authentication defined are marked with question mark icons (for example, Sales manager in fig. 41).
In the Actions menu, you can add or delete users, configure the list appearance (filters, column content and order, sorting), or export the list to a spreadsheet or text document.
To add a new user, click Actions ‑ Add in the User list window. The window with user parameters is displayed.
On the Main tab, the name and full name of the user are displayed.
TIP. It is recommended to give meaningful names to users, based on their last names, job positions, professional functions, and so on. Later, these names will be used by the employees to log on to 1C:Enterprise.
The authentication method must be set for the user.
NOTE. Client applications for Linux and macOS do not support OS authentication. The thick client does not support OpenID authentication under any operating system.
Each Authentication… check box (1C:Enterprise authentication, OS authentication, OpenID authentication) indicates whether the corresponding authentication method is enabled. These check boxes do not affect the order of authentication attempts. OpenID authentication here means any of the following authentication types supported by 1C:Enterprise: OpenID itself, OpenID Connect, and Unified System for Identification and Authentication (USIA). When assigning authentication types, remember:
- When no Authentication… check boxes are selected, the user will not be able to log on to the application.
- To attempt authentication using the OpenID protocol, the infobase publication on a web server must be configured in a specific way.
- A user attempting OS or OpenID authentication cannot log on to the application if the check box that allows that authentication type is cleared.
- To disable OS or OpenID authentication, you can also use the client application startup command-line parameters.
IMPORTANT! There must be at least one user in the system that has administrative rights and has 1C:Enterprise authentication enabled.
If the User cannot change password check box is selected, the user cannot change their password (this option only applies to 1C:Enterprise authentication).
If the Show in list check box is selected, the user is displayed in the user selection list when connecting to the 1C:Enterprise infobase. If 1C:Enterprise authentication is disabled for the user, the Show in list check box becomes unavailable and the user is not displayed in the user selection list when connecting to the infobase.
TIP. If the infobase is published on a web server accessible from the Internet or the infobase has a large number of users, it is recommended that you clear the Show in list check box for all users. This recommendation is particularly important for users that have infobase administration rights.
The Unsafe operation protection check box specifies whether protection from unsafe operations is enabled for this user.
On the Other tab, the available roles and the language are displayed. If multiple roles are defined in the configuration, you can assign several roles to the user. You can also select the 1C:Enterprise run mode for the user. With the Auto value, the run mode specified in the Main run mode configuration property is used. When a user requires a specific run mode, you can assign it here. For example, when a user works in managed application mode, set the Run mode field to Managed application.
You are not required to fill in all fields of the user properties window at once ‑ this can be done later.
If the system has user separation enabled, the Data separation tab is also displayed in the user parameters window.
6.2.3. Cloning users
Cloning an existing user is a fast and easy way to create a new user. When you clone a user, you do not have to create it from scratch ‑ you simply copy an existing user and then edit the copy's properties.
To clone a user, select the user from the user list and click Actions ‑ Clone.
The user name might be modified during cloning to keep it unique. All other properties of the cloned user are identical to those of the source user (except the password).
6.2.4. Setting password
To prevent logging on to 1C:Enterprise under another user's name, a personal password can be created for each user allowed to log on. Just like the username, the password confirms the user's right to access 1C:Enterprise.
Enter the password in the password entry field. A password can contain alphanumeric characters. Password length must not exceed 255 characters.
The password you enter is displayed as a string of asterisks.
In the Confirm password field, enter the password again. If the passwords do not match, the following warning is displayed once you click OK: Password and password confirmation do not match. The password is not changed.
To cancel the password change, click Cancel. Please understand that clicking Cancel cancels both the password change and any other changes you might have made in this dialog box.
IMPORTANT! The assigned password cannot be viewed later, so make sure you memorize the password you enter.
If a user forgets a password, you need to assign a new password to them.
Users with passwords are marked with the lock icon in the user list (for example, see Seller in fig. 41).
6.2.5. Deleting users
To delete a user, select them in the user list and click Actions ‑ Delete in the User list window.
To confirm user deletion, click Yes when prompted.
To edit user parameters, click Administration ‑ Users in the Designer menu. Select the user in the list and click Actions ‑ Change in the User list window.
In the User window, you can change parameters of the selected user.
To streamline viewing of the user list, you can use filters. In the user list, click Actions ‑ Set filter…
The list can be filtered based on the role, language, run mode, or user authentication type. If separators (common attributes with Data separation property set to Separate) are enabled, you can also filter users by separators.
6.3.1. General information
Authentication is a procedure that verifies that the provided ID (name) belongs to the user. 1C:Enterprise supports multiple authentication types, which are described in the following sections.
The user can be authenticated by 1C:Enterprise by providing a username and password (typed in the authentication dialog box, passed as command-line parameters, or passed in the infobase connection string for an external connection or automation server). In this case, the username and password are verified by 1C:Enterprise.
The user can be authenticated implicitly through operating system functionality. To enable this authentication type, you need to match the 1C:Enterprise user to an operating system user. On startup, 1C:Enterprise prompts the operating system for information on the currently authenticated OS user. For this purpose, Windows uses the SSPI interface and Linux uses GSS-API. Then the match between the operating system user and a 1C:Enterprise user is verified. If a matching user is found, the 1C:Enterprise user is authenticated and the authentication dialog box is not displayed.
NOTE 1. Client applications for Linux or macOS do not support operating system authentication.
NOTE 2. Operating system authentication is not supported if the client application connects to the infobase using the Apache web server on Windows.
NOTE 3. To ensure stable OS authentication in Windows for web client or thin client connection via a web server, add the infobase address to the list of trusted sites in the web browser properties dialog box.
The operating system user is described in the following format: \\domain_name\username. The username contains Latin letters only. The format of the domain name and username may depend on the domain controller settings and its account settings. The correct name of the operating system user can be found in the records of the CONN event in the technological log. Look for the Txt property that starts with Srvr: DstUserName2:. For example, the event 30:30.551013-0,CONN,2,process=rmngr,OSThread=24204,t:clientID=3,Txt=Srvr: DstUserName2: d1.d2\user1(d1.d2\user1) means that the OS user name in the infobase user description must be as follows: \\d1.d2\user1.
When forced 1C:Enterprise authentication is required, specify the /WA- command-line parameter in the startup command line of the client application. The /WA+ command-line parameter forces OS authentication (this is the default).
The OpenID protocol (http://openid.net/) allows a user to authenticate to many unrelated resources, systems, and so on, using a single account. 1C:Enterprise uses a protocol based on OpenID 2.0 under the Direct Identity model.
NOTE. This authentication method cannot be applied to web services published from 1C:Enterprise.
The general procedure is as follows:
- The user attempts to log on to 1C:Enterprise.
- 1C:Enterprise identifies that OpenID authentication is enabled for the infobase (using the default.vrd publication file).
- An authentication request is sent to OpenID provider. The OpenID provider must be able to receive requests from the address of the infobase publication.
- If an interactive action is needed (for example, the first authentication for this user ID is performed, or the user authentication data has expired), the provider informs 1C:Enterprise that username and password are required. 1C:Enterprise performs the interactive action and returns the requested data to the OpenID provider.
User authentication data is stored in cookie files in the web browser storage. The thin client uses its own storage.
- If the provider authenticates the user, a flag is returned to 1C:Enterprise indicating that the user is authenticated.
OpenID authentication only works when the infobase is accessed over HTTP or HTTPS. This means that OpenID authentication is only available for the web client, the mobile client, and the thin client connected to the infobase via the web server. During OpenID authentication, cross-domain requests may occur when using the thin client or the Mozilla Firefox, Google Chrome, Safari, Microsoft Internet Explorer 8 and 9 browsers. In Microsoft Internet Explorer 6.0 and 7, the user is prompted for confirmation after entering the username and password. If the user confirms the operation, the authentication procedure continues. Otherwise, the user is prompted to enter the username and password again.
An OpenID provider can be a 1C:Enterprise infobase published on a web server in a special way, or an information system that implements OpenID Authentication 2.0 and the extension of this protocol implemented on the 1C:Enterprise platform. The address of the OpenID provider is specified in the default.vrd file (the <rely> element) when publishing an infobase that is an OpenID provider's client.
It is important to understand that the key field used to match a 1C:Enterprise infobase user to an OpenID provider user is the value of the Name property of the infobase user. In other words, a user is only able to log on to the infobase if the Name property in the infobase contains an ID returned by the OpenID provider. For a description of the returned certificate, refer to the documentation of the OpenID provider used.
The user password is set at the OpenID provider. If the OpenID provider is a 1C:Enterprise infobase, the password is set in this infobase. The password set in the infobase that operates as the OpenID provider's client is ignored during OpenID authentication. If a third-party OpenID provider is used, the password is set by this provider. After the OpenID provider's user password is changed in the user storage, 1C:Enterprise applies the following rules:
- The user is considered authenticated in any currently running sessions until these sessions are terminated.
- When a new session is created, the user is prompted for the password even if the user authentication data has not expired yet.
When forced OpenID authentication is required, specify the /OIDA+ command-line parameter (this is the default) in the startup command line of the client application. The /OIDA- command-line parameter forcibly disables OpenID authentication.
For details, see OpenID Authentication 2.0 (http://openid.net/specs/openid-authentication-2_0.html).
The OpenID Connect protocol (http://openid.net/connect/) is an extension of the OAuth 2.0 authorization protocol. OpenID Connect allows 1C:Enterprise to verify user identities based on authentication by a third-party provider. This protocol is applicable when using the thin, mobile, or web client. 1C:Enterprise cannot be an OpenID Connect provider; third-party providers are used for this purpose. OpenID Connect protocol support also means the option to use the Unified System for Identification and Authentication (USIA).
To match 1C:Enterprise users with authentication provider users, email addresses are used. The OpenID Connect provider passes the user's email address to 1C:Enterprise. The user's email address must be specified in the Name property of the infobase user.
In a mobile client, authentication is performed using a web browser supported by the mobile device:
- Android: Google Chrome.
- iOS: version 9.0 or later.
If your mobile device does not meet these requirements, repeated authentication on the OpenID Connect provider's side is required in the mobile client. To force authentication at the next logon, run the Log out command in the mobile client.
6.3.5.1. General information
The platform can perform user authentication independently, or it can use the results of authentication performed by another resource that it trusts (operating system or OpenID provider). In any case, the user enters a username and a password. If the correct username/password pair is entered, the platform considers that the user is identified and grants them access to the application.
This conventional scheme is simple and convenient, but it has one significant drawback. You need to remember the password, so it must be short and simple. But such a password is easy to crack. For a password to be difficult to crack, it must be long and complex. But such a password is not easy to remember. For this reason, in practice people tend to use simple passwords and reuse the same password across different resources.
Two-factor authentication is a method that, on the one hand, makes it harder for attackers to access other people's data and, on the other hand, mitigates the disadvantages of classic password protection to some extent.
Two-factor authentication requires the user to have two of the three possible types of authentication data:
- Something they know and remember: This is the username and password.
- Something they own. This can be a user cell phone or email.
- Something inherent to them. In this case, some physical feature of the user may be used: a fingerprint, a portrait, an iris pattern.
The point of two-factor authentication is that the user must confirm their identity twice, in different ways, to gain access to the application. For example, by entering the username/password (the first authentication factor) and then entering a code sent to their cell phone (the second authentication factor).
The first authentication factor is verified by the 1C:Enterprise platform itself, while a third-party service, called the second factor provider, is used to process the second authentication factor.
A second factor provider (referred to as "provider" in this section) is an HTTP service that provides a software interface for performing certain actions. The second factor provider can be, for example, a 1C:Enterprise infobase that implements a set of HTTP services for forwarding messages or performing authentication. It can be a third-party service that sends SMS or email messages, a service that generates second factor codes, a service that interacts with the user through its own mobile application, and so on. All that matters is that the provider can be accessed via HTTP requests.
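To make the "provider is just an HTTP service" idea concrete, here is a toy provider sketched in Python. The endpoint paths (/auth/start, /auth/result), the JSON fields, and the instantly "confirmed" state are invented for this sketch; a real provider defines its own API and actually contacts the user:

```python
# A toy second factor provider: an HTTP service with one endpoint
# that starts authentication and one that reports its result.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PENDING = {}  # user name -> authentication state

class ProviderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        user = body.get("user", "")
        if self.path == "/auth/start":
            # A real provider would contact the user here (SMS, push, etc.).
            PENDING[user] = "confirmed"
            self._reply({"status": "started"})
        elif self.path == "/auth/result":
            self._reply({"status": PENDING.get(user, "unknown")})
        else:
            self.send_error(404)

    def _reply(self, payload):
        data = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

Such a service could be run with HTTPServer(("127.0.0.1", 8080), ProviderHandler).serve_forever(); the point is only that everything the platform needs from a provider fits into plain HTTP requests.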
Two-factor authentication can be used in any 1C:Enterprise infobase version and with any client application.
6.3.5.2. Options for using the second factor
So, the standard 1C:Enterprise authentication (the first factor) is as follows:
- The user starts the client application. The client application prompts the user to input the first authentication factor ‑ i.e. login and password. The user inputs these, and the client application sends these to the server.
- The server checks the username and password for correctness.
- If the data entered is correct, the server checks that the user needs only one authentication factor to use the application. If the second factor is not used, it is considered that the user is fully identified and can start operations. This is a common authentication scenario that exists in the platform now.
If two-factor authentication is set for the user, the first two steps are performed as before: the login/password is entered and verified on the 1C:Enterprise server.
The use of the second factor can be performed in two ways:
- The 1C:Enterprise server itself generates the second factor and checks whether the user entered its value correctly. The provider only performs the transport function, that is, transmitting the second factor value to the user. In this case, both the 1C:Enterprise server and the user know which channel will be used to transmit the second factor value.
The procedure is as follows:
- The 1C:Enterprise server informs the application that the user needs to specify a second authentication factor.
- The application displays the second factor input form.
- The 1C:Enterprise server generates and sends the value of the second factor (for example, a certain number) to the user, which the user must enter in the form opened by the application. The second factor value can be sent to the user by e-mail or SMS.
- The user receives the second factor data and enters them in the dialog opened by the application. The application sends the second factor data to 1C:Enterprise server.
- The 1C:Enterprise server verifies that the entered data matches the data generated and sent to the user by the server.
- If the second factor values transmitted by the application and generated by the 1C:Enterprise server match, the user is identified and granted access to the application.
- The 1C:Enterprise server uses a third-party service and receives from it the result of the user's application of the second factor. In this case, the 1C:Enterprise server has no information about the type of the second factor used. The method for transmitting the second factor is determined by the selected second factor provider. The 1C:Enterprise server only knows that there is a trusted service that, in response to a request to use a second authentication factor, reports whether the application of the second factor was successful.
The procedure is as follows:
- The 1C:Enterprise server informs the client application that the user must perform second factor authentication on the provider side.
- The client application shows the user a form in which the user must perform a certain action after being authenticated by the second factor provider.
- The server sends an HTTP request to the provider with a request to authenticate the user.
- The provider begins the authentication procedure. The authentication method is at the discretion of the provider.
- After completing the authentication procedure, the user reports this to the client application.
- The client application passes this information to the 1C:Enterprise server, which then queries the second factor provider for the authentication result.
- The provider reports the authentication result to the server. If it is successful, the user is considered identified and is granted access to the application.
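The first option described above (the server generates the code, the provider only delivers it, and the server checks the value the user enters) can be sketched as follows. The class and its names are illustrative assumptions, not the platform's implementation:

```python
# A sketch of the first option: the server generates the second factor
# value, hands it to a transport (SMS or e-mail delivery), and later
# checks the value entered by the user.
import hmac
import secrets

class SecondFactorSession:
    """Server-side state for one second factor exchange (illustrative)."""

    def __init__(self, send):
        # Generate a 6-digit one-time code; `send` stands for the
        # provider's transport function that forwards it to the user.
        self.code = f"{secrets.randbelow(10**6):06d}"
        send(self.code)

    def verify(self, entered):
        # Constant-time comparison of the value the user typed in,
        # so the check does not leak the code through timing.
        return hmac.compare_digest(self.code, entered)

# Example: the "transport" here simply records what would be sent.
outbox = []
session = SecondFactorSession(outbox.append)
```

If session.verify() returns True for the value the user entered, the user is identified and granted access, exactly as in the step list above.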
Each of the methods reviewed in this section is supported by the 1C:Enterprise system. Configuring the second authentication factor is reviewed in the next section.
6.3.5.3. Second factor setup
Configuring the second factor in the 1C:Enterprise system is divided into several parts:
- Defining the request templates that are sent to providers.
- Binding a request template to an infobase user.
- Parameterizing the requests.
Let's take a closer look at each part.
To describe the HTTP request that should be sent to the provider, the TemplateOfSettingsOfTheSecondAuthenticationFactor object is used. You choose one of the second factor options by setting the template parameters. When you need to implement the first option (the 1C:Enterprise server itself generates, sends, and checks the second factor value), the HTTPRequestToAuthentication and HTTPRequestToAuthenticationMethod properties are used. The first property contains the description of the HTTP request (an object of the HTTPRequest type), and the second specifies which HTTP method will be used to send the second factor request to the provider.
When you need to implement the second option (the 1C:Enterprise server only initiates authentication at the second factor provider and receives its result), the properties reviewed above remain and are used for their intended purpose: to initiate the use of the second authentication factor. To check the authentication result, two additional properties are used: HTTPRequestForCheckingTheAuthenticationResult and MethodOfHTTPRequestForCheckingTheAuthenticationResult.
Each template has a name (the property of the same name) that allows you to identify the template in further actions.
When forming an HTTP request (for setting any property), one should keep in mind the following features:
- The HTTP method is specified as a string. This is because the HTTP specification allows the use of custom verbs (methods).
- The request text may contain parameters. A parameter is a piece of text starting with the "&" character. For example, you can use the &user_name parameter to specify a user name. These parameters are replaced with real values at the time the request is sent (reviewed below). The reason for this design is the assumption that the second factor HTTP requests for different users will be almost the same, differing only in information specific to a particular user. For example, a request to send an SMS message with the second factor code would include the &phone_number parameter, which is replaced with the user's actual phone number when the request is sent.
The platform provides a predefined &secret parameter, which contains the value of the second factor generated by the platform.
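The substitution mechanics can be sketched as follows. This is an illustrative model, not the platform implementation; the parameter names are taken from the examples in this section:

```python
# A sketch of parameter substitution: every &name occurrence in the
# request text is replaced with the value supplied for that user;
# &secret is filled in by the platform itself.
import re

def substitute(text, params, secret=None):
    """Replace &name parameters in a request template."""
    values = dict(params)
    if secret is not None:
        values["secret"] = secret

    def repl(m):
        name = m.group(1)
        # Leave unknown parameters untouched rather than guessing.
        return values.get(name, m.group(0))

    return re.sub(r"&([A-Za-z_]\w*)", repl, text)

template = "Enter the &secret value. Reply to &phone_number."
print(substitute(template, {"phone_number": "+10000000000"}, secret="123456"))
# Enter the 123456 value. Reply to +10000000000.
```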
With all this information at hand, let's look at an example of creating a template for working with the second factor according to the first option (the platform generates and sends the second factor and verifies the value entered by the user):
Request = New HTTPRequest;
Request.ResourceAddress = "&addr";
Request.SetBodyFromString("Enter the &secret value. Do not tell this value to anyone!", "UTF-8");
Provider = TemplatesOfSettingsOfTheSecondFactorOfAuthentication.CreateTemplate();
Provider.HTTPRequestForAuthentication = Request;
Provider.HTTPRequestMethodForAuthentication = "POST";
Provider.Name = "Request - response";
Provider.Record();
The first three lines form the HTTP request that will be used by the platform. The remaining lines create the second factor provider template, which will send the request using the POST method and has the name Request - response.
If you need to create a template for a second factor provider working according to the second option (the provider executes all actions itself; the platform only initiates the operation and requests the authentication result), modify the above example so that:
- The authentication request meets the requirements of the provider used.
- A request is generated (and set in the template) to check the authentication result. This request is executed after the user informs the client application that they have passed authentication with the provider.
After you save one or several provider templates, you can assign one of them to a user. Keep in mind that you can assign multiple templates to one user and specify how they are processed.
To set up user settings, two properties of the InfobaseUser object are used:
- SettingSecondAuthenticationFactor ‑ an array of objects of the SettingSecondAuthenticationFactor type must be assigned here.
- ProcessingSettingsForTheSecondAuthenticationFactor ‑ describes what the platform will do if several providers of the second factor are specified, and the first (in the traversal order) provider returned an error.
After specifying the above properties, the object describing the infobase user should be recorded.
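The UseNextInCaseOfError processing mentioned above can be modeled with a short sketch. The provider callables and the error type are assumptions for illustration, and the exact platform semantics for a provider's explicit refusal may differ:

```python
# An illustrative model of "use next provider in case of error":
# providers are tried in order; a provider failure moves on to the
# next one, while an explicit refusal is returned as the result.
class ProviderError(Exception):
    """The provider could not complete its task (network failure, etc.)."""

def authenticate_second_factor(providers):
    """Try each configured provider in order; return the first result."""
    last_error = None
    for provider in providers:
        try:
            return provider()  # True = authenticated, False = refused
        except ProviderError as err:
            last_error = err   # broken provider: fall through to the next
    raise last_error or ProviderError("no second factor providers configured")

def broken_sms_provider():
    raise ProviderError("SMS gateway unreachable")

def email_provider():
    return True  # pretend the user confirmed the code sent by e-mail

result = authenticate_second_factor([broken_sms_provider, email_provider])
# result is True: the second provider succeeded after the first one failed
```

If every configured provider fails, the user cannot be authenticated, which matches the "broken provider" situation described at the end of this section.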
One last point remains: where does the platform get the values that are substituted for the parameters in HTTP requests? For this, let's take a closer look at the SettingSecondAuthenticationFactor object. This object contains two fields:
- SettingsTemplateName ‑ here you should specify the name of the template of the setting of the second factor provider, which was specified in the Name property when setting the provider template.
- Parameters ‑ a map must be assigned to this property. The map should contain as many elements as there are parameters in the settings template of the second factor provider (except the &secret parameter). The key of each map element is the parameter name (without the "&" character), and the value is the value to substitute for that parameter.
Now we have all the information needed to require an infobase user to perform two-factor authentication when logging on to the infobase.
ProviderParameters = New Map;
ProviderParameters.Insert("addr", "http://hostname/resource");
UserSetting = New SettingSecondFactorOfAuthentication;
UserSetting.SettingTemplateName = "Request - response";
UserSetting.Parameters = ProviderParameters;
UserSettings = New Array;
UserSettings.Add(UserSetting);
User = InfobaseUsers.SearchByName("Seller");
User.SettingsOfSecondFactorOfAuthentication = UserSettings;
User.ProcessingOfSettingOfTheSecondFactorOfAuthentication = TypeOfProcessingOfSettingsOfTheSecondFactorOfAuthentication.UseNextInCaseOfError;
User.Record();
The platform will replace parameter names with the actual values in the following properties:
- property HTTPRequest.ResourceAddress;
- property HTTPRequest.Headers;
- the body of the HTTPRequest object (the method of specifying the request body does not affect the substitution);
- property TemplateOfSettingOfTheSecondFactorOfAuthentication.HTTPRequestMethodForAuthentication;
- property TemplateOfSettingOfTheSecondFactorOfAuthentication.HTTPRequestMethodForCheckingAuthenticationResult.
It remains to mention that if the second factor provider selected for the user is "broken", the user cannot access the infobase. Here "broken" means any event that prevents the provider from completing its task: no Internet access from the server to the provider, no Internet access from the provider to the user, an error on the provider's side, and so on.
Also note that if you plan to use a third-party second factor provider, its services may be paid, and the provider may impose additional conditions and restrictions that lie outside the 1C:Enterprise system and are not covered in this documentation.
6.3.5.4. OpenID authentication and two-factor authentication
The 1C:Enterprise system supports authentication using the OpenID protocol. If the information system uses OpenID authentication, the second factor should be requested by the OpenID provider. This also applies when a 1C:Enterprise infobase acts as the OpenID provider. In other words, two-factor authentication must be configured in the infobase that is the OpenID provider.
On devices that support biometric authentication (that is, devices with a fingerprint sensor, face or iris scanner, and so on), the Use biometrics check box is available in the mobile client infobase settings and authentication dialog boxes (on the mobile device). When this check box is selected, the following mechanism is activated:
- When you log in for the first time, user names and passwords you enter for the infobase and authentication using OpenID and web server are stored in a secure storage.
- When you further log in again, the following logic is implemented:
- First, log in is ensured without using data available in a secure storage to verify that authentication is not currently required or there is relevant authentication data disclosed by OpenID provider. If a mobile device stores relevant authentication data, a query to provide biometric data for authentication purposes is made only when authentication data life cycle expires.
- If the previous step failed, a user is prompted to undergo biometric authentication using a mobile operating system interface. Kind of authentication specified in the user's mobile device settings is used.
- If biometric authentication is abandoned by a user, ‑ a standard authentication dialog box is displayed (you must enter your user name and password).
- As soon as biometric data is accepted by a mobile device, data saved during prior successful authentication is recovered from the secure storage. The said data is used for authentication purposes.
- If this data is not accepted by a mobile device, data saved during prior successful authentication is deleted from the secure storage. Subsequently, a standard authentication dialog box is displayed (you must enter your user name and password).
- No biometric data is used, if authentication is made through OpenID Connect, as in this scenario a protected web page of OpenID Connect provider is used to perform authentication. So far as this kind of authentication is concerned, you cannot insert provider's user name and password previously saved by default. Moreover, OpenID Connect supports repeated authentication on this device without any time limit, while there is no need to enter user name and password.
If your mobile device supports authentication via OpenID Connect, this method is used for authentication. If OpenID Connect authentication is declined, you can quickly log in using biometric data. To authenticate via OpenID Connect again, a login attempt must fail first.
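The fallback chain described above can be sketched in Python (an illustrative model, not mobile client code; the callback names are hypothetical):

```python
def login_flow(storage, try_silent_login, prompt_biometrics, prompt_password):
    """Model of the mobile client login fallback chain.

    storage           -- dict acting as the secure storage
    try_silent_login  -- returns True if no authentication is needed
                         (or valid OpenID data is still available)
    prompt_biometrics -- returns True (accepted), False (rejected),
                         or None (user declined biometric authentication)
    prompt_password   -- standard name/password dialog
    """
    # 1. Try logging in without the stored data first.
    if try_silent_login():
        return "silent login"
    # 2. Ask the operating system for biometric authentication.
    result = prompt_biometrics()
    if result is None:
        # User declined: show the standard authentication dialog.
        return prompt_password()
    if result and "credentials" in storage:
        # Biometrics accepted: reuse credentials saved on first login.
        return storage["credentials"]
    # Biometrics rejected: drop saved credentials, fall back to password.
    storage.pop("credentials", None)
    return prompt_password()
```

The model makes the ordering explicit: stored credentials are consulted only after a silent login attempt fails and the device confirms the biometric data.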
If an infobase user is assigned roles native to an extension, the user is marked in the list with a special icon.
Roles defined in the extensions attached to the infobase are included in the list of roles available for assignment to the user. Extension roles are placed at the end of the extensible configuration role list. Extension roles can be selected or cleared in the same way as configuration roles.
Sometimes you need to determine which users are connected to the infobase at the moment.
To get the list of active users, click Administration ‑ Active users. A list of users currently working with the database will be displayed.
The current line displays the data of the user who opened the form (the current session). The current user is marked in the list with an icon. The Data separation column shows the separators specified for the user in Designer (see the Data separation tab in the user properties). For separators that apply to a specific session, no values are displayed in this column when the form is opened.
Using the Actions menu you can customize appearance of the list or export it to a spreadsheet or text document. The list of active users can be sorted by any column.
6.4. Session start lock
6.4.1. Manually, for all
1C:Enterprise allows you to prohibit users from creating new sessions with the infobase. When the lock is enabled, any user attempting to access the infobase gets a custom error message instead. This capability is useful when, for example, you need all current users to close their sessions while making sure that no new users can connect to the infobase.
When working in client/server mode, you can enable this lock with 1C:Enterprise server cluster administration utility.
To connect to the infobase regardless of the lock, use the /UC<permission code> command-line parameter or the UC=<permission code> connection string parameter. If a permission code is specified for the lock, you need to pass this code in the /UC parameter to bypass the lock and connect to the infobase. If the permission code contains spaces, enclose it in quotation marks.
When using the web client or thin client working via the web server, you can specify the permission code in UC parameter of the connection string of descriptor file. In this case, additional publication of the infobase on a web server is recommended.
For example, when the session start lock is set and the permission code is 123, enter /UC123 in the client application startup command line to bypass the lock.
6.4.2. Software method
In any run mode, session locks can be enabled using the 1C:Enterprise language via the SessionsLock object. Create it with the constructor and configure the required properties to lock new sessions.
The global context method SetSessionsLock() enables the lock, and the GetSessionsLock() method returns the currently enabled lock.
A password attack is one of the methods of gaining unauthorized access to infobase data. In this scenario, a malicious user tries passwords generated by a predefined algorithm until the password of a selected user is cracked. To counter such attacks, 1C:Enterprise provides a dedicated mechanism, which is available in the client/server mode only.
An administrator manages this mechanism by setting the following infobase parameters (the dialog box is displayed when you run the Main menu ‑ Administration ‑ Infobase parameters command):
- Max number of failed authentication attempts ‑ defines how many times a user can enter a wrong password before access is blocked. Access is blocked as soon as the number of successive failed authentication attempts reaches N+1, where N is the parameter value. In other words, if this parameter is set to 2, the user is blocked as soon as their third authentication attempt fails.
If this parameter is set to 0, the mechanism is disabled and the platform does not count failed authentication attempts.
- Blocking duration when the maximum number of failed authentication attempts is exceeded (in seconds) ‑ defines the period during which a user cannot authenticate after entering a wrong password more times than specified in Max number of failed authentication attempts.
- User name add-on codes when authentication is blocked ‑ allows blocking authentication attempts made on behalf of an already blocked user. Add-on codes are separated by ";". A user name is generated by appending one of the add-on codes to the name of an existing user who is already blocked. A user name generated with an add-on code gets the same number of authentication attempts as an ordinary user name; once these attempts are exhausted, this "extra" user name is blocked as well.
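The N+1 threshold can be illustrated with a small Python sketch (a conceptual model of the counter, not the platform implementation):

```python
class FailedLoginCounter:
    """Models the failed-authentication counter described above.

    max_attempts = 0 disables the mechanism; otherwise a user name is
    blocked once N+1 successive attempts fail (N = max_attempts).
    """
    def __init__(self, max_attempts):
        self.max_attempts = max_attempts
        self.failed = {}          # user name -> successive failed attempts
        self.blocked = set()

    def register_failure(self, user):
        if self.max_attempts == 0:
            return                # mechanism disabled, nothing is counted
        self.failed[user] = self.failed.get(user, 0) + 1
        if self.failed[user] >= self.max_attempts + 1:
            self.blocked.add(user)

    def register_success(self, user):
        self.failed[user] = 0     # counting restarts after a successful login

    def is_blocked(self, user):
        return user in self.blocked
```

With max_attempts set to 2, the third failed attempt blocks the user, matching the N+1 rule above.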
The mechanism works as follows:
- A malicious user enters a user name and attempts to guess the password by repeatedly entering values expected to be valid. As soon as the allowed number of failed authentication attempts is exceeded, the entered user name is blocked.
- If the blocked user then attempts to log in with their valid name and password, a warning is displayed stating that the user is blocked.
- If add-on codes are specified for the infobase, the blocked user can use them by entering their name with an add-on code appended. Note the following: an add-on code is analyzed only if the entered user name is not found in the list of infobase users. The add-on codes specified in the settings are successively stripped from the end of the entered name, and the platform checks whether an infobase user with the resulting name exists. Consequently, avoid add-on codes that match the ending of an existing user name: if such a user is blocked, they will not be able to log in using add-on codes. It is also a good idea to start each add-on code with a "technical" character that cannot occur in user names, for example "!" or "^".
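The name analysis can be sketched as follows (illustrative Python with a hypothetical helper name, not platform code):

```python
def resolve_user_name(entered_name, known_users, addon_codes):
    """Models the add-on code analysis described above.

    Returns the matching infobase user name, or None.
    Add-on codes are considered only when the entered name itself
    is not in the list of infobase users.
    """
    if entered_name in known_users:
        return entered_name
    for code in addon_codes:
        if entered_name.endswith(code):
            # strip the add-on code and look up the remaining name
            candidate = entered_name[:-len(code)]
            if candidate in known_users:
                return candidate
    return None
```

For example, with users {"ivan"} and codes ["!1", "!2"], the entered name "ivan!2" resolves to the user "ivan".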
To view the list of blocked users, use the form available in Designer: run the Main menu ‑ Administration ‑ Blocked authentication command. The form is available to any user with the Administration or DataAdministration right. Information about blocked users is also recorded in the event log.
Details of blocked users are stored by the auxiliary function service of the server cluster. This means that:
- If the only administrator is blocked, the server cluster has to be restarted before they can log in under their user name.
- Failed login attempts are counted from the most recent successful login, without any time limit. However, restarting the server cluster resets the counters for all infobase users.
To manage the blocking mechanism from the 1C:Enterprise language, use the AuthenticationBlock global context object. It allows you to modify the mechanism settings (the GetSettings()/SetSettings() methods) and to get the list of current blocks (the GetBlocks() method).
The 1C:Enterprise language also allows you to forcefully unblock all or selected blocked users. To do so, get the list of current blocks as an array of UserAndInfobaseAuthenticationBlock objects, build the list of users to unblock (based on the properties of these objects), and call the Unblock() method for the selected users.
6.5. Regional infobase settings
Regional infobase settings affect the format of date, time, numbers, logical constants, as well as string order in infobase lists. To start this mode, click Administration ‑ Regional infobase settings.
If a property is not set, the default 1C:Enterprise settings for numbers, dates, and time for the specified language (country) will be used. Language (country) is specified during infobase creation.
Language (Country). Specifies the language (country) for this infobase.
IMPORTANT! If PostgreSQL DBMS is used, you cannot select an arbitrary language (country) for an existing infobase. You can only select a language (country) that uses the same database collation order as the current one. For example, Russian (Russia) can be changed to Belarusian (Belarus) but cannot be changed to Ukrainian (Ukraine).
If IBM DB2 is used, you cannot change the language (country).
First weekday property is used to specify what day of the week is considered the first day of the week in the country. If you set this property to Auto, the first day of the week is chosen based on the country specified in Language (Country) property. For example, if you choose English language, Sunday will be set as the first day of the week, and if you choose Arabic, Saturday will be the first day. Any day of the week can be chosen as the first day of the week.
In infobases created with 1C:Enterprise 8.3.6 or earlier, the value of the first day of the week is not stored in the infobase. If compatibility with version 8.3.6 or earlier is enabled for an application deployed in this infobase, Monday is used as the first day of the week (and you cannot change this value). If the compatibility mode is set to Do not use for this infobase (or the compatibility version is later than 8.3.6), the first day of the week is selected according to the Language (Country) property. The First weekday property is assumed to be set to Auto.
In infobases created with 1C:Enterprise 8.3.7 or later, the value of the first weekday is stored in the infobase. When an infobase is created, the First weekday property is set to Auto. The value of this property can be changed, and the changed value is stored in the infobase. Setting the mode for compatibility with version 8.3.6 or earlier makes Monday the first day of the week and makes the First weekday property uneditable. However, the actual setting is saved and becomes effective again after setting the compatibility mode to Do not use (or to a compatibility version later than 8.3.6). If you edit the regional settings with 1C:Enterprise 8.3.6 or earlier, the value of the First weekday property is cleared.
If the Use regional setting of the current session property is set, values of types such as Number and Date are displayed (including in input fields, the calendar, and the calculator) according to the regional settings of the current session. These settings are determined by the regional settings of the client computer but can be overridden using the /VL command-line parameter.
In the lower part of the dialog box, examples of number, date and time formats for the selected regional settings are displayed.
Values of the Boolean type are displayed in accordance with the interface language, which can be set using the /L command-line parameter.
Decimal separator. You can choose a separator between integer and fractional parts of a number from the drop-down list, or type it in the input field. An example is displayed to the left of the input field.
Digit group separator. You can choose a separator of digit groups in a number from the drop-down list, or type it in the input field. An example is displayed to the left of the input field.
Grouping. This property sets the format for digit grouping in the integer part of a number. You can choose the format string from the drop-down list, or enter it manually.
The grouping format is: <number of digits per group><separator><number of digits per group><separator>…<0>.
Any character can be used as a separator, except digits.
For example, 3,2,0 means that digits will be grouped in the following way (digits are counted right to left, starting from the decimal separator; the format applies to the integer part only):
- The first group includes the first 3 digits
- It is followed by the group separator (specified in OS settings or in Group separator property)
- All other digits are grouped by twos
The 0 character at the end of a format string means "the same applies until the end of the number." So, if you remove 0 from the above string and enter 3,2, the grouping will change:
- The first group includes the first 3 digits
- It is followed by the group separator
- The second group includes the next 2 digits
- It is followed by the group separator
- The last group includes all remaining digits
If you enter 0 in this field, digits in the integer part of numbers will not be grouped.
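The grouping rules above can be illustrated with a short Python sketch (an illustrative implementation, not platform code):

```python
def group_digits(integer_part, fmt, separator=" "):
    """Groups the digits of an integer part according to a format
    string such as "3,2,0" (groups are counted right to left)."""
    sizes = [int(x) for x in fmt.split(",")]
    groups = []
    i = len(integer_part)
    for idx, size in enumerate(sizes):
        if size == 0:
            # trailing 0: repeat the previous group size to the end
            prev = sizes[idx - 1] if idx > 0 else 0
            if prev == 0:
                break             # format "0": no grouping at all
            while i > 0:
                groups.append(integer_part[max(0, i - prev):i])
                i -= prev
            break
        if i <= 0:
            break
        groups.append(integer_part[max(0, i - size):i])
        i -= size
    if i > 0:
        # no trailing 0: all remaining digits form the last group
        groups.append(integer_part[:i])
    return separator.join(reversed(groups))
```

For example, group_digits("1234567890", "3,2,0") returns "1 23 45 67 890", while the format "3,2" gives "12345 67 890".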
Negative number format. You can choose the negative numbers format from the drop-down list. If you choose Auto, negative numbers format is governed by the OS settings.
Date format. Specifies the date format. You can use the following characters in different combinations:
- d ‑ Day of month. Numbers below 10 are displayed without a leading zero
- dd ‑ Day of month. Numbers below 10 are displayed with a leading zero
- M ‑ Month number. Month numbers below 10 are displayed without a leading zero
- MM ‑ Month number. Month numbers below 10 are displayed with a leading zero
- MMMM ‑ Month name (in words)
- y ‑ The last two digits of the year. Years below 10 are displayed without a leading zero
- yy ‑ The last two digits of the year. Years below 10 are displayed with a leading zero
- yyyy ‑ All four digits of the year
The above mentioned characters and groups of characters can be entered in any sequence. You can specify the separators for day, month, and year.
Time format. Specifies the time format. You can use the following characters in different combinations:
- h or H ‑ Hours, in 12-hour (h) or 24-hour (H) format. Hours below 10 are displayed without a leading zero
- hh or HH ‑ Hours, in 12-hour (hh) or 24-hour (HH) format. Hours below 10 are displayed with a leading zero
- m ‑ Minutes. Minutes below 10 are displayed without a leading zero
- mm ‑ Minutes. Minutes below 10 are displayed with a leading zero
- s ‑ Seconds. Seconds below 10 are displayed without a leading zero
- ss ‑ Seconds. Seconds below 10 are displayed with a leading zero
The above mentioned characters and groups of characters can be entered in any sequence. You can specify the separator characters to separate hours, minutes, and seconds.
IMPORTANT! When using regional settings to specify format of date in input field, only choose settings supported by the input field.
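As an illustration of how such format strings are interpreted, here is a minimal Python sketch (not 1C:Enterprise code; it assumes conventional d/M/y date characters and the 24-hour H characters mentioned above, and omits month names for brevity):

```python
def format_datetime(fmt, day, month, year, hour=0, minute=0, second=0):
    """Expands a simple date/time format string, longest tokens first.
    Any character that is not a token is copied as a separator."""
    tokens = [
        ("yyyy", f"{year:04d}"), ("yy", f"{year % 100:02d}"), ("y", str(year % 100)),
        ("MM", f"{month:02d}"), ("M", str(month)),
        ("dd", f"{day:02d}"), ("d", str(day)),
        ("HH", f"{hour:02d}"), ("H", str(hour)),
        ("mm", f"{minute:02d}"), ("m", str(minute)),
        ("ss", f"{second:02d}"), ("s", str(second)),
    ]
    out, i = "", 0
    while i < len(fmt):
        for tok, val in tokens:
            if fmt.startswith(tok, i):
                out += val
                i += len(tok)
                break
        else:
            out += fmt[i]   # separator characters are copied as is
            i += 1
    return out
```

Note that longer tokens must be matched before shorter ones, so that "yyyy" is not consumed as four separate "y" tokens.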
False, True. Specifies logical constants. You can choose these from the drop-down list, or enter manually.
Infobase parameter settings control the data lock timeout, restrictions that apply to user passwords, and other infobase-level options.
You can configure the following parameters:
Data lock timeout (sec.)
Determines the maximum time the database server waits before setting a transaction lock. For example, when the current transaction needs to lock a database record that is already locked by another transaction, the current transaction waits until the lock is released or until the number of seconds specified in this parameter has passed. The parameter also determines the transaction lock timeout in the 1C:Enterprise managed lock mode.
Changing this parameter (using this dialog box or 1C:Enterprise language) requires administrative rights and enables exclusive mode for infobase access.
Changes of the data lock timeout value become effective immediately for all databases except IBM DB2. In IBM DB2, you need to restart the database after the data lock timeout value is changed.
Minimum password length
Defines the minimum length of user passwords. If Password complexity validation is enabled, the minimum password length is 7 characters.
Password complexity validation
When this parameter is enabled, user passwords must meet the following requirements:
- The password length must not be less than the value of the Minimum password length parameter
- The password must include characters from at least three of the following groups:
- Uppercase letters
- Lowercase letters
- Digits
- Special characters
- The password must not match the user name
- The password must not be an alphabetical sequence of characters.
Enabling these restrictions does not affect the existing infobase user passwords. The restrictions are applied only when the current password is changed or a new infobase user is added. However, password verification is always performed according to the current infobase settings. In particular, this means that passwords become case-sensitive when Password complexity validation is enabled.
For example, if the user password is PaSs and Password complexity validation is disabled, the user can enter the password as pass, PASS, or PasS and still log on. After Password complexity validation is enabled, the user cannot log on until they enter the case-sensitive password: PaSs.
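The complexity rules can be sketched as a Python check (an illustration of the listed requirements, not the platform's actual validator; digits are assumed to be one of the character groups):

```python
def is_password_complex(password, user_name, min_length=7):
    """Checks a password against the complexity rules listed above."""
    if len(password) < min_length:
        return False
    if password.lower() == user_name.lower():
        return False          # must not match the user name
    groups = (
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),   # special characters
    )
    if sum(groups) < 3:
        return False
    # must not be an alphabetical sequence such as "abcdefg"
    if all(ord(b) - ord(a) == 1 for a, b in zip(password, password[1:])):
        return False
    return True
```

For instance, "Pa5s!word" passes (four character groups), while "password1" fails (only two groups).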
Passive session hibernation timeout (sec.)
A session that has no activity for the specified time becomes Hibernating.
Hibernating session termination timeout (sec.)
The hibernating session is terminated after the specified time has passed.
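The two timeouts form a simple state machine; a Python sketch (assuming, for illustration, that the termination timeout is counted from the moment the session hibernates):

```python
def session_state(idle_seconds, hibernate_after, terminate_after):
    """Returns the session state for a given idle time.

    hibernate_after -- Passive session hibernation timeout (sec.)
    terminate_after -- Hibernating session termination timeout (sec.)
    """
    if idle_seconds >= hibernate_after + terminate_after:
        return "terminated"
    if idle_seconds >= hibernate_after:
        return "hibernating"
    return "active"
```

Any activity in the session resets the idle time, returning the session to the active state.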
Number of totals recalculation jobs
Defines the number of system background jobs used to recalculate register totals during infobase restructuring or the verify and repair procedure. The default value is 4, that is, four background jobs are started to recalculate totals. This parameter applies in the client/server mode only.
Maximum number of failed authentication attempts
For a detailed description of this parameter, see the password attack protection mechanism described above.
Block duration when the maximum number of failed authentication attempts is exceeded (in seconds)
For a detailed description of this parameter, see the password attack protection mechanism described above.
Username add-on codes used when authentication is blocked
For a detailed description of this parameter, see the password attack protection mechanism described above.
The infobase parameters can be changed or received from the 1C:Enterprise language using the following methods:
- Infobase lock timeout‑ SetDataLockTimeout()/GetDataLockTimeout().
- User password minimal length ‑ SetUserPasswordMinimalLength()/GetUserPasswordMinimalLength().
- User password strength check flag ‑ SetUserPasswordStrengthCheck()/GetUserPasswordStrengthCheck().
- Passive session sleep timeout ‑ SetPassiveSessionSleepTimeout()/GetPassiveSessionSleepTimeout().
- Passive session termination timeout ‑ SetPassiveSessionTerminationTimeout()/GetPassiveSessionTerminationTimeout().
- Number of totals recalculation jobs‑ SetNumberOfTotalsRecalculationJobs()/GetNumberOfTotalsRecalculationJobs().
- Infobase time zone ‑ SetInfobaseTimeZone()/GetInfobaseTimeZone().
- Full-text data search mode‑ SetFullTextSearch()/GetFullTextSearch().
- The first year of the century ‑ SetBeginningOfTheCenturyOfInfobase()/GetBeginningOfTheCenturyOfInfobase()/BeginningOfCenturyOfSession().
This parameter is used when the full year must be derived from its last two digits. When the first year of the century is set to 1950 (the default value), the two-digit year 49 corresponds to 2049, and the two-digit year 50 corresponds to 1950.
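The derivation can be sketched in Python (illustrative, with a hypothetical function name):

```python
def full_year(two_digit_year, century_start=1950):
    """Derives the full year from its last two digits, using the
    "first year of the century" setting (default 1950)."""
    base = century_start - century_start % 100   # e.g. 1900 for 1950
    year = base + two_digit_year
    if year < century_start:
        year += 100          # values below the pivot wrap to the next century
    return year
```

With the default pivot, full_year(49) gives 2049 and full_year(50) gives 1950, matching the example above.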
When the infobase parameters are set in transaction using the 1C:Enterprise language (using the methods listed above), the corresponding "GET" method returns:
- In current session:
- Before transaction end ‑ the latest value
- After transaction commit ‑ the latest value
- After transaction rollback ‑ the value at transaction start
- In another session:
- Outside a transaction, in record-locking databases (Microsoft SQL Server, IBM DB2) ‑ the latest value, no later than 20 seconds after the value is set
- After transaction rollback ‑ the value at transaction start, no later than 20 seconds after the rollback
- In transaction and for versioned databases (file mode, PostgreSQL, Oracle Database) ‑ the latest value, no later than 20 seconds after committing the transaction in which the value was set
In the client/server mode, when the parameter value is set from the thick client side, the change is immediately visible at the server side, and vice versa.
An infobase can be saved to a file on hard disk. To save the infobase data to a file, click Administration ‑ Dump infobase. This will open the standard file selection dialog box. Select a directory and specify the name of infobase data dump file.
The export functionality allows you to:
- Obtain an infobase image regardless of the data storage method
- Transfer an infobase between DBMS's or file modes
Before exporting the infobase, it is recommended to test it using Designer or a third-party utility, and fix all the problems found.
It is not recommended to use this method to create infobase backups, for the following reasons:
- The export file may be impossible to load if the exported infobase contains errors
- The export procedure takes a long time
- The export procedure requires exclusive mode
- The export procedure has high RAM requirements
NOTE. Switching the infobase to exclusive mode does not automatically switch the MS SQL database to single-user mode.
To restore an infobase from a file, click Administration ‑ Restore infobase.
This will open the standard file selection dialog box. Select a directory and specify the name of the infobase data dump file.
When restoring the infobase from a file, you need to ensure that free disk space (for temporary files) approximately equal to the expanded size of the infobase is available:
- For file mode ‑ on the computer where the infobase import is performed
- For client/server mode ‑ on the computer hosting 1C:Enterprise server
The size of the resulting database may be several times larger than the size of the .dt data dump file.
IMPORTANT! Restoring destroys the current infobase irrevocably.
To speed up the infobase import when using Microsoft SQL Server, it is recommended to set the database recovery model to Simple or Bulk-logged. You can change the model temporarily before the import, or permanently if you do not need to restore the database often. Before changing the recovery model, back up the database.
An infobase dump file (.dt) created by 1C:Enterprise 8.1 and 8.2 can be imported to 1C:Enterprise 8.3. If you try to import a configuration with unknown compatibility mode, an error is displayed indicating the required version. Importing 1cv8.dt files generated in version 8.3.1 and later into 1C:Enterprise versions prior to 8.3.1 is not allowed. The only exception is when the configuration property Compatibility Mode is set to Version 8.2.16 in 1C:Enterprise 8.3.1 and later.
IMPORTANT! You should create a backup before performing any operation that may damage the infobase data.
IMPORTANT! When creating an infobase backup (including restoration) in file mode, no connections to the infobase (including Designer) are allowed.
You can create infobase backups in any file management application. Open the infobase directory in the file manager of your choice. To create a backup copy of the infobase, simply copy the 1Cv8.1CD file to another directory. To restore the infobase from the backup (in case of data loss, damage, and so on), copy the backup file back to the original infobase directory.
You can use specialized data backup and recovery software instead.
To make a backup more informative, we recommend including the log files stored in the 1Cv8Log subdirectory of the infobase directory. We also recommend restoring the infobase together with the logs (the 1Cv8Log directory). This provides you with the history of infobase operations performed before the backup.
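For a file-mode infobase the copy step can be scripted, for example with this minimal Python sketch (paths are hypothetical; remember that no connections to the infobase, including Designer, are allowed while it runs):

```python
import shutil
from pathlib import Path

def backup_file_infobase(infobase_dir, backup_dir):
    """Copies a file-mode infobase (1Cv8.1CD) together with the
    1Cv8Log event log subdirectory into backup_dir."""
    src = Path(infobase_dir)
    dst = Path(backup_dir)
    dst.mkdir(parents=True, exist_ok=True)
    # the database file itself
    shutil.copy2(src / "1Cv8.1CD", dst / "1Cv8.1CD")
    log_dir = src / "1Cv8Log"
    if log_dir.is_dir():
        # include the event log to keep the history of operations
        shutil.copytree(log_dir, dst / "1Cv8Log", dirs_exist_ok=True)
```

Restoring is the same operation with the source and destination swapped.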
IMPORTANT! You should create a backup before performing any operation that may damage the infobase data.
IMPORTANT! When you restore an infobase using DBMS tools, no connections to the infobase (including Designer) are allowed.
It is recommended to perform infobase backup in client/server mode using available DBMS tools.
On iOS, use the standard system functionality to create infobase backups. On Android OS, use specialized utilities.
You can set up backup creation procedure directly on the mobile device. Two backup options are available:
- Before application update
- Before application start
To set up backup options, open the dialog box of the application properties and click Administration. Select Backup from the menu.
Specify the following backup settings:
- Backup location (on this device) field indicates the path in the local file system of the device where the backups are created. By default, this property indicates backup directory located in the document directory on the mobile device.
- Create backup on application update radio button enables backup creation on application update.
- Auto backup frequency (days) property specifies the backup frequency. If this property is set to 0, the automatic backups are not performed. Otherwise, the backup is performed before the application start if the number of days since the previous backup is greater than the value of this property.
- Number of backups property determines capacity of the backup storage. If this property is set to 0, the number of backup copies is only limited by the free space available on the selected drive. Backups created before mobile application updates are also considered when calculating the number of backup copies.
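The auto-backup frequency rule can be expressed as a one-line Python check (illustrative, with a hypothetical function name):

```python
def should_auto_backup(days_since_last_backup, frequency_days):
    """True when an automatic backup must run before application start.

    frequency_days = 0 disables automatic backups; otherwise a backup
    is due once the previous one is older than the configured value.
    """
    return frequency_days != 0 and days_since_last_backup > frequency_days
```

Note the strict comparison: a backup made exactly frequency_days ago does not yet trigger a new one.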
The form also contains buttons that create (Create backup) and restore a backup (Restore...). When you choose to restore a database, you are prompted to select the backup from which the database will be restored. The list is retrieved automatically from the contents of the directory specified in the Backup location (on this device) property.
6.10.1. General information
A variety of abnormal situations can occur while 1C:Enterprise is running: computer or mobile device power loss, operating system freezes, hardware failures, and so on. If such an emergency occurs while data is being written to a 1C:Enterprise infobase (especially in the file mode), the infobase can be damaged. External signs of infobase damage vary; in severe cases, the damaged infobase cannot be opened.
The verify and repair procedure is designed to diagnose and eliminate damage in infobases in both file and client/server modes. It can be used for infobases on personal computers as well as on mobile devices.
6.10.2. On a personal computer
To start the repair procedure, click Administration‑ Verify and repair in Designer. The following dialog box opens:
In the list of checks and verification modes, select the actions you need performed. Multiple checks can be performed independently. In both modes (file and client/server), you can check the logical and referential data integrity, recalculate totals, restructure and re-index the database tables. In the file mode, you also can compress database tables.
In distributed infobases, data may legitimately contain references to objects that are not present in the node being tested. For such infobases, clearing the Check referential infobase integrity check box disables the creation of "non-existent" object records and, as a consequence, prevents copying such data to other nodes of the distributed infobase.
During verification and repair, the main table of each relevant object type (catalog, chart of characteristic types, chart of calculation types, chart of accounts) is checked to ensure that it contains no more than one record per predefined item per data area. If duplicates are found, the predefined item flag is cleared from them and the deletion mark is set instead.
Several groups of settings are located below the list of verification modes:
- In the first group, you select the action to perform: verify only, or verify and repair. In the first case, the infobase is checked, but no changes are made to it. In the second case, the infobase is checked, and the actions specified in the second group of settings are performed. The names of the radio buttons are self-explanatory.
- In the second group, you select the actions to perform if references to non-existent objects are found or a partial loss of data in existing objects is detected.
- In the third group, you control how lengthy verification and repair procedures are spread across multiple sessions.
The Pause verification in check box specifies the time interval after which verification will be interrupted and the verification and repair parameters will be saved for the next Designer session.
The Resume verification check box allows you to continue a verification procedure paused in a previous verification and repair session.
Verification and repair events are displayed in the event log.
Click Execute to start verification. Verification can be interrupted by pressing CTRL + Break.
Verification determines whether exclusive mode can be set and sets exclusive mode if possible. If exclusive mode cannot be set, a warning is displayed: Cannot enable exclusive mode. Active users are detected. To get information about active users, open the list of active users (click Administration‑ Active users).
If exclusive mode is set, execution of the selected actions starts and verification progress information is displayed.
NOTE. Switching the infobase to exclusive mode does not automatically switch the MS SQL database to single-user mode.
When verification is complete, the exclusive mode is disabled.
When Compress infobase tables is selected for the file mode, the infobase file is additionally optimized by moving all data required to open the infobase into a continuous block at the beginning of the 1Cv8.1CD file. This optimization accelerates the opening of the infobase, especially for configurations with a large number of infobase tables and for configurations located on network resources. After the infobase tables are restructured, the effect of table compression is lost, and it is recommended to compress the infobase tables again. You can also compress infobase tables using the Designer batch run command line.
1C:Enterprise distribution package includes a file mode database recovery utility (chdbfl.exe).
6.10.3. On a mobile device
To start the repair procedure, open the application properties for editing and click Administration. Click Verify and repair in the menu.
In the verification settings form, specify:
- What actions to perform
- Verify only, or verify and repair (Repair automatically check box)
- What to do if references to non-existent objects are found
- What to do if partial loss of the infobase object data is detected
Then, click Execute button in the upper right corner.
Execution of the selected actions starts and verification progress information is displayed. To interrupt verification and repair procedure, click Cancel.
To cancel assignment of the distributed infobase master node, use the /ResetMasterNode command in Designer batch run command line. This operation is equivalent to calling the SetMainNode(Undefined) method of the ExchangePlansManager object.
This may be necessary, for example, when you need to separate a subtree of the distributed infobase into an independent infobase or to reassign a distributed infobase node.
To delete a data area or the entire infobase, use the /EraseData command in Designer batch run command line. The area to delete is determined by using the /Z parameter of the startup command line.
To delete data, the user on whose behalf the deletion is performed needs the Administration right and exclusive access to the infobase.
IMPORTANT! If no separators are used in the session or data deletion is performed in a shared infobase, all infobase data will be deleted.
6.13.1. General information
To perform administrative duties, it is often required to find out which events occurred at a particular point in time or what actions a particular user performed.
Event log is intended for these purposes. Events are registered in this log. The administrator can get the history of user activities from the event log.
The event log is not a part of a database and is not saved when exporting/importing an infobase.
1C:Enterprise logs the major actions by users who modify infobases, perform routine operations, log on, log off, etc.
Event log is supported both in Designer mode and 1C:Enterprise mode. Event log can be generated in either of two formats:
- Sequential format (.lgf).
- SQLite format (.lgd).
Depending on 1C:Enterprise version, different default log formats are used:
- In 1C:Enterprise version 8.3.12 and later, the default format is sequential.
- In 1C:Enterprise versions 8.3.5 to 8.3.11, the default format is SQLite.
- In 1C:Enterprise version 8.3.4 and earlier, only sequential format is supported.
Changing the event log format while 1C:Enterprise is running is supported. The selected format of the event log is saved in the infobase. As a result, when you restore the infobase from a backup or a dump file (.dt), the event log format is also restored.
Each event in the event log is identified by a string. System events use a combination of _$ and $_ characters (for example, _$InfoBase$_.MasterNodeUpdate or _$PerformError$_). For instance, _$InfoBase$_.MasterNodeUpdate is displayed as the string Infobase. Update master node. Using these character combinations in names of events written by the WriteLogEvent() method is prohibited. The events generated by this method are displayed as is.
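As a minimal sketch in the 1C:Enterprise language, a custom event can be written with WriteLogEvent(). The event name used here is a made-up example; only the reserved _$...$_ character combinations are prohibited:

```bsl
// Write a custom informational event to the event log.
// "DataExchange.SessionCompleted" is a hypothetical event name.
WriteLogEvent("DataExchange.SessionCompleted",
    EventLogLevel.Information,
    ,  // metadata object - omitted
    ,  // data - omitted
    "Exchange session completed successfully");
```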
Event log in .lgd format is stored in a SQLite database file. Log location:
- For file mode: in the 1Cv8Log subdirectory of the infobase directory.
- For client/server mode: in the 1Cv8Log subdirectory of the infobase directory within the cluster's internal files directory. The directory name can be determined from the cluster data registry file.
To save the event log, open it and click File ‑ Save copy. Specify the directory, file name, and file type (the default event log file format is *.lgf). Export in XML format is also supported.
Example of exported event log:
<v8e:EventLog xmlns:v8e="http://v8.1c.ru/eventLog"
              xmlns:xsd="http://www.w3.org/2001/XMLSchema"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <v8e:Event>
        <v8e:Level>Warning</v8e:Level>
        <v8e:Date>Event date</v8e:Date>
        <v8e:Application>Enterprise</v8e:Application>
        <v8e:ApplicationPresentation>1C:Enterprise</v8e:ApplicationPresentation>
        <v8e:EventName>event name</v8e:EventName>
        <v8e:EventPresentation>event presentation</v8e:EventPresentation>
        <v8e:UserID>00000000-0000-0000-0000-000000000001</v8e:UserID>
        <v8e:UserName>Johnson</v8e:UserName>
        <v8e:Computer>JohnsonPC</v8e:Computer>
        <v8e:MetadataName>Catalogs.Products</v8e:MetadataName>
        <v8e:MetadataPresentation>Catalogs Products</v8e:MetadataPresentation>
        <v8e:Comment>Comment</v8e:Comment>
        <v8e:Data xsi:type="xsd:string">Some data</v8e:Data>
        <v8e:DataPresentation>Data description</v8e:DataPresentation>
    </v8e:Event>
</v8e:EventLog>
6.13.5. Viewing event log archive
To view the archive entries of the event log:
- In Designer: Select Main menu ‑ File ‑ Open to open the standard file selection dialog box, and specify the Event log (*.lgd, *.lgf) file type. Select an archive file and click Open.
- In the standard event log viewer: Select More ‑ View from file to open the standard file selection dialog box, select an archive file, and click Open.
The automatic update settings and update interval settings are generated by the standard list setting mechanism for the tabular field.
6.13.6.1. General information
By using the menu item Administration ‑ Event log setting you can specify the detail level of event logging. In case of remote network access, you can only save your settings when no other users are logged on to the configuration (except the administrator).
When you create a new infobase, events of all levels of importance are logged and a new log file is created daily.
If the event log is in the sequential (.lgf) format, you can split the log into storage periods. The log records are stored in files, and each file contains records for a specific period. The period length is specified in the Split event log storage by periods field. A new file is opened at the beginning of each period specified in the settings.
If the event log is in SQLite format, all records are stored in a single file and period splitting is not available.
6.13.6.2. Reducing size of event log
A large number of records may accumulate in the event log eventually. To reduce the number of records, open the event log settings and click Reduce size. This opens the following window:
All records preceding the date specified in the Delete events older than field are deleted. Note that all records in the event log splitting period (see the description of the Split event log storage by periods field above) that includes the specified date will be deleted. For example, if the log is split by months and the deletion date is May 14, 2009, the log records up to May 31, 2009 (inclusive) will be deleted. Also remember that the event log splitting period may change over time, so the period to be deleted is determined not by the current split period, but by the period that was in effect on the date specified in the Delete events older than field.
If you need to save the event data before deleting the records, select the Write deleted events to file check box and specify the name of the archive file.
If the event log is stored in a sequential (.lgf) format, this allows you to reduce the log on a regular basis and still be able to view the log events that have already been deleted. To do this, if you reduce the log by writing the deleted entries to a file, select the Keep log storage split by period and merge with the previously saved log check box.
TIP. You can also use the /ReduceEventLogSize KeepSplitting command to keep splitting by periods when you start Designer in command line mode.
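A hypothetical batch run using this command might look as follows. The infobase path is a placeholder, and the exact spelling of the date and KeepSplitting parameters should be verified against the command-line reference for your platform version:

```shell
1cv8 DESIGNER /F "C:\infobases\trade" /ReduceEventLogSize 2009-05-14 KeepSplitting
```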
6.13.6.3. Changing event log format
You can change the format of the event log. Depending on the infobase operation mode, different types of access to the infobase are required to change the log format:
- File mode ‑ exclusive access to the infobase file is required.
- Client/server mode ‑ the format can be changed while users are logged on.
Additionally:
- When you change the log format, no log events are lost, not even those written during or after the format change.
- If an error occurs during event log conversion from sequential to SQLite format, the log retains the sequential format.
- If an error occurs during event log conversion from SQLite to sequential format, the log retains the SQLite format if the error occurs while writing the sequential log, or the sequential format if the error occurs while reading from the SQLite log.
The event log format change is an event written in the event log as Infobase. Update event log settings (_$InfoBase$_.EventLogSettingsUpdate). The comment indicates to which format the event log was converted.
To change the event log format, click the Change format hyperlink.
This will open the format change confirmation dialog box, which depends on the current event log format.
When converting from sequential format, the dialog box is as follows:
When converting from SQLite format, the dialog box is as follows:
During conversion, you can also enable the event log splitting by periods.
IMPORTANT! Changing the format of the event log takes a significant amount of time.
After you change the format from sequential to SQLite, the event log files in sequential format are saved. If necessary, you can delete these files using the operating system functionality. After you successfully change the log format from SQLite to sequential, the event log file in SQLite format is deleted.
6.14.1. General information
1C:Enterprise supports maintenance of a technological log that stores information from all 1C:Enterprise applications.
The technological log is intended as an assistance tool for the 1C technical support service to diagnose and detect errors in 1C:Enterprise applications, and to analyze the technological characteristics of application performance.
The components and properties of the technological log may change when the platform updates are released.
Since the technological log is a set of text files stored in different directories, it can be used by application developers to analyze operating modes of the 1C:Enterprise and applications.
The technological log can be stored on any computer where 1C:Enterprise is installed. The technological log settings are stored in a configuration file that describes:
- Directory where the technological log files are kept
- Types of data written to the technological log
- Retention time of technological log files
- Parameters of dump files generated on application crash
By default, no configuration file is available. In this case only the default technological log is kept (see below), and minimum crash dumps are written to the following directory on application crash:
%USERPROFILE%\Local Settings\Application Data\1C\1cv8\dumps
For Windows Vista and later, the directory is:
%LOCALAPPDATA%\1C\1cv8\dumps
If necessary, you can configure the technological log by using a separate configuration file. This file must be named logcfg.xml and be located in the 1C:Enterprise configuration files directory.
NOTE. To enable a technological log on Windows, make sure the user of the process writing to the technological log has full rights to access the technological log directory and to read the technological log owner directory.
Every 60 seconds, 1C:Enterprise automatically polls the configuration files directories for logcfg.xml file and, if found, analyzes its content. Thus, you can modify the parameters of the technological log immediately, without having to restart the 1C:Enterprise applications.
Volume of the technological log can be significant, so it is advisable to specify retention time for the log file storage. After the retention time expires, 1C:Enterprise will delete the outdated log files. If the directory in which these files were located becomes empty after deleting the outdated files, the directory is also deleted. This ensures that the entire technological log directory tree does not contain outdated files and directories.
IMPORTANT! If 1C:Enterprise is running on Linux or macOS, the OS controls the crash dump generation. In this case, the information about the fact of an emergency termination of the process and the number of the signal that caused the termination are placed in the technological log.
IMPORTANT! Please take note that the directory of the technological log is not intended to store any files unrelated to the technological log. Do not keep dumps in this directory, and do not keep 1C:Enterprise technological log in a directory with any unrelated files. If any unrelated files are found in the technological log directory, the directory is considered invalid and the log is not created.
A basic configuration file looks like this:
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <log location="c:\1c\logs" history="1">
        <event>
            <eq property="name" value="conn"/>
        </event>
    </log>
    <dump location="c:\1c\dumps" create="1" type="2"/>
</config>
This configuration file specifies that:
- All events of establishing or losing connection to the server are written to the technological log
- Technological log files are located in C:\1c\logs directory
- Technological log files are stored for 1 hour
- Dump files are placed in the C:\1c\dumps directory
- Dump files contain all available information (the entire process memory)
If no configuration file is available, the following default settings apply:
- The technological log is disabled.
- The default technological log is enabled.
- Minimum-size dumps are generated.
- Dumps are saved to the %USERPROFILE%\Local Settings\Application Data\1C\1cv8\dumps directory of the current user profile (or %LOCALAPPDATA%\1C\1cv8\dumps for Windows Vista and later).
The configuration files in Linux and macOS are almost identical to Windows, with the following exceptions:
- The configuration file must be located in the 1C:Enterprise configuration files directory.
- The directory where the technological log is generated must be writable for the user on whose behalf the application that writes to the technological log (server, client application, web server extension, etc.) is running.
The default technological log is intended to record events that occur in emergencies (as determined by 1C:Enterprise). A fixed event filter is created automatically for this log; this filter cannot be changed.
The default technological log has the following settings:
- The default directory with technological log files:
- Windows: %USERPROFILE%\Local Settings\Application Data\1C\1cv8\logs (or %LOCALAPPDATA%\1C\1cv8\logs for Windows Vista or later).
- Linux: ~/.1cv8/1C/1cv8/logs.
- macOS: ~/.1cv8/1C/1cv8/logs.
- Records are deleted from the default technological log after 24 hours.
- SYSTEM events with Error level are written to the default technological log.
These settings can be changed using the <defaultlog> element. The event generation rules for the default technological log are configured using the <system> element.
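As an illustration only, a logcfg.xml fragment overriding the default technological log might look like the following. The attribute set shown is an assumption based on the <log> element (location and retention time in hours); check the logcfg.xml reference for your platform version:

```xml
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <!-- assumed: defaultlog accepts the same location/history attributes as log -->
    <defaultlog location="c:\1c\deflog" history="48"/>
</config>
```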
6.14.4. Technological log structure
The technological log is a directory with subdirectories containing files with accumulated technological data. The log directory has the following structure:
<log directory>
    <OS process ID>
        <single process log files>
Each log file contains events for 1 hour and is named yymmddhh.log, where:
- yy‑ the last two digits of the year
- mm‑ month number
- dd‑ day number
- hh‑ hour number
Log files are in text format. Information about completion of each event is recorded on a new line.
16:08.8750-9060,CALL,0,process=rphost,p:processName=DebugControlCenter,t:clientID=221,t:applicationName=Debugger,t:computerName=COMP1,Interface=5cf29e71-ec34-4f01-b7d1-3529a3da6a21,Method=0
16:08.8911-1,DBPOSTGRS,2,process=rphost,p:processName=Database,t:clientID=216,t:applicationName=1CV8,t:computerName=COMP1,t:connectID=125,Usr=User2,Trans=1,dbpid=58152,Sql="SELECT 1::INT8 FROM PG_CLASS WHERE pg_catalog.pg_table_is_visible(OID) AND RELKIND='r' AND RELNAME='params' LIMIT 1",Result=PGRES_TUPLES_OK
16:08.8913-1,DBPOSTGRS,2,process=rphost,p:processName=Database,t:clientID=216,t:applicationName=1CV8,t:computerName=COMP1,t:connectID=125,Usr=User2,Trans=1,dbpid=58152,Sql="SELECT Creation,Modified,Attributes,DataSize,BinaryData FROM Params WHERE FileName = 'ibparams.inf'",Result=PGRES_TUPLES_OK
The event end string is recorded in the format mm:ss.tttttt-d,<name>,<level>,<key properties>, where:
- mm‑ minute in the current hour
- ss‑ second in the current minute
- tttttt‑ microsecond in the current second
- d‑ event duration in microseconds
- <name>‑ event name
- <level>‑ event level in the current thread stack
- <key properties>‑ <key property>, <key property>, ...
- <key property>‑ <name>=<value>, where <name> and <value> are arbitrary text. If the text contains end-of-line or comma characters, it is enclosed in quotation marks or apostrophes (whichever character is less common in the string), and quotation marks or apostrophes inside the text are doubled.
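As an illustration of this format (not a 1C:Enterprise tool), a small shell sketch that extracts the event name and duration from an event end string using plain parameter expansion:

```shell
# An event end string from the technological log (sample from above)
line='16:08.8750-9060,CALL,0,process=rphost,t:clientID=221'

stamp=${line%%,*}        # "16:08.8750-9060" - time stamp and duration
duration=${stamp##*-}    # "9060" - event duration in microseconds
rest=${line#*,}
name=${rest%%,*}         # "CALL" - event name

echo "name=$name duration=$duration"
```

Note that a full parser would also have to honor quoted property values, since quoted text may itself contain commas.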
6.14.5. Setting up memory dump generation
6.14.5.1. On Windows
This section contains an example of the technological log configuration file (logcfg.xml) that enables crash dump creation.
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <dump location="C:\Program Files\1cv8\dumps" create="1" type="3"/>
</config>
The memory dumps will be saved to the C:\Program Files\1cv8\dumps directory and will include the entire process memory contents plus an additional data segment.
The user on whose behalf the client application is running or the server must have full rights to the directories:
- Temporary files directory
- Technological log directory
- Dumps directory
The user on whose behalf the client application is running or the server must have the right to read the directories:
- Configuration files directory
- Directory that owns the dumps directory
If the logcfg.xml file is configured to collect query plans, the file must be located in the configuration file directory of the corresponding application:
- For client/server mode: in the directory of configuration files available to the 1C:Enterprise server.
- For file mode with a direct connection: in the directory of configuration files available to the required version of the client application.
- For file mode with a connection via a web server: in the directory of configuration files available to the web server extension serving this infobase.
6.14.5.2. On Linux
This section describes the steps to configure Linux to enable generation of crash memory dumps.
NOTE. The recommendations in this section are fully applicable to Fedora Core 4 and similar versions. For other Linux versions, the name and syntax of the commands described here may be different. For details, refer to the help system of your Linux version.
By default, crash dumps are disabled. Suppliers of Linux distributions recommend enabling dump creation only on computers intended for development, not on production computers.
Generation of crash dumps is configured for all processes executed on behalf of a specific user. In order to enable automatic generation of dumps, add the following lines to the /etc/security/limits.conf file:
where <username> is the name of the user on whose behalf the 1C:Enterprise application is running.
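For example, the lines might look like this (a sketch: usr1cv8 is a hypothetical user name, and the exact syntax may vary between distributions, so consult limits.conf documentation for your system):

```
# /etc/security/limits.conf: allow unlimited core dump size for this user
usr1cv8    soft    core    unlimited
usr1cv8    hard    core    unlimited
```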
To ensure clear understanding which crash dump is related to which process, and to keep the dumps in a specific directory, it is recommended to set a dump name generation template. The template can be specified for a single session, or permanently.
IMPORTANT! Applying the settings described in this section affects all processes of all OS users. This means that crash dumps of other users (if enabled) will use the selected name template and will be saved to the specified path.
IMPORTANT! The steps described below must be performed on behalf of root.
To set a name template and location for the crash dumps, use the command:
sysctl -w kernel.core_pattern=/tmp/core.%e.%p
This setting will be valid until the computer is restarted. In the above example, the dumps will be placed in the /tmp directory and the names of the dumps will be generated from:
- core prefix
- Name of the executable file
- ID of the process that initiated crash dump generation
To apply the name template and the path on a permanent basis, add the same kernel.core_pattern line to the /etc/sysctl.conf file.
In order for the changes to take effect, run the sysctl command with the -p option.
The path specified in the settings must be allowed for writing for the users on whose behalf the applications that generate crash dumps are running.
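The permanent setup described above can be sketched as follows (the template value repeats the earlier example and is illustrative):

```
# /etc/sysctl.conf: persistent crash dump name template
kernel.core_pattern=/tmp/core.%e.%p
```

After editing the file, run sysctl -p to reload the settings without restarting the computer.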
6.14.5.3. On macOS
6.14.5.3.1. General information
This section describes the steps to configure macOS to enable generation of crash memory dumps.
By default, crash dumps are disabled.
Generation of crash dumps is configured for all processes. To enable automatic generation of dumps, run the following command on behalf of a user with administrative rights:
sudo launchctl limit core unlimited
The command is valid until the computer is restarted.
6.14.5.3.3. Determining dump names and location
The crash dumps are placed in the /cores/ directory. The files are named core.<pid>, where <pid> is the identifier of the operating system process that terminated abnormally.
6.14.6. Sample technology log files
In the examples below, it is assumed that 1C:Enterprise is installed in the default directory C:\Program Files\1cv8.
Please take note that volume of the technological log can be significant. Therefore, you need to ensure there is enough free space on the disk where the technological log files will be stored.
Below are some examples of logcfg.xml files containing some of the most common technological log configurations.
6.14.6.1. No technological log
If the logcfg.xml file is not in the 1C:Enterprise configuration files directory, the technological log is not created. If the logcfg.xml file is required to configure dump generation properly, it should contain no log elements. The following example is for generating a complete dump on application crash. Dumps are placed in C:\v8\dumps directory.
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <dump location="C:\v8\dumps" create="1" type="3"/>
</config>
6.14.6.2. Full technological log
The configuration file below is for writing all events, together with all properties, to the technological log. The log will be retained for 1 week (168 hours). The volume of the technological log files will be very large; however, it can be useful for analyzing complicated emergencies. This configuration is recommended for use during the testing phase and during error investigation.
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <log location="C:\v8\logs" history="168">
        <event>
            <ne property="name" value=""/>
        </event>
        <property name="all"/>
    </log>
</config>
6.14.6.3. Database calls
The following configuration file specifies that the technological log will contain only 1C:Enterprise database calls and error information. The volume of the technological log files is less than for the full technological log, but it still can be significant.
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <log location="C:\v8\logs" history="168">
        <event>
            <eq property="name" value="dbmssql"/>
        </event>
        <event>
            <eq property="name" value="dbpostgrs"/>
        </event>
        <event>
            <eq property="name" value="db2"/>
        </event>
        <event>
            <eq property="name" value="dboracle"/>
        </event>
        <event>
            <eq property="name" value="excp"/>
        </event>
        <property name="all"/>
    </log>
</config>
6.14.6.4. Administrator actions and errors
This configuration file creates a compact technological log that contains information about starting and closing applications, establishing and closing connections with the 1C:Enterprise server cluster, cluster administrator actions, and 1C:Enterprise errors. This detail level is generally sufficient for investigating errors both in the configuration and in the 1C:Enterprise platform.
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <log location="C:\v8\logs" history="168">
        <event>
            <eq property="name" value="admin"/>
        </event>
        <event>
            <eq property="name" value="conn"/>
        </event>
        <event>
            <eq property="name" value="excp"/>
        </event>
        <event>
            <eq property="name" value="proc"/>
        </event>
        <event>
            <eq property="name" value="qerr"/>
        </event>
        <event>
            <eq property="name" value="scom"/>
        </event>
        <property name="all"/>
    </log>
</config>
6.14.6.5. Errors and long operations
This configuration file generates a technological log that includes all information from the previous example, and information on all operations that take longer than 10 seconds. This can be useful for detecting user actions that required a long time to complete, for optimization purposes. The duration of events is expressed in hundreds of microseconds.
<?xml version="1.0" encoding="UTF-8"?>
<config xmlns="http://v8.1c.ru/v8/tech-log">
    <dump create="false"/>
    <log location="C:\v8\logs" history="168">
        <event>
            <eq property="name" value="admin"/>
            <gt property="duration" value="100000"/>
        </event>
        <event>
            <eq property="name" value="conn"/>
            <gt property="duration" value="100000"/>
        </event>
        <event>
            <eq property="name" value="excp"/>
            <gt property="duration" value="100000"/>
        </event>
        <event>
            <eq property="name" value="proc"/>
            <gt property="duration" value="100000"/>
        </event>
        <event>
            <eq property="name" value="qerr"/>
            <gt property="duration" value="100000"/>
        </event>
        <event>
            <eq property="name" value="scom"/>
            <gt property="duration" value="100000"/>
        </event>
        <property name="all"/>
    </log>
</config>
6.15. Referential integrity monitoring
6.15.1. Basic concepts
A large portion of 1C:Enterprise data is stored in reference form. For example, when adding documents, many attributes of a document can be filled in by choosing a value from a value list or a document from a document list. Such attributes are references to the items of the respective lists.
Using references allows you to avoid storing multiple copies of the same data in different places. For example, suppose that after a number of documents have been entered and printed, it turns out that the name of the counterparty organization specified in these documents is incorrect. Since the counterparty was entered into the documents by selecting it from the counterparty list, it is enough to edit the name in the list: the changed name is reflected in the documents automatically, and you only need to rebuild the printed forms.
However, if you delete the counterparty organization from the list, all documents that referenced it will contain so-called "unresolved references", that is, references to a non-existent object.
To avoid such situations, 1C:Enterprise includes a mechanism for referential integrity monitoring, which will be discussed further in this section.
The referential integrity monitoring mechanism implements a two-stage procedure for deleting data objects referenced in lists or documents.
During the first stage, the user marks the objects for deletion. At the same time, an object marked for deletion is practically no different in use from an ordinary object.
At the second stage, the system administrator, or another user who is granted the corresponding rights (the right to interactively delete marked objects for the respective types of lists and documents), performs a special procedure for deleting marked objects, which is implemented as the standard function Delete marked objects. In the course of this procedure, all references to the marked objects are fully analyzed, and only those objects that are either not referenced at all or referenced only by objects that are also marked for deletion can be deleted.
In fact, deleting objects marked for deletion is a routine maintenance procedure. It is recommended to perform it on a regular basis, because marked objects accumulate over time.
6.15.2. Enabling referential integrity monitoring
1C:Enterprise allows you to delete unnecessary or outdated information in two modes:
- Direct deletion of objects: the use of the deleted object in other database objects is not analyzed.
- Referential integrity monitoring: objects are first marked for deletion, and then the presence of references to these objects in other objects is checked.
IMPORTANT! Deletion rights (direct deletion or referential integrity monitoring) are set up for each role assigned to users, for each type of object (lists and documents) at the application design stage.
If the user works in direct deletion mode, additional responsibility is put on the user and on the system administrator that assigns user rights and determines system response to unresolved links. Referential integrity monitoring may be disabled, for example, for application debugging purposes. If referential integrity monitoring is not used, the objects are deleted directly (without marking them for deletion), and unresolved references can occur.
The most radical method to enforce referential integrity monitoring is to disable the rights to directly delete objects for the entire configuration. This ensures that users cannot directly delete any objects in the application. The users will only be able to mark objects for deletion.
Rights for direct deletion, marking for deletion, and clearing the deletion mark are granted separately for each type of configuration object. If the InteractiveDeletion right is granted in a role, users with this role can directly delete objects of this type. Rights are granted during the application development stage.
The right to mark objects for deletion and clear deletion mark is granted in a similar way.
Note that only disabling the InteractiveDeletion right throughout the configuration ensures that all users use the referential integrity monitoring mechanism consistently.
IMPORTANT! Note that it is also possible to directly delete objects using the 1C:Enterprise language. Therefore, parts of a configuration can perform direct deletion regardless of the referential integrity monitoring mechanism. In this case, the responsibility for data integrity lies with the developers of a specific system mechanism.
6.15.3. Direct deletion of objects
If the referential integrity monitoring mode is not used (the InteractiveDeletion right is granted to a specific user for a specific type of configuration objects), the user can use the Delete directly menu item (Shift + Del, or the corresponding toolbar button) to delete objects from lists or document journals. The objects are deleted without checking whether they are referenced by other objects.
6.15.4. Setting and clearing deletion marks
When the referential integrity monitoring mechanism is used, the Mark for deletion/Unmark for deletion item is available in the More (All actions) menu of lists and document journals. Select this menu item to mark an object for deletion. Objects marked for deletion are displayed with a crossed-out icon on the left.
IMPORTANT! When a posted document is marked for deletion, it becomes unposted.
Selecting More ‑ Mark for deletion/Unmark for deletion (All actions ‑ Mark for deletion/Unmark for deletion) marks an object for deletion; for an object that is already marked, it clears the deletion mark.
IMPORTANT! When removing deletion mark from a document, it does not become posted. You need to post the document manually.
Ability to set or clear deletion marks is also regulated by access rights of a user (separate rights are required for setting and clearing deletion marks).
6.15.5. Using objects marked for deletion
Mostly objects marked for deletion are used in the same way as regular objects. They are displayed in lists, they can be searched for, and so on. Objects marked for deletion can be opened and modified normally.
Documents marked for deletion cannot be posted. If you attempt to post a document marked for deletion, an error message is displayed and the document is not posted.
6.16. Standard functions
Standard functions are a set of system tools designed to perform various service operations that may be required when administering an infobase.
Standard functions are only accessible in 1C:Enterprise mode. To gain access to them, enable the corresponding option in the settings window (Service ‑ Parameters ‑ Display "All functions" command).
NOTE. The standard function windows do not support navigation links and cannot be added to the user's favorites list.
Below is a complete list of standard functions with brief descriptions:
- Active users. Displays a list of users currently logged on to 1C:Enterprise. Availability of this function is determined by the ActiveUsers right.
- Event log. Views the event log. Availability of this function is determined by the EventLog right.
- Find references to objects. Finds objects with references to a selected object.
- Posting documents. Posts and re-posts documents for the selected period, or restores the posting sequences in the configuration.
- Delete marked objects. Deletes objects marked for deletion.
- Totals management. Performs scheduled operations with registers.
- Full-text search management. Manages the full-text search feature.
- Configuration extensions management. Manages configuration extensions.
- Collaboration system management. Registers infobases with 1C:Dialog.
- Database copy management. Designed for creating copies of the database and defining the structure of the created copies.
To run a standard function, open the All functions window, select the Standard branch, and select a standard function from the list.
The following sections describe each standard function in detail.
6.16.2. Active users list
This function displays a list of users currently working with the infobase.
Information about the user that opened the window (established the current connection) is displayed in bold.
The lower part of the window displays the total number of users working with this information base.
- Open event log ‑ opens the event log.
- User activity ‑ opens the event log filtered by the selected user. This action can also be performed by clicking the hyperlink with the user name (the User column).
6.16.3. Event log
6.16.3.1. General information
To perform administrative duties, it is often required to find out which events occurred at a particular point in time or what actions a particular user performed.
Event log is intended for these purposes. Events of all types are stored in it. The administrator can get the history of user activities from the event log.
1C:Enterprise logs the major actions by users who modify infobases, perform routine operations, log on, log off, etc.
The event log is viewed in the event log form.
Each event is logged on a separate line. The icon in the Date, time column indicates the event type (see fig. 59). To view an event in a separate window, select More ‑ View current event in separate window (All actions ‑ View current event in separate window).
When working with the system, the following types of events may occur:
If an event is associated with data, the More ‑ Open data for viewing option becomes available (All actions ‑ Open data for viewing). It allows you to view the data associated with the event.
An event can be either transactional or independent (this is controlled from the 1C:Enterprise script); by default, events are recorded as independent.
Note that there is a set of predefined events that are generated at the system level. For such events, the transaction mode is also set at the system level: data change and document posting events are transactional, while session start and end events are independent. Below is a complete list of predefined events.
- Authentication error
- OpenID provider:
- Configuration change
- Modify database configuration
- Update master node
- Change event log settings
- Infobase parameters change
- Modify regional settings
- Delete infobase data
- Run background database configuration update
- Complete background database configuration update
- Cancel background database configuration update
- Pause background database configuration update
- Continue background database configuration update
- Install predefined data update
- Update predefined data
- Reducing size of event log
- Cannot truncate the event log
- Cannot modify the event log parameters
- Cannot modify the infobase parameters
- Start dumping to file
- End dumping to file
- Cannot dump to file
- Start restoring from file
- End restoring from file
- Cannot restore from file
- Error of changing the database configuration
- Cannot modify the database configuration extension
- Verify and repair:
- Background job:
- Successful completion
- Force termination
- Runtime error
- Access denied
- Access granted
- Adding error
- Modifying error
- Deleting error
- Authentication lock
- Authentication unlock
- Authentication unblock error
- Runtime error
- Edit maximum period of calculated totals
- Edit minimum period of calculated totals
- Undo posting
- Change standard OData interface content
- Set predefined data initialization
- Initialize predefined data
- Initialize predefined data, data not found
- Add predefined data
- Edit predefined data
- Delete predefined data
When a transaction is started, the transaction start event, Transaction.Start, is written to the log and assigned a transaction ID. When a transaction is completed and committed, Transaction.Commit event is written to the log and transaction status for Transaction.Start record is set to Committed. When a transaction is canceled, Transaction.Cancel event is written to the log and transaction status for Transaction.Start record is set to Canceled. When a transaction is terminated, Not completed transaction status remains.
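The status lifecycle described above can be sketched as a small model (a simplified Python illustration, not 1C:Enterprise code; only the event names from the text are assumed):

```python
# Final status written back to the Transaction.Start record,
# keyed by the event that finishes the transaction.
TRANSACTION_FINISH = {
    "Transaction.Commit": "Committed",
    "Transaction.Cancel": "Canceled",
}

def transaction_statuses(events):
    """events: iterable of (event_name, transaction_id) pairs.
    Returns the final status recorded for each started transaction;
    a transaction that is never finished keeps the Not completed status."""
    status = {}
    for name, tid in events:
        if name == "Transaction.Start":
            status[tid] = "Not completed"
        elif name in TRANSACTION_FINISH:
            status[tid] = TRANSACTION_FINISH[name]
    return status
```
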
IMPORTANT! When you open the event log, the default event filter (excluding transaction-related events) is set.
Log records corresponding to canceled transactions and transactions with an undefined status are displayed in a light-grey font.
In addition to viewing the event log of the current infobase, you can view a log fragment previously saved in the LGD or LGF format. To do this, use More ‑ View from file (All actions ‑ View from file).
6.16.3.2. Interval setting
In the settings, specify the period and click OK.
You can also open period settings by double clicking the contents of the Date, Time column.
You can set filters by period, user, event, computer name, connection number, event importance level, or comment. When setting a filter by period, note the following:
- The filter is set for a specific time.
- When manually editing the starting or ending date, you need to specify the time as well.
- When choosing the starting or ending date from the calendar, the time is set automatically: for the from field, the time is set to 0:00:00; for the to field, it is set to 23:59:59.
- When selecting a period using the ... button, the time is set to the beginning of the first day of the period and to the end of the last day of the period.
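As an illustration, the default boundary times applied when picking dates from the calendar can be modeled like this (a Python sketch; it assumes the end-of-day boundary is the last second of the day):

```python
from datetime import datetime, date, time

def period_bounds(first_day: date, last_day: date):
    """Default times applied when dates are chosen from the calendar:
    the 'from' boundary gets 0:00:00, the 'to' boundary 23:59:59."""
    start = datetime.combine(first_day, time(0, 0, 0))
    end = datetime.combine(last_day, time(23, 59, 59))
    return start, end
```
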
If multiple types of applications were used, you can indicate in the application list which applications' events are filtered.
The event list indicates which types of events are filtered.
The Data group contains data for event filtering. Information on events is presented in the Metadata, Data, and Data presentation columns.
The Metadata field contains a list of metadata presented in the configuration. Select check boxes for the metadata items to use in filters.
In the Data field, an infobase object to use for event filters is selected.
The Data presentation field contains the string presentation.
In the Other group, additional selection parameters are indicated:
- Transaction status ‑ select the transaction statuses.
- Transaction ‑ indicates a specific transaction.
- Sessions ‑ session numbers, separated by commas.
- Working servers ‑ the central servers of the clusters (for the client/server mode).
- Main IP ports ‑ the IP ports of the cluster managers (for the client/server mode).
- Auxiliary IP ports ‑ the auxiliary IP ports of the cluster managers (for the client/server mode).
Click OK to set the filters.
The filter preview is displayed to the right of the Filter button, preceded by the Disable: hyperlink. Clicking this hyperlink disables the filter.
6.16.4. Delete marked objects
Deletion of marked objects is a multi-stage procedure, and the stages are strictly sequential. Before each stage, you can interrupt the procedure by closing the window. The following describes in detail the actions of the system and the user at each stage.
6.16.4.1. Selecting deletion option
At the first stage, you choose the deletion option: full or selective deletion.
6.16.4.2. Full deletion
If you choose Full deletion, all marked objects will be deleted. Referential integrity monitoring is enabled for deletion. Deletion may fail for some of the objects, since some of them can be referenced by other objects.
The list of objects that were not deleted (if any) is displayed after the deletion procedure is completed.
6.16.4.3. Selective deletion
If you choose Selective deletion, a list of infobase objects marked for deletion is generated and displayed.
Select check boxes for any objects you want to delete.
If a check box is selected for an object in the list, this means that the object will be deleted.
Selecting a check box in this list does not set a deletion mark for an object. Likewise, clearing a check box in this list does not remove a deletion mark from an object.
Double-click on an object to open the form of this object. This allows you to view objects and decide whether they need to be deleted.
At this stage, you can switch to other windows and modes, or make any corrections, without closing the object deletion list.
To delete the objects, click Delete. Referential integrity monitoring is enabled for deletion. Deletion may fail for some of the objects, since some of them can be referenced by other objects.
If the infobase contains references to objects selected in the deletion list, a warning is displayed: Cannot delete objects: <number>, because they are referenced by other infobase objects. These objects will not be deleted.
Click Next to display a list of objects that were not deleted, together with the detected references. References are displayed for the selected object.
When you select a reference from the list, you can open it for viewing and editing. This allows you to make changes to the object (apply another reference) so that the marked object can be deleted.
To exit the marked object deletion mode, click Close.
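The referential-integrity check behind this procedure can be illustrated with a simplified, single-pass Python sketch (hypothetical data structures, not the platform's actual implementation):

```python
def delete_marked(candidates, all_objects, refs):
    """candidates: objects marked for deletion and selected for removal.
    refs: dict mapping each object to the set of objects it references.
    An object is deleted only if no surviving object still references it."""
    candidates = set(candidates)
    not_deleted = set()
    for obj in candidates:
        referrers = {o for o in all_objects
                     if o not in candidates and obj in refs.get(o, set())}
        if referrers:           # still referenced -> cannot be deleted
            not_deleted.add(obj)
    return candidates - not_deleted, not_deleted
```

Note that references coming from other objects that are themselves being deleted do not block deletion in this sketch, which matches the idea that only surviving objects matter.
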
6.16.5. Find references to objects
This mode allows the system administrator to select an object and get a list of references to it from other infobase objects.
Select an object in the Object field and click Find references. All infobase objects are searched for references to the specified object (the scope is determined by the applied solution). After the search completes, you can analyze the found references. To open a reference form, click Open (if allowed) or click the hyperlink. To search for references to an item of the Found references list, open the context menu for the selected line and select the Find references command. This opens another reference search window and starts the search for references to the object.
At this stage, you can switch to other windows and modes, without closing the search window.
6.16.6. Document posting
This service performs batch document posting or reposting, or restoration of sequences.
6.16.6.1. Document posting
Document posting function is used to post documents of selected types within the specified period.
In the upper part of the window, in the Period field, a document posting period is specified. To set the posting period, select a standard period or click Custom period and set the period manually. If you clear both boundaries of the custom period, the posting will be performed without any period restrictions (indicated by a message to the right of the period selection field).
The document posting window contains a list of the types of documents that can be posted. The list of documents includes only those types of documents for which the current user has the InteractivePosting right.
The list of documents to be posted is edited by double-clicking a document or by clicking the Add >, Add all >>, < Remove, and << Remove all buttons (multiple selection is available).
Above the list of document types there are check boxes determining which documents will be posted: posted documents (will be reposted), unposted documents, or both.
After setting all the necessary parameters, click Post. Before posting the document, the date of the first and last posted document is determined, based on the posting mode and the list of posted documents.
When posting a batch of documents, the documents marked for deletion are not posted, even if they satisfy the posting conditions. If an error occurs during document posting, the system behavior depends on the value of the Stop posting if an error occurs check box. If the check box is selected, posting will be aborted. If the check box is cleared (default value), posting will proceed and the documents that were posted with errors will be saved.
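The posting loop with the Stop posting if an error occurs option can be sketched as follows (a hypothetical Python model of the behavior described above; the document fields are assumptions):

```python
def post_documents(documents, post_fn, stop_on_error=False):
    """Posts documents in chronological order, skipping documents
    marked for deletion. Returns (posted_count, error_list)."""
    posted, errors = 0, []
    for doc in sorted(documents, key=lambda d: d["date"]):
        if doc.get("deletion_mark"):
            continue            # marked documents are never posted
        try:
            post_fn(doc)
            posted += 1
        except Exception as exc:
            errors.append((doc["name"], exc))
            if stop_on_error:   # abort on the first error
                break
    return posted, errors
```
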
Once the posting is complete, the number of posted documents is displayed. If any errors were detected during the procedure, a form containing a list of documents with errors is opened.
If the error list contains nothing but Document posting error message, this means that an error occurred during document posting but the document did not generate any error messages.
Double-clicking the line with the document name opens it for viewing.
During the posting, the status pane displays information about the actual document posting period, the current posting date, and the total number of posted documents.
You can abort the document posting procedure by pressing Ctrl + Break.
6.16.6.2. Restoring sequences
All documents in 1C:Enterprise form a single chronological sequence. Each document has a date and time. Even if two documents have the same date and the same time, they are still arranged in a sequence, determined by the order of their entry into the system. Document date and time are subject to change. Thus, regardless of the order of entry, documents can be arranged in a sequence that reflects the actual order of events that occurred in the economic life of the company.
During posting, a 1C:Enterprise document is registered in several accounting systems supported by 1C:Enterprise.
The algorithm of document posting, as a rule, reflects data recorded in the document attributes. However, in some situations, the document posting algorithm also analyzes and uses the current totals. For example, if a document writes off goods or materials at average cost, the posting algorithm will analyze the balances of goods (materials) at the time of the document in order to determine the write-off amount. If the write-off is performed using LIFO or FIFO method, the posting algorithm will analyze the existing balances of goods (materials) by lots at the time (position) of the document.
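For illustration, a FIFO write-off that analyzes lot balances might look like this (a generic Python sketch of the technique, not the platform's posting algorithm):

```python
def fifo_write_off(lots, qty):
    """lots: list of [quantity, unit_cost] pairs in receipt order.
    Consumes the oldest lots first and returns the write-off cost;
    the lots list is updated in place to reflect remaining balances."""
    cost, remaining = 0.0, qty
    while remaining > 0 and lots:
        lot_qty, unit_cost = lots[0]
        take = min(lot_qty, remaining)
        cost += take * unit_cost
        remaining -= take
        if take == lot_qty:
            lots.pop(0)         # the lot is fully consumed
        else:
            lots[0][0] = lot_qty - take
    if remaining > 0:
        raise ValueError("insufficient balance to write off")
    return cost
```

Because the result depends on the lot balances at the document's position, any earlier correction changes the input to this calculation, which is exactly why subsequent documents must be reposted.
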
Obviously, documents that use these totals (for example, totals by batches) should be posted in strict sequential order. However, in practice, it is often necessary to back-enter or back-correct the documents due to data input errors and late receipt of documents. Back-entering or back-correcting a document invalidates all register records generated by the documents following this one. For example, if it became clear that the quantity of goods was incorrectly indicated in one of the incoming invoices at the beginning of the month, then in all subsequent expenditure invoices writing off existing lots, it is necessary to re-analyze the balances considering the changes made and re-record movements of the registers. Therefore, all documents that analyze the balances following the corrected document must be reposted.
For automatic control of document reposting, document sequences are used. Each sequence of entered documents provides control over the posting order of the documents of the specified types. Thus, there may be several independent sequences in 1C:Enterprise.
The sequence recovery mode allows you to automatically repost all documents related to the sequence, from the current position of the sequence border to the specified moment. The current position of the sequence boundary is determined by the date from which the document posting sequence must be restored.
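Conceptually, sequence restoration reposts everything from the boundary onward, in chronological order, moving the boundary as it goes. A simplified Python sketch (with hypothetical document fields):

```python
def restore_sequence(documents, boundary, repost_fn):
    """Reposts all sequence documents dated at or after the current
    boundary position, in chronological order, and returns the new
    boundary (the date of the last reposted document)."""
    for doc in sorted((d for d in documents if d["date"] >= boundary),
                      key=lambda d: d["date"]):
        repost_fn(doc)
        boundary = doc["date"]
    return boundary
```
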
The table displays a list of existing sequences for which the current user has the Edit right. In the Border column, the current position of the sequence boundary is displayed for each sequence. To restore all sequences, click Restore all.
To restore one or several sequences, select them and click Restore. All documents related to the selected sequences will be reposted, starting from the earliest boundary of the selected sequences and up to the specified position. If multiple sequences are selected, they will be restored in the listed order.
The Stop restoring sequences if an error occurs check box determines the system behavior if an error is detected during the sequence recovery. If the check box is cleared (default value), sequence recovery will proceed regardless of the errors. Otherwise, the process will be stopped if any error is detected.
You can abort the sequence recovery procedure by pressing Ctrl + Break.
6.16.7. Totals management
6.16.7.1. General information
This service performs routine operations with the registers available in the application. The list of operations includes enabling and disabling totals, recalculating totals, working with aggregates, and more.
All operations with totals are divided into two modes:
- Frequently used features (opened by default) ‑ this mode provides simple means for performing the most common operations with register totals.
- All available features ‑ this mode provides full access to the totals and aggregates management capabilities of the applied solution.
The list includes only those accumulation and accounting registers for which the current user has the totals management right and for which all separators they belong to are used in the current session (if the applied solution has separators). Both modes work with this list.
To switch between the modes, use the hyperlink in the lower right part of the window. The current mode is memorized so that you automatically switch to this mode the next time you use the totals management.
Both modes are described in more detail below.
6.16.7.2. Frequently used features
Frequently used features include setting a period of calculated totals, enabling totals, restructuring and filling of aggregates, and obtaining optimal aggregates.
6.16.7.2.1. Set the period of calculated totals
This operation sets the period of calculated totals for all accumulation and accounting registers that have totals enabled. For accumulation registers, the period is set to the end date of the previous month, since the most typical scenario of using an accumulation register is getting current balances. For accounting registers, the period is set to the end date of the current month, since the most typical scenario of using an accounting register is getting turnovers for the current month.
TIP. You can run this operation at the beginning of each month to improve the performance of the registers.
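The two period rules above amount to simple date arithmetic, which can be sketched in Python (an illustration of the rule, not platform code):

```python
from datetime import date, timedelta
import calendar

def accumulation_totals_period(today: date) -> date:
    """Accumulation registers: end date of the previous month."""
    return today.replace(day=1) - timedelta(days=1)

def accounting_totals_period(today: date) -> date:
    """Accounting registers: end date of the current month."""
    last_day = calendar.monthrange(today.year, today.month)[1]
    return today.replace(day=last_day)
```
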
6.16.7.2.2. Enable totals usage
This operation enables using totals for all registers that have totals disabled, except for current accumulation registers that are in aggregate mode.
TIP. This may be necessary, for example, when a bulk register data editing operation, which disabled totals usage to speed up processing, was aborted.
6.16.7.2.3. Rebuild and fill
This operation rebuilds and fills aggregates for all current accumulation registers that have the aggregate mode enabled and aggregate usage allowed.
TIP. This operation can be scheduled when using aggregates.
6.16.7.2.4. Get optimal aggregates
Calculates optimal aggregates for all current accumulation registers that have aggregates specified in Designer.
TIP. You can run this operation both before enabling the aggregates usage and during 1C:Enterprise operation.
6.16.7.3. All available features
All features mode allows you to get full access to all tools for working with totals (Totals tab) and aggregates (Aggregates tab) of accumulation registers and accounting registers.
6.16.7.3.1. Operations with totals
The Totals tab shows the list of accumulation, accounting and information registers (that have totals enabled) available to the user.
The list shows the current state of the registers. The check boxes indicate the modes that are currently enabled for each register:
- Totals ‑ the state of totals usage.
- Current totals ‑ the state of current totals usage.
- Minimum totals period ‑ the minimum stored totals period of an accumulation register.
- Totals period ‑ the maximum stored totals period of an accumulation register.
- Totals splitting ‑ the state of totals splitting.
- Aggregates / Totals ‑ the current mode (aggregates or totals) for current accumulation registers that have aggregates specified in Designer.
If a mode is unavailable, the corresponding check box is grayed out. For example, gray in the Totals splitting column means that totals splitting for the selected register is disabled in Designer.
You can enable or disable modes or calculate totals here.
Multiple selection is available for all commands. Each command you run will be executed for all selected registers. If an error is detected during the execution of the command, the system behavior depends on the state of the Stop data processor on the first error check box. If the check box is cleared (default value), the data processor will continue to run regardless of the error and all selected registers will be processed; otherwise, the data processor will be aborted.
If a register supports the aggregate mode, double-clicking the contents of the Aggregates / Totals column opens the Aggregates tab and positions the cursor on the register of the same name.
6.16.7.3.2. Operations with aggregates
The tools on the Aggregates tab are intended for managing aggregates of current accumulation registers.
The top list contains current accumulation registers of the current configuration that have aggregates specified in Designer. The bottom list (Register aggregates... :) contains aggregates specified for the register, aggregate usage indicators, and statistics on aggregates.
You can switch the register usage mode, enable or disable aggregates usage, or perform basic operations with aggregates.
When calculating the optimal aggregates, a directory will be requested in which the file with the list of optimal aggregates for the selected register will be placed. The register will be marked in bold if it is recommended to replace the existing aggregates with a calculated list of optimal aggregates.
When saving the optimal aggregates, the file name is generated from the register name: <RegisterName>.xml. For example, for the Sales register in fig. 71, the optimal aggregates file name will be Sales.xml.
6.16.8. Full-text search management
1C:Enterprise supports full-text data search capabilities. Forms for entering the search conditions are designed during configuration development.
To enable or disable the full-text search, use the Full-text search toggle. This operation requires exclusive access to the infobase: you cannot enable or disable the full-text search while any other users are logged on to the infobase.
The search index is generated after clicking Update index. To optimize the index generation procedure, main and additional indexes are used. Additional index is generated when users enter data. It contains information on the data entered after the last update of the main index.
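The main/additional index split can be illustrated with a toy in-memory model (a hypothetical Python sketch; the real index is file-based and far more sophisticated):

```python
class FullTextIndexSketch:
    """Toy model: a large main index plus a small additional (delta)
    index for data entered after the last main-index update."""

    def __init__(self):
        self.main = {}    # word -> set of object ids
        self.delta = {}

    def index_object(self, obj_id, text):
        # New data always lands in the small additional index first.
        for word in text.lower().split():
            self.delta.setdefault(word, set()).add(obj_id)

    def search(self, word):
        # A search consults both indexes.
        word = word.lower()
        return self.main.get(word, set()) | self.delta.get(word, set())

    def update_index(self):
        # "Update index": fold the additional index into the main one.
        for word, ids in self.delta.items():
            self.main.setdefault(word, set()).update(ids)
        self.delta.clear()
```

Keeping the additional index small is what makes incremental updates cheap: only the delta has to be rebuilt as users enter data.
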
Index clearing (click Clear index to start) is used to delete an index, for example, to free the disk space occupied by the index files. You may need to rebuild the index after clearing.
These buttons are only available for users with the AdministrativeFunctions right.
The last indexing date is indicated in the Index created on field.
Clicking the Additional parameters button opens the management of the following parameters:
- Maximum size of the indexed data ‑ sets the maximum size of data (indexed by the full-text search) stored in a single attribute of a configuration object. The default value depends on the configuration compatibility mode:
- compatibility mode 8.3.7 or earlier ‑ 0, meaning there is no size limit for indexed objects.
- compatibility mode 8.3.8 or later ‑ 1 MB, meaning the full-text search will not index objects larger than 1 megabyte.
- Maximum number of indexing jobs ‑ sets the number of concurrently running background jobs that update the full-text search index. The default behavior depends on the configuration compatibility mode:
- version 8.3.11 or earlier ‑ indexing is performed by a single background job.
- version 8.3.12 or later ‑ the number of background jobs for index updates is chosen automatically and cannot exceed 4.
This setting is relevant only for the client/server mode.
- Break down compound words ‑ if this mode is enabled, the full-text search also searches for the meaningful parts of compound words. By default, this works only for the Russian language; other languages require a custom full-text search dictionary. After changing this parameter, you need to rebuild the full-text search index. For example, when searching for the word "bread":
- if breakdown of compound words is enabled, the word "MosBread" will be found;
- if breakdown of compound words is disabled, the word "MosBread" will not be found.
The changes are applied after clicking Set. Values are only applied for the parameters whose new values differ from the current ones. The button is only available to users with the Administration right and only when the full-text search is enabled.
6.16.9. Configuration extensions management
This dialog box manages the configuration extensions in 1C:Enterprise mode.
Dialog box for configuration extensions management is available for users with the ConfigurationExtensionsManagement right. Users may need the Administration right to specify the security profile for the extensions activated.
Standard actions with the extensions can be performed here:
- Add a new configuration extension from file (Add). When adding the extension, the unsafe actions protection system displays a warning message.
After adding the extension, you can select the Protect from unsafe actions check box for it. If the check box is cleared, the unsafe actions protection system will not prompt the user while adding the extension.
- Delete an attached extension (Delete). Note that deleting a data extension is done in two steps: first, the extension is deactivated (the Active check box is cleared); then the extension can be removed from the infobase.
When deleting an inactive data extension, you will be prompted to confirm the deletion.
- Deactivate an extension without deleting it from the infobase. The Active check box is designed for this purpose: extensions with this check box cleared are not loaded when a session starts. From the system's point of view, a deactivated extension is equivalent to a deleted one, except that for a data extension the stored data structures are not deleted.
- Replace the existing extension version with a new version (Import).
- Save configuration extension to file (Save).
- Update the list of extensions.
- Restart the client application to apply the changes made to the extensions (Restart). A restart is executed without additional warnings.
Using the Manage main roles button, you can manage the assignment of extension main roles to users.
This dialog displays whether a user has been assigned all the main roles of a particular extension:
- Selected check box ‑ the user has all the main roles of the selected extension.
- Cleared check box ‑ the user has none of the main roles of the selected extension.
- Check box in the third state (gray) ‑ the user has some of the main roles of the extension.
The All extensions column is designed to display (and set) the main roles of all extensions for a specific user. The All users row is designed to display (and set) the main roles of a specific extension for all users.
Main roles can be assigned to users in the following ways:
- In the extension management dialog, using the Use main roles for all users check box. Selecting this check box assigns all the main roles of the selected extension to all users. The same action can be performed in the main roles management dialog by selecting the check box in the Use main roles for all users row in the column of the desired extension.
- By manually indicating that a specific user is assigned all the main roles of an extension. For example, in fig. 76, all the main roles from Extension1 and Extension2 are assigned to the Administrator user. At the same time, one of the main roles of Extension1 was assigned to the Seller user in Designer; therefore, for the Seller user, the check box in the Extension1 column is displayed in the third state (gray square).
- By specifying that all infobase users need the main roles of an extension: select the check box at the intersection of the All users row and the column of the desired extension. In this case, the main roles will be assigned to all current users of the infobase, but not to new users. For the extension main roles to be assigned to all users (including those that do not exist yet), select the Use main roles for all users check box in the column of the desired extension.
- By indicating that a specific user needs all the main roles of all extensions attached to the configuration: select the check box at the intersection of the row with the user name and the All extensions column (for example, the Administrator user in the figure above).
- By assigning the main roles of all extensions to all current users: use the check box located at the intersection of the All users row and the All extensions column.
- By assigning the main roles of all extensions to all users (including those that do not exist yet): use the check box located at the intersection of the Use main roles for all users row and the All extensions column, or manually select the Use main roles for all users check box for every extension in the extension list.
If security profiles are to be set up (on 1C:Enterprise server), use the Details group, which contains the checksum value (the Checksum field) required to fill the same-named property in the description of the available external module.
You can check whether an extension (or all extensions) is compatible with the current infobase. To do this, use the Check applicability and Check applicability for all commands in the More menu. You can also check applicability when adding and loading extensions; the same-named check box in the extension list form is used for this purpose.
A configuration extension can be added for the whole infobase as well as for a particular data area. The extension applicability scope is indicated in the lower part of the form (the Scope field, shown when adding the configuration extension). The value of this property is defined as follows:
- The configuration has no separators that can separate extensions. In this case, the extension can only be attached with the Infobase scope.
- In the current session, all separators that can separate extensions are conditionally disabled. In this case, the extension can only be attached with the Infobase scope.
- In the current session, separators that can separate configuration extensions are used. In this case, the extension can only be attached with the Data separation scope.
- In the current session, separators that can separate configuration extensions are not used. In this case, the extension can be attached with the Infobase or Data separation scope. The scope can later be changed, except when the extension extends data: for data extensions, the scope cannot be changed.
- Note that one infobase cannot contain the same extension attached with both the Infobase scope and the Data separation scope.
6.16.10. Collaboration system management
This standard function performs administrative actions related to the collaboration system:
- Register the current infobase in the 1C:Dialog service so that the collaboration system can be used.
- Cancel the application registration in the service.
- Merge collaboration system applications or cancel merging.
- Lock or unlock collaboration system users.
To register the infobase, the user needs the CollaborationSystemInfobaseRegistration right.
Computers running the collaboration system need access to wss://1cdialog.com over port 443.
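Before offering registration in the user interface, an applied solution can verify that the current user has the required right. A hedged sketch in the 1C:Enterprise language, assuming the standard AccessRight global function checked against the configuration root; the right name is the one given above:

```bsl
// Check whether the current user may register the infobase
// in the collaboration system before showing the registration UI.
If Not AccessRight("CollaborationSystemInfobaseRegistration", Metadata) Then
    Message("Insufficient rights to register the infobase in the collaboration system.");
    Return;
EndIf;
```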
If the application is not yet registered in 1C:Dialog, you are immediately prompted to register.
Enter the email address of the subscriber owner and the presentation of the application being registered. After you click Get code, the service sends an email message containing the registration code to the provided address; the message is sent from the email address firstname.lastname@example.org. Enter the code into the Registration code field and click Register.
A successful registration message will be displayed.
After registration is complete (or if the application is already registered in the collaboration system), a form opens that offers the following actions: setting up shared access to the application, managing user locking, and unregistering the application.
If you click the Application shared use hyperlink, a shared usage settings form opens.
This form lists all application pairs and the rules for matching users and discussion contexts.
To join applications, click Add.
This dialog box shows the list of all applications of the current subscriber owner. First, select two or more applications to be joined. Then specify the user matching method and whether discussion contexts should be matched. Clicking OK initiates the operation and updates the list of shared applications.
Clicking Cancel cancels the shared use of the two applications.
Clicking the Users hyperlink opens a form with a list of infobase users.
In the opened list, users are distinguished according to their status:
- The current user is highlighted in bold. In fig. 82, this is the Administrator user.
- Users who cannot be locked are shown in gray. Such users are present only in the infobase. In fig. 82, Anonymous is an example of such a user.
- Locked users are shown in strikethrough font. In fig. 82, Purchasing manager is an example of such a user.
- Users shown in normal font are present in the collaboration system and can be locked. In fig. 82, Former employee is an example of such a user.
The Infobase and Collaboration system columns show, respectively, whether the user is present in the infobase and in the collaboration system. The Locked column displays the user's lock status. The Lock and Unlock buttons perform the corresponding operations on the current user or on the selected users.
When you click the Cancel registration hyperlink, you are prompted to confirm this action.
After clicking Cancel registration, you can no longer use the collaboration system in the infobase. To re-enable the collaboration system, repeat the registration procedure. After re-registration, your access to the discussions and messages is restored.
This standard function is designed for working with database copies. Database copies are used by the data accelerator mechanism. A detailed description of the database copy mechanism and the Data accelerator is provided in the platform documentation.
The left side of the form contains a list of copies of the current database. The right side shows, for each copy, a list of objects whose tables will be copied to that copy.
When you create a new copy, the database copy editing form is displayed.
When creating a new copy, you should specify the following parameters:
- Database copy name. Helps maintenance personnel identify the created copy. This property is also needed to access the copy from the 1C:Enterprise language.
- Built-in Data accelerator. Check this check box if the created copy will be used by the Data accelerator.
- Replication type. Indicates who is responsible for data replication (including creating tables of the required structure in the copy).
- Database server, DBMS type, and Database allow you to specify the address and type of the DBMS where the created copy will be placed, as well as the name of the database containing the copy.
IMPORTANT! Do not specify the parameters of the working database as the parameters of the database used as a copy; this would render the working database unusable. In other words, the name of the database that acts as a copy must not match the name of the working database.
- The User and Password properties specify the user on whose behalf the connection to the copy DBMS is made. The specified user must have the rights to create and delete tables in the copy DBMS, as well as to perform all operations on the created tables (read, create, modify, delete).
- Create database. Check this check box to have the system create the database if no database with the specified name (the Database property) exists in the copy DBMS.
Once the database copy record is written, the following properties can no longer be modified: Name, Built-in Data accelerator, and Replication type. To change these parameters, delete the database copy record and create it again. The remaining parameters can be modified in the usual way.
Once the general parameters of a copy are specified, you can also specify the data stored in this copy. The stored data can be modified at any time: open the database copy for editing. Parameters are modified in a dialog box similar to the one used to create the copy.
When editing the content of a database copy, specify the objects to be stored in it by selecting the check boxes in the configuration object tree. To define which attributes of the selected objects are saved in the copy, use the check boxes in group 1 (see the figure above). The group 1 check boxes define the location of attributes in the copy in general.
Similarly, the location of attributes can be defined for each configuration object individually. For this purpose, use the dialog box that is displayed when you click button 2.
In this dialog box, the properties marked with number 1 are similar to those in property group 1 in fig. 86. Group 2 (see fig. 87) contains filter parameters for setting a filter by period for configuration objects that support it. If a configuration object does not support filtering by period, the group 2 fields are not available in its filter and settings dialog boxes (fig. 87).
If a configuration object has usage parameters that differ from the default values, icon 4 is displayed next to it in the metadata tree (see fig. 86). If a filter by period is specified for a configuration object, icon 3 is displayed next to it in the metadata tree (see fig. 86).
Click OK in the database copy editing form to save the modified parameters to the main database and delete the tables of disabled objects from the copy. To physically create the tables (and fill them with initial data), click Update Copy once editing of the database copy is finished.
The Data change history standard function allows you to view the data change history. In terms of viewing the history, it is similar to the form opened by the History of changes command in the object form for objects whose data history is enabled. In terms of filtering and available operations, the standard function provides more options.
Before viewing the data history, set the filter. The minimum filter that allows you to proceed is a filter by metadata type: the platform does not allow reading the data history without a filter by metadata type or by a specific data object. Once the filter is set, the data history is read. In addition to the filter, the size of the data portion read is defined by the Number of displayed history elements field. This field helps when too many records match the filter but you only need to see a limited number of them: the specified number of records, in sort order, is loaded into the list. By default, the standard data processor sorts the retrieved history as follows:
- When filtering by metadata type: by the Date field in descending order, then by the VersionNumber field in descending order.
- When filtering by metadata object: by the VersionNumber field in descending order.
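The requirement to filter by metadata type or by a specific data object also applies when reading the history from the 1C:Enterprise language. The sketch below assumes that the DataHistory object exposes a version selection method accepting a filter structure; the method name SelectVersions and the filter field names are assumptions for illustration, not verified signatures:

```bsl
// Sketch: reading history versions filtered by metadata type.
// The method name and filter field names are illustrative assumptions.
Filter = New Structure;
Filter.Insert("Metadata", Metadata.Catalogs.Products);

Selection = DataHistory.SelectVersions(Filter);
While Selection.Next() Do
    // Date and VersionNumber correspond to the default sort fields above.
    Message(String(Selection.Date) + ": version " + Selection.VersionNumber);
EndDo;
```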
Pay special attention to the Fields and Extended filter by fields tabs. The table on the Fields tab contains the current structure of the selected metadata type; it is filled in after a value is selected in the Metadata field. This table allows you to set a filter on the current structure of the configuration object. If you need to filter by attributes that have already been deleted, or by the old names of renamed attributes, use the Extended filter by fields tab. Its purpose generally matches that of the Fields tab, but data paths are specified manually and can be arbitrary.
After the filter has been set, the data history is retrieved and displayed. The following operations are available in the resulting list:
- Open version ‑ opens a report that displays information about the version the cursor is on.
- Compare with previous ‑ opens a report comparing the version the cursor is on with the version that has the previous number.
- Compare with current ‑ opens a report comparing the version the cursor is on with the current version of the object.
- Compare version ‑ allows you to compare two arbitrary versions of the object.
- Switch to version ‑ opens the object form filled with the data of the selected version. To complete switching to the selected version, save the changes.
You can change the version comment directly in the list: go to the Comment column and edit the value. The user editing the comment must have the necessary access rights.
Reports opened from the standard function take into account that the configuration properties Main data form of the data history version and Main form of differences in the data history versions may be filled in the configuration. The standard data processor calls the forms specified in the particular application.
Another feature of the data history viewing standard function is the ability to update the data history itself. To update the history on demand, use the Update history command; executing it from the menu updates the data history for all configuration objects. You can also have the data history updated each time the version list is refreshed: first the history is updated, then the list of versions in the form. This behavior is controlled by the Update history when updating the list check box. When it is selected, the history is updated according to the current filter as follows:
- If a specific data object is specified in the filter, the history is updated only for this object.
- If only the metadata type is specified in the filter, the history is updated for the entire metadata type (only when the application compatibility mode is Version 8.3.14 or later).
Service operations are performed with a standard data processor, using the Data History Management dialog box, which is available in the More menu of the main form of the standard data processor.
Let's take a closer look at the features this dialog box supports. At the top of the dialog box, you can see a description of the data that will be processed when any of the commands is run; it is displayed after Will Be Processed:. Below are three commands (with their parameters) that can be run from this dialog box:
- Refresh data history. Technically, this command calls the DataHistory.UpdateHistory() method. The method parameters are defined using the check boxes to the right of the button.
Essentially, this updates the data history (similar to the More ‑ Update History command). The difference is that here the user can manage the method parameters, which no menu command allows: the menu command assigns the parameters their default values.
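The same update can be triggered from the 1C:Enterprise language by calling the method named above. A minimal sketch, calling the method without arguments so that the default parameter values apply (as with the menu command):

```bsl
// Update the data history for all configuration objects.
// With no arguments, the default parameter values are used,
// just as when the menu command is executed.
DataHistory.UpdateHistory();
```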
- Process after writing versions. This command forces a call of the data processor that runs after a data history version is written. When you run this command, the postprocessing sequence is executed for the data defined by the current filter of the standard data processor.
- Delete from data processor after versions are written. This command restricts the size of the queue of objects awaiting the processing performed by the previous command. All versions created before the date specified as the command parameter are deleted from the queue.
If any of the above commands is unavailable due to the current filter, its command button is disabled.