Log Files


Plant Application Log Files

Plant Applications log and buffer files can be installed in various locations and sometimes are difficult to find.
Methods to locate:

Observation: A default install will place the log files at this path:

C:\Program Files (x86)\Proficy\Proficy Server\Log Files

Plant Applications Admin: If Plant Applications Admin is functioning, the log file path can be found under Global Configuration -> Administer Site Parameters -> LogFilePath (where the hostname is blank)

SQL: The log file path can also be retrieved with this T-SQL query:

SELECT Parm_Name, Parm_Long_Desc, Value
FROM Site_Parameters sp
JOIN Parameters p ON p.Parm_Id = sp.Parm_Id
WHERE sp.Parm_Id IN (101,102) AND sp.Hostname = ''

Proficy Server Log Files

Each of the Proficy services has a log file. The log files record reload times, execution messages, errors and, when debug mode is turned on, debugging information. Log files should be checked periodically, and any error messages should be resolved. Each time a service is restarted, a new log file version (-0#) is created.

How To Turn On Debug Mode for PA Services

  • Expand the Server Management folder in the Proficy Administrator tree and then right-click on the Base Server folder.

  • Select "Control Services" from the pop-up menu.

  • Right-click on the desired service, then select "Debug" and then choose the debug level.


Debug mode should be used sparingly because the log files can grow to an immense size very quickly; the High level generates the most detailed information and the largest files. Do not leave debug mode on for an extended period, or you may unexpectedly run out of disk space.

Proficy Server Show Files

Each of the Proficy services has a show file (".shw"). The show files are located in the same directory as the log files and contain the configuration information acquired by the Proficy service when it was last reloaded. The running configuration can be verified here: if changes have been made in the Administrator, the .shw file can be checked to see whether they have been picked up by the Proficy service.

Buffer Files

Each of the Proficy services has a buffer file. The location of the buffer files is determined by the Site Parameter BufferFilePath. The buffer files store queued actions for the Proficy service to execute. If missing, the files and directories located in the BufferFilePath are automatically created when the services are restarted.
Deleting the buffer files can be a helpful troubleshooting option because it effectively resets the server. The Proficy services must be shut down before the buffer files can be deleted. You can delete the entire folder; there is no need to delete individual files within it. Because removing the buffer files removes any queued actions, no historical processing occurs when the services are restarted and Proficy will only process new events. Note that, for the same reason, this can also result in some lost data, as any recently queued actions may be lost. In most cases the data will be recaptured when the events are re-triggered.
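The current buffer file location can be retrieved with a query similar to the log file path query above; this is a sketch assuming the parameter is named BufferFilePath in the Parameters table:

SELECT p.Parm_Name, sp.Value
FROM Site_Parameters sp
JOIN Parameters p ON p.Parm_Id = sp.Parm_Id
WHERE p.Parm_Name = 'BufferFilePath' AND sp.Hostname = ''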


Troubleshoot Services (in PA Admin)

Current Activity

The Current Activity display gives a snapshot of the processes currently running in SQL Server and indicates the specific SQL statement being executed.
This can be a useful tool for determining potential causes of high SQL Server utilization. If a particular SQL statement continues to appear after many refreshes, it may be the cause of the high utilization. High utilization is undesirable because it prevents other Proficy SQL processes from executing. Most often the offending statement is a custom calculation, event model, or report query. The Application Name tells you what kind of query it is (e.g., CalcMgr = calculation, EventMgr = event model) while the Job Description shows you the name of the stored procedure or the contents of the query.

Remote Clients

The Host Name is useful in instances where the statement is being run from a remote client rather than from the server itself.

Kill

If a query is hogging the SQL Server, an extreme measure is to execute the KILL command from Query Analyzer. The KILL command requires the user to have sysadmin authority and is usually performed by a DBA. Using the SPID from the Current Activity tab, execute 'KILL <SPID>' in Query Analyzer to end the process. This is typically used to kill a SQL process executing from a remote client connection, since any server-based processes can be aborted by shutting down the relevant service.
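For example, if the Current Activity tab showed the offending statement running under SPID 53 (a hypothetical value), the process would be ended with:

KILL 53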

Alternate Data Source

The information contained in the Current Activity display is also available in Query Analyzer by running the SQL stored procedure spSupport_ShowRunning. This is handy to know as Query Analyzer has the option of returning the results directly to a file, instead of to the display, so in situations where you want to capture a record of the activity on the server you can easily generate a report without having to copy and paste.
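For example, run the procedure directly in Query Analyzer and use its results-to-file option to capture the output:

EXEC spSupport_ShowRunning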

Pending tasks

Proficy's execution is task-based: when an event occurs in the system, the subsequent actions to be performed are defined as "tasks" and queued up in the PendingTasks table. This tab allows you to dynamically view the contents of that table and monitor Proficy's progress.
The data in the Pending Tasks tab has several things to offer. Foremost among them are the number of tasks queued up and the rate at which they are cleared out, which together indicate Proficy's performance. To judge this properly, the administrator must have an appreciation for the normal level of activity on the server (i.e., are there normally 30 tasks waiting to be processed, or 300?). If there are many more tasks queued than normal, something is preventing Proficy from processing efficiently. If the number of tasks continues to increase and no tasks are being processed, then something may be totally "blocking" Proficy from executing.
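The queue depth can also be checked with a simple query against the PendingTasks table; this is a minimal sketch:

SELECT COUNT(*) AS QueuedTasks FROM PendingTasks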

A Blocking example:

Background

  • The Reader service is responsible for fetching and summarizing Time-Based historian data.

  • The SummaryMgr service is responsible for fetching and summarizing Event-Based historian data.

  • The EventMgr service is responsible for fetching historian data for the triggering of, and inputs to, Event Models.

Symptoms

When any of these Plant Applications services that read data from historian tags fall behind in processing their tasks, it is often difficult to identify the cause. The service may keep running without error, but delays in the appearance of Results or Events may be seen. The delay may grow longer and longer, or the service may eventually catch up to real time.

Pending tasks may be seen to accumulate more than normal. Pending tasks typically number a few dozen (on large configurations) or fewer (on smaller configurations); when hundreds of pending tasks are observed, there is likely a problem.
We can get an idea of where the problem is by examining the PendingTasks table with this SQL query:

SELECT pt.TaskId, t.TaskDesc, t.Owner, COUNT(*) AS Tasks
FROM PendingTasks pt
JOIN Tasks t ON pt.TaskId = t.TaskId
GROUP BY pt.TaskId, t.TaskDesc, t.Owner
ORDER BY COUNT(*) DESC

If the Owner of excessive PendingTasks is Reader, SummaryMgr, or EventMgr, the time they take to read their historian data should be evaluated. Put the service(s) that are falling behind into High Debug for long enough to read all of their associated tags; in these cases this will typically be several minutes. The service will then record in its debug log file the times it starts and finishes reading each historian tag's data. However, determining from the log file which of these tags may be taking excessive time to read is tedious at best, and at worst a daunting task.


Tags taking excessive time to read should have their historian archive data examined. There are a few common data issues resulting in excessive read time.

Issue #1: Bad historian data quality

Many archive values with Bad historian data quality result in a delay because the service must read the data as far back as it needs to in order to find a Good quality value, regardless of the Sampling Window or Event Timespan, and it must do so every time it performs a read. Historian data quality for both Input Tags and Data Quality Tags (if any) should be scrutinized.

Note: Data Quality tags are not listed by tag name in the debug Reader or SummaryMgr log files; only Input Tag names are listed. If an Input Tag has excessive Elapsed seconds and its historian data quality is found to be Good, check whether the Variable has a Data Quality tag and examine its historian data as well.

Cause: Bad historian data quality is usually sent to the historian by a PLC or other data source for a piece of equipment that has been shut down. If the shutdown continues, the amount of Bad quality data continues to grow, making each subsequent historian read of that tag take longer and longer.

Remedy: When equipment is shut down, shut down the PLC or configure it not to send Bad quality data to the historian.

Issue #2: Lack of Archive Compression

Excessive archiving of data in the historian results in a delay due to the sheer amount of work to read all of the values in the Sampling Window or Event Timespan.

Cause: When a historian tag's configuration has a short Polling interval or lacks Archive Compression, it places excessive values in the archive. The values are often unchanging, or the changes are insignificant.

Remedy: Archive Compression should be configured on the historian tag so that only values that have changed significantly are archived. An Archive Compression Timeout can be configured to ensure that a value is archived within the Variable's Sampling Interval or typical Event Timespan. Appropriate Archive Compression should be applied to all Variable Input Tags and Data Quality Tags, and especially to those tags used as Event Model Inputs and Triggers.

Issue #3: Bad values due to Data Quality Comparison

For Variables using Data Quality tags, the historian archive values of the Data Quality tag are compared to the Data Quality Comparison Value using the Data Quality Comparison Type. Reader or SummaryMgr must read Data Quality Tag values as far back in time as needed to find a value whose comparison indicates Good Quality, regardless of the Sampling Interval or Event Timespan.

Cause: If there is an extended period with excessive archive values of the Data Quality tag that mark all of the Input Tag values as Bad Quality (as determined by the Data Quality Comparison Type and Value), the service must also read all of the Input Tag values back to the time of the last Good Quality value.

Remedy: Both Data Quality tags and Input tags should have historian archive compression applied so that only changed values are archived.

Can I delete orphaned pending tasks?

Description
Sometimes tasks from weeks ago are still visible in Pending Tasks. Typically this is caused by stopping and restarting the service. When the PA service picks a task up, it sets WorkStarted = 1; once the record has been processed, the service goes back and deletes it from the table. If the service was stopped before it could delete the record, you will see an orphaned task. These tasks will NEVER be reprocessed unless the record is updated to set WorkStarted = 0 again; on restart, such tasks are ignored by the services.
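If a specific kind of orphaned task should be reprocessed rather than deleted, the flag can be reset with an update. This is a hypothetical sketch that resets all orphaned rows for a given TaskId (42 is a placeholder value):

UPDATE PendingTasks SET WorkStarted = 0 WHERE TaskId = 42 AND WorkStarted = 1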

Resolution
To delete the orphaned tasks, run the query below in SQL Query Analyzer, or set up an SQL batch job to periodically clean up the table using the same code:

DELETE FROM PendingTasks WHERE WorkStarted = 1 AND Timestamp < GETDATE()-60
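To preview which rows the cleanup would remove before deleting them, the same criteria can be used in a SELECT:

SELECT * FROM PendingTasks WHERE WorkStarted = 1 AND Timestamp < GETDATE()-60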

Audit Trail

The Audit Trail captures all configuration changes made by the Administrator or installed SIMs (hotfixes). One of the first courses of action in any troubleshooting situation is to determine what has changed recently, as recent changes may be the cause of the issues being experienced. The Audit Trail provides that facility, along with who made those changes if they were logged on with their own account. Note, however, that it captures mostly actions performed using the Administrator; it does not capture new or updated stored procedures introduced by those working directly in the database.
The Audit Trail gives the following information:

  • Which configuration stored procedure was executed and the parameters used

  • Which user made the change

  • When the change was made

Diagnostics: Blocking

The Blocking diagnostics option is intended to show blocked SQL Server processes. This data is also available in the Current Activity display. Blocked processes are processes that have stalled while waiting to acquire a lock on a table; they stall because another process has already acquired the same lock and has itself stalled. This is a complex situation that requires some in-depth analysis of SQL code to resolve. The same information is available in SQL Query Analyzer by executing the spSupport_ShowBlocking stored procedure. Running this stored procedure usually requires the user executing it to have higher privileges (sysadmin).
By default, all processes are shown in this display. Blocked processes are identified in the Blocked column by SPID and application name (e.g., "53 / Proficy CalculationMgr").
When blocking occurs, two processes are blocked simultaneously. An easy way to resolve the issue is to stop one of the processes. For example, if the DatabaseMgr and the CalculationMgr are both blocked, shutting down and restarting one of the two services will resolve the blocking.
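The stored procedure mentioned above can be run directly in Query Analyzer (sysadmin privileges are usually required):

EXEC spSupport_ShowBlocking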

Proficy Client Message Log

The Proficy Client has a message log which may report errors associated with an issue. These messages are used mostly by the MSI support team but can also be used to further diagnose issues with the Client. The message log is cleared every time the Client is closed, so it is important to save any relevant messages.

  1. From the Proficy Client menu bar, select "Help" and then "View Message Log" from the drop-down menu.

  2. Using the "Copy to Clipboard" button, copy and save any error messages.

Proficy SDK Error Log

The ProficySDKErrors.log file is installed with every Proficy Client installation and is normally located in the C:\Program Files\Common Files\Proficy folder on that server or workstation. This log contains connection information and some errors that are not reported to the Proficy Client application. It applies to any client that uses the Proficy SDK, which means that along with errors associated with the Proficy Client itself, it will also report errors associated with any custom applications using the Proficy SDK or EAS.

Event Viewer (Proficy)

Proficy has Event Viewer log files under Applications and Services Logs which often contain useful troubleshooting information.

Advantage License Report

GE Advantage licenses can become disconnected from the application that they license. When this occurs, a countdown of 21 days starts, at the end of which the license expires and the GE application stops working. This happens from time to time. In earlier versions of the Advantage License (before version 17) there is an Advantage License batch file that can be downloaded from GE and installed on the license servers; it will send daily reports indicating the expiry status of your licenses. In version 17 this is built into the install, but it still needs to be configured.

 

AutomaTech Inc.