System Maintenance


SQL Maintenance

A Proficy Database should have the following Maintenance Plans:

  • Weekly defrag of indexes

  • Weekly update of statistics

  • Transaction log backups at least 4 times a day (ideally every 15 minutes)

  • Daily full or differential database backup

  • Weekly database integrity check
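
For reference, here is a minimal sketch of the backup statements such plans issue (the database name GBDB and the backup paths are assumptions; adjust them to your environment):
-- Daily full backup (database name and paths are assumptions; adjust to your site)
BACKUP DATABASE [GBDB] TO DISK = N'D:\Backups\GBDB_full.bak' WITH INIT;
-- Transaction log backup, scheduled every 15 minutes (appends to the backup file)
BACKUP LOG [GBDB] TO DISK = N'D:\Backups\GBDB_log.trn' WITH NOINIT;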

All of those actions can be done using the SSIS Maintenance Plan designer.
They should be scheduled during periods of server inactivity. This is obviously not always possible.
If you cannot afford a significant server slowdown, use the scripts attached to the wiki to throttle the update of statistics and the index defrag. Those scripts let the server catch up between operations and can be customized at will. If you know that some tables are getting really big, you can add WAITFOR statements to further slow down the defrag/statistics update and let Proficy catch up with its pending tasks.
*It cannot be stressed enough that your maintenance plan needs to be tested. Execute the maintenance tasks manually during "normal hours" and monitor the server. This can be done on a TEST or DEVELOPMENT server to get a good idea of how it will behave on the production database. Use the Task Manager; watch for pending tasks, server locks, deadlocks, and so on. Use SQL Profiler to see how your Plant Apps server handles the extra load caused by the defrags or statistics updates. Tweak your script if the server cannot handle the load (add more WAITFOR statements, or wait for a longer time).
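
As an illustration of the throttling idea (this is a sketch, not the actual wiki scripts), the pattern reorganizes one table's indexes at a time with a pause in between; the table names below come from the key-table list later in this document:
-- Illustration only: reorganize one table's indexes at a time, pausing between
-- operations so the Proficy server can catch up on its pending tasks.
ALTER INDEX ALL ON dbo.Tests REORGANIZE;
WAITFOR DELAY '00:02:00';    -- give Proficy two minutes to catch up
ALTER INDEX ALL ON dbo.Events REORGANIZE;
WAITFOR DELAY '00:02:00';
UPDATE STATISTICS dbo.Tests;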
Create a Maintenance Plan
We are going to use the SSIS designer to create our maintenance plan. I will show you how to use generic scripts to maintain the database.
Please note that on a huge GBDB or a very busy server you will need to use custom scripts that issue WAITFOR statements to let the Proficy server catch up on pending tasks, as these maintenance plans can really slow down a server.
*This is a very basic example of how to set up a maintenance plan / sub plan. On a busy Proficy server, the best thing to do would be to generate multiple small scripts, with pauses in between, to accomplish those tasks and let the Proficy server catch up with the load.
Create your maintenance plan:



Add the update statistics sub plan:

Use the SSIS component toolbox to add an Update Statistics Task.


Right-click on the Update Statistics Task you just created and select Edit. Fill in the required configuration:


Click OK.
The next thing you must do is configure the Maintenance Plan schedule:


Configure the schedule:

Your Maintenance Plan (Update Statistics) is all set. Save it.

Plant Applications Database Purge

Purging the Plant Applications database should only be done by someone who knows the configuration and business rules of the system very well. Careful investigation and consideration are required before deciding what to purge and when to purge it. There are many different versions of the purge utilities. The following links lead to the GE knowledge base articles that describe the purge utility and include downloads of the various versions. Depending on the size of the database, the initial purge could take several hours and should be done offline.

It is recommended that you perform backups prior to purging and develop a test plan to validate the system after the purge to identify any issues that it may have introduced.

Plant Applications Database Purge 2.0 - Version 215.8, 4.2.3, 4.3.1, 4.4, 4.4.1 | GE Customer Center

Plant Applications Database Purge - Compatible with Plant Apps 5.0, 6.1 & 6.2 | GE Customer Center

SQL Maintenance Recommendations for Plant Applications Database

  1. SQL Maintenance Recommendations for the Plant Applications database, based on the Ola Hallengren solution.

  2. The attached document is an older solution which may be used when #1 can't be deployed due to, e.g., a local customer policy. Please reference the attached document for more information on SQL table fragmentation, SQL statistics, and SQL execution plans.

Solution

Using Ola Hallengren Solution for Index Maintenance

Ola Hallengren's scripts are simple to configure and can be used to optimize SQL Server indexes:

  • Index Maintenance: The script performs index defragmentation, index rebuilds, and statistics updates on all indexes in a database.

  • Index Optimization: The script analyzes index fragmentation and recommends which indexes to rebuild or reorganize.

  • Index Verify: The script performs index verification and reports any corrupted indexes.

By using Ola Hallengren's scripts, you can automate and simplify index maintenance tasks, improve database performance, and ensure the integrity of your indexes.

Below is the configuration of the Maintenance Solution:

  1. CommandLog.sql: Table to log commands

URL to Download: https://ola.hallengren.com/scripts/CommandLog.sql

  2. CommandExecute.sql: Stored procedure to execute and log commands

URL to Download: https://ola.hallengren.com/scripts/CommandExecute.sql

  3. IndexOptimize.sql: Stored procedure to rebuild and reorganize indexes and update statistics

URL to Download: https://ola.hallengren.com/scripts/IndexOptimize.sql

 

Usage for the IndexOptimize procedure: refer to https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html

One of the best approaches to index and statistics maintenance is the following:

  • Update only modified statistics on a daily basis and log to the CommandLog table:

EXECUTE master.dbo.IndexOptimize
    @Databases = 'SOADB',   -- a list of databases can be passed as a CSV string
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y',
    @LogToTable = 'Y';

  • Rebuild and reorganize indexes on a weekly basis, based on the fragmentation percentage:

EXECUTE master.dbo.IndexOptimize
    @Databases = 'SOADB',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y',
    @LogToTable = 'Y';

 

We can use the master.dbo.CommandLog table to analyze how frequently indexes and statistics are updated.
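
For example, a query along these lines (a sketch; the column names follow the standard CommandLog definition) shows the most recent maintenance commands and their durations:
SELECT TOP (50)
    DatabaseName,
    ObjectName,
    IndexName,
    CommandType,                                     -- e.g. ALTER_INDEX or UPDATE_STATISTICS
    StartTime,
    DATEDIFF(SECOND, StartTime, EndTime) AS duration_seconds,
    ErrorNumber
FROM master.dbo.CommandLog
ORDER BY StartTime DESC;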

 

SQL minimum configuration

Minimum requirements: TCP/IP and Named Pipes enabled, port 1433.
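
To verify from SSMS that the instance is actually listening on TCP, a quick check (requires the VIEW SERVER STATE permission):
-- Shows the address/port of the current connection; local_tcp_port is NULL
-- when the session is connected over Named Pipes or shared memory.
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;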

SQL Performance Monitoring

Identify key DB tables

Proficy Plant Applications is made up of hundreds of tables, if not more than a thousand. Below are some of the key tables that grow large and are used very often. If their indexes are highly fragmented, Plant Applications will experience poor performance.

Primary Key Tables

Tests

Timed_Event_Details

Event_Status_Transitions

Event_Components

Events

Event_Details

Secondary Key Tables

Test_History

Timed_Event_Detail_History

Event_History

 

DBCC

The primary purpose of DBCC CHECKDB is to check for consistency errors, and it should ideally be run every day.
The basic syntax is:
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS
DBCC CHECKDB('DatabaseName') WITH NO_INFOMSGS, ALL_ERRORMSGS
NO_INFOMSGS prevents an excessive number of informational messages from being generated. There are several other options, but this is the syntax you should aim to use as it performs all integrity checks.
This may take a long time on large databases and you may want to specify the PHYSICAL_ONLY option. This checks physical on-disk structures, but omits the internal logical checks. The syntax is:
DBCC CHECKDB ('YourDatabase') WITH PHYSICAL_ONLY

Determine level of Index Fragmentation

A table is fragmented when it has empty holes in the pages and extents that it occupies.
Fragmentation occurs as a result of page splits and the deletion of rows, for example when a large number of rows are deleted. Because primary keys are not supposed to be reused, new data is added at the physical end of the table, leaving a hole that slows down data access. To resolve the fragmentation, you can rebuild the clustered index.
Rebuild: An index rebuild creates a fresh, sparkling new structure for the index. If the index is disabled, rebuilding brings it back to life. You can apply a new fill factor when you rebuild an index. If you cancel a rebuild operation midway, it must roll back (and if it is being done offline, that can take a while).
Reorganize: This option is more lightweight. It runs through the leaf level of the index, and as it goes it fixes the physical ordering of pages and also compacts pages to apply any previously set fill factor. This operation is always online, and if you cancel it, it can simply stop where it is (there is no giant operation to roll back).
In version 2 of this document we will provide code and discuss the following (a first sketch follows the list below):

  • Index fragmentation (specs -> warnings)

  • Fragmented count

  • Page Counts

  • Table Record Counts

  • Index level and depth
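
As a starting point, a sketch using the sys.dm_db_index_physical_stats DMV covers most of these items (the database name GBDB is an assumption; table record counts require the more expensive SAMPLED or DETAILED modes, so they are omitted here):
USE GBDB;  -- assumption: your Plant Applications database name
GO
-- Fragmentation overview: fragment counts, page counts, index level and depth.
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                            AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.fragment_count,
    ips.page_count,
    ips.index_level,
    ips.index_depth
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND ips.page_count > 100            -- ignore very small indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;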

Re-Organize and/or Rebuild Indexes

In order to reduce fragmentation, we have to reorganize or rebuild the indexes. Choosing between reorganizing and rebuilding depends on the query results. For heavily fragmented indexes a rebuild is needed; otherwise index reorganization should be sufficient. The following table summarizes when to use each one:

Reference Values (in %)                   | Action           | SQL statement
------------------------------------------|------------------|-----------------------
avg_fragmentation_in_percent > 5 AND < 30 | Reorganize Index | ALTER INDEX REORGANIZE
avg_fragmentation_in_percent > 30         | Rebuild Index    | ALTER INDEX REBUILD
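
In practice the statements look like the following (PK_Tests is a placeholder index name; dbo.Tests is one of the key tables listed earlier, and ONLINE rebuilds require an edition that supports them):
-- 5-30% fragmentation: reorganize (always online; safe to cancel)
ALTER INDEX PK_Tests ON dbo.Tests REORGANIZE;
-- Over 30% fragmentation: rebuild (ONLINE = ON requires a supporting edition)
ALTER INDEX PK_Tests ON dbo.Tests REBUILD WITH (ONLINE = ON);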

 

SQL logs

SQL Server log files are found in the Management section of Object Explorer in SQL Server Management Studio.


You need to be a member of the securityadmin server role in order to access these logs.

Table growth

More will be provided on table growth in version two. Basically, key tables should be monitored for growth so that database/disk sizing can be predicted. It can also inform decisions on purging.
A plan to compare key table growth from week to week or month to month can provide good information; a sketch of a snapshot query follows.
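
A minimal sketch for snapshotting row counts of the key tables identified earlier (storing and comparing the snapshots over time is left to the reader):
-- Row counts for the key Plant Applications tables; run weekly and compare.
SELECT
    t.name      AS table_name,
    SUM(p.rows) AS row_count,
    GETDATE()   AS sampled_at
FROM sys.tables AS t
JOIN sys.partitions AS p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)     -- heap or clustered index
WHERE t.name IN ('Tests', 'Timed_Event_Details', 'Events', 'Event_Details',
                 'Event_Components', 'Event_Status_Transitions',
                 'Test_History', 'Timed_Event_Detail_History', 'Event_History')
GROUP BY t.name
ORDER BY row_count DESC;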

 

SQL Profiler

SQL Server Profiler is a graphical tool that allows you to monitor and collect server events. You can use it to:

  • Debug Transact-SQL statements and stored procedures.

  • Analyze performance by identifying slowly executing queries (duration).

  • Trace the execution of statements within stored procedures.

  • Detect deadlocks.

This information can be displayed in SQL Server Profiler, stored in trace files, or saved in a Microsoft® SQL Server™ table. SQL Server Profiler can read trace files created in SQL Server Profiler or SQL Trace, as well as Transact-SQL scripts.
Examples of event data captured by SQL Server Profiler include:

  • The types (classes) of events being traced (e.g. SQL:BatchCompleted).

  • The SQL Server login names of the users performing activity in SQL Server.

  • The text of Transact-SQL statements and stored procedures being executed.

  • The duration of Transact-SQL and other types of events.

  • The severity of errors.

Event data can be filtered so that only a subset of the event data is collected. For instance, a filter can be defined to trace only those SQL:BatchCompleted events with a duration greater than one second.
Additionally, SQL Server Profiler allows captured event data to be replayed against SQL Server, thereby effectively re-executing the saved events as they occurred originally.
SQL Server Profiler can also be used to:

  • Monitor the performance of SQL Server.

  • Debug Transact-SQL statements and stored procedures.

  • Identify slow-executing queries.

  • Capture error information in the context of the queries being executed.

  • Identify scans of large tables.

Using SQL Profiler with Proficy

  • SQL Profiler is useful for troubleshooting the spLocal_* stored procedures created specifically for sites.

  • It allows you to see the values of the input parameters.

  • It allows the capture of run statistics.

  • Open an existing trace or create a new one.

  • Save to a file.

  • Add filters to make traces more efficient and easier to read.

  • The most useful filters are:

    • LoginName = comxclient

    • TextData LIKE %splocal%

    • ApplicationName LIKE Proficy%

 

spSupport_Blocking

Plant Applications is installed with a stored procedure that determines whether blocking is occurring in the database. Elevated permissions may be needed to execute it. Below is an example of running it and observing that there are no blocks.
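
A minimal invocation might look like this (the database name, schema, and absence of parameters are assumptions; check the procedure's definition in your database):
USE GBDB;   -- assumption: your Plant Applications database name
EXEC dbo.spSupport_Blocking;   -- assumed to take no parameters; verify in your database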

Real Time Information Portal Maintenance

"Portal" has reached its end of life with GE. If you are still operating Proficy Portal then you should back up your configuration files regularly.
Example of a backup batch command for Portal files:
@echo off
rem
rem Copy Portal config and screens to remote storage.
rem
cd /d "C:\Program Files\GE Fanuc\Proficy Portal\webapps\"
xcopy /E /q /y *.* \\remotestorageshare\MESbackups\MESPortal
Executed daily via the scheduled job Backup_Portal.job.

Historian Server Maintenance

The Historian server is the central point for managing all of the client and collector interfaces, storing and (optionally) compressing data and retrieving data. All tag data (numbers, strings, BLOBs) are stored in a proprietary format in Data Archives. Each Data Archive represents a specific time period of historical data.

Routine Maintenance

On a regular schedule, examine and analyze the system performance indicators displayed on the System Statistics screen of the Historian Administrator, as follows:

Field – What to Monitor

Est. Days to Full

If time is growing short, make sure that the server has sufficient unused storage capacity to open a new archive when the active one fills up. Verify that the Create New Archives Automatically function is enabled. If it is disabled, you must manually create a new archive before the active archive fills up. If you do not have enough unused storage capacity, you may have to enable the Overwrite Old Archives feature. Since this feature overwrites existing data, exercise caution in using it.

Consumption Rate of Archive Storage

If the rate is excessively high, you must either reduce the rate at which data flows into the system or increase the filtering applied to the data to lower the rate of archiving. To reduce the collection rate, slow the polling rate on some or all tags. To increase filtering, enable compression at the collector and/or archiver and widen the compression deadbands.

Failed Writes

If the display shows a significant number of failed writes, investigate the cause and take corrective action to eliminate the malfunctions. Refer to the DataArchiver-XX.log file or query the message database to determine the tags for which failed writes occurred. For example, trying to write values to a deleted archive causes failed writes. Trying to archive data with a timestamp that precedes the start time of the first archive, trying to write to a read-only archive, or trying to write a value with a timestamp more than 15 minutes ahead of the current time on an archiver will also produce failed writes.

System Alerts

Examine the messages and alerts and take appropriate action to correct any problems.

Collector Performance Indicators (see Collector Maintenance, Performance Tab)

Avg. Event Rate Chart – is the rate at a normal level?

Avg. Event Rate Chart, Compression Chart, Overruns Chart

Avg. Event Rate Chart – does the chart exhibit an acceptable trend line? If not, determine why. Balance polling schedules, adjust scan frequencies (collection intervals), and modify compression deadbands to lighten the load.
Compression Chart – is compression effectiveness acceptable? If not, verify that compression is enabled, then widen the deadbands to increase the effect of compression.
Overruns Chart – if the value is anything other than zero, determine the severity and cause of the problem and take corrective action.



AutomaTech Inc.