SQL Maintenance
A Proficy Database should have the following Maintenance Plans:
Weekly defrag of indexes
Weekly update of Statistics
Transaction log backups at least 4 times a day (more realistically every 15 minutes)
Daily full/differential database backup
Weekly database integrity check
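The backup tasks above can also be scripted directly instead of (or alongside) SSIS tasks. A minimal T-SQL sketch — the database name and backup paths are placeholders to adapt for your site:

```sql
-- Transaction log backup: schedule every 15 minutes via a SQL Server Agent job
BACKUP LOG [GBDB]
TO DISK = N'D:\Backups\GBDB_log.trn'
WITH INIT;

-- Daily full database backup
BACKUP DATABASE [GBDB]
TO DISK = N'D:\Backups\GBDB_full.bak'
WITH INIT, CHECKSUM;
```

In practice you would generate timestamped file names rather than use INIT to overwrite the previous backup.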
All of these actions can be configured using the SSIS Maintenance Plan designer.
They should be scheduled during periods of server inactivity, though this is obviously not always possible.
If you cannot afford a significant server slowdown, use the scripts attached to this wiki to throttle the statistics update and index defrag. Those scripts pause between operations so the server can catch up, and they can be customized at will. If you know that some tables are getting very large, you can add WAITFOR statements to further slow down the defrag/statistics update and let Proficy catch up with its pending tasks.
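As a sketch of the throttling idea (the table names come from the key-table list later in this document; the actual wiki scripts may differ):

```sql
-- Update statistics one table at a time, pausing so Proficy can catch up
UPDATE STATISTICS dbo.Tests WITH FULLSCAN;
WAITFOR DELAY '00:00:30';   -- 30-second pause between tables

UPDATE STATISTICS dbo.Events WITH FULLSCAN;
WAITFOR DELAY '00:00:30';

-- Defragment, with a longer pause after the heavier operation
ALTER INDEX ALL ON dbo.Tests REORGANIZE;
WAITFOR DELAY '00:01:00';
```

Lengthen the WAITFOR delays if the server still struggles under the extra load.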
*It cannot be stressed enough that your maintenance plan needs to be tested. Execute the maintenance tasks manually during "normal hours" and monitor the server; this can be done on a TEST or DEVELOPMENT server to get a good idea of how the plan will behave against the production database. Use Task Manager, watch for pending tasks, and watch for server locks, deadlocks, and so on. Use SQL Profiler to see how your Plant Applications server handles the extra load caused by the defrags or statistics updates. Tweak your scripts if the server cannot handle the load (add more WAITFOR statements, or wait for a longer time).
Create a Maintenance plan
We are going to use the SSIS designer to create our maintenance plan. I will show you how to use Generic scripts to maintain the database.
Please note that on a huge GBDB or a very busy server, you will need to use custom scripts that issue WAITFOR statements to let the Proficy server catch up on pending tasks, as these maintenance plans can really slow down a server.
*This is a very basic example of how to set up a maintenance plan / sub-plan. On a busy Proficy server, the best approach is to generate multiple small scripts with pauses in between, so the Proficy server can catch up with the load.
Create your maintenance plan:
Add the update statistics sub plan:
Use the SSIS component toolbox and add an Update Statistics Action item.
Right-click the Update Statistics Task you just created and choose Edit. Fill in the required configuration:
Click OK.
Next, configure the Maintenance Plan schedule:
Your Maintenance Plan (Update Statistics) is all set. Save it.
SQL minimum configuration
Minimum requirements (TCP/IP and PIPEs enabled, Port 1433)
SQL Performance Monitoring
Identify key DB tables
Proficy Plant Applications is made up of hundreds, if not more than a thousand, tables. Below are some of the key tables that grow large and are used very often. If their indexes are highly fragmented, Plant Applications will experience poor performance.
Primary Key Tables:
Tests
Timed_Event_Details
Event_Status_Transitions
Event_Components
Events
Event_Details
Secondary Key Tables:
Test_History
Timed_Event_Detail_History
Event_History
DBCC
The primary purpose of DBCC CHECKDB is to check the database for consistency errors, and it should ideally be run every day.
The basic syntax is:
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS
NO_INFOMSGS prevents an excessive number of informational messages from being generated, and ALL_ERRORMSGS reports every error found for each object. There are several other options, but this is the syntax you should aim to use, as it performs all integrity checks.
This may take a long time on large databases, in which case you may want to specify the PHYSICAL_ONLY option. This checks the physical on-disk structures but omits the internal logical checks. The syntax is:
DBCC CHECKDB ('YourDatabase') WITH PHYSICAL_ONLY
Determine level of Index Fragmentation
A table is fragmented when it contains empty holes in the pages and extents it occupies.
Fragmentation occurs as a result of page splits and row deletions; this can happen when a large number of rows are deleted. Because primary key values are not supposed to be reused, new data is added at the physical end of the table, leaving holes that slow down data access. To resolve the fragmentation, you can rebuild the clustered index.
Rebuild: An index 'rebuild' creates a fresh, sparkling new structure for the index. If the index is disabled, rebuilding brings it back to life. You can apply a new fillfactor when you rebuild an index. If you cancel a rebuild operation midway, it must roll back (and if it's being done offline, that can take a while).
Reorganize: This option is more lightweight. It runs through the leaf level of the index, and as it goes it fixes physical ordering of pages and also compacts pages to apply any previously set fillfactor settings. This operation is always online, and if you cancel it then it's able to just stop where it is (it doesn't have a giant operation to rollback).
In version 2 of this document we will provide code and discuss the following:
Index fragmentation (specs -> warnings)
Fragmented count
Page Counts
Table Record Counts
Index level and depth
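Pending the version-2 scripts, fragmentation and page counts can already be inspected with the standard dynamic management function shown below. This is a generic sketch, not the promised code; the 100-page threshold is an arbitrary cutoff:

```sql
-- Fragmentation overview for the current database (LIMITED mode is the cheapest scan)
SELECT  OBJECT_NAME(ips.object_id)       AS table_name,
        i.name                           AS index_name,
        ips.page_count,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
    ON  i.object_id = ips.object_id
    AND i.index_id  = ips.index_id
WHERE   ips.page_count > 100             -- small indexes are not worth defragmenting
ORDER BY ips.avg_fragmentation_in_percent DESC;
```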
Re-Organize and/or Rebuild Indexes
In order to reduce fragmentation, we have to reorganize or rebuild the indexes. Choosing between the two depends on the query results: heavily fragmented indexes need a rebuild, otherwise a reorganization should be sufficient. The following table summarizes when to use each one:
| Reference Values (in %) | Action | SQL statement |
| --- | --- | --- |
| avg_fragmentation_in_percent > 5 AND < 30 | Reorganize Index | ALTER INDEX REORGANIZE |
| avg_fragmentation_in_percent > 30 | Rebuild Index | ALTER INDEX REBUILD |
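The two statements take an index (or ALL) and a table name. A sketch using one of the key tables listed earlier; the FILLFACTOR value is illustrative:

```sql
-- 5-30 % fragmentation: reorganize (always online, can be cancelled safely)
ALTER INDEX ALL ON dbo.Tests REORGANIZE;

-- Above 30 % fragmentation: rebuild (offline unless the edition supports ONLINE = ON)
ALTER INDEX ALL ON dbo.Tests REBUILD WITH (FILLFACTOR = 90);
```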
SQL logs
SQL Server log files are found under the Management section of Object Explorer in SQL Server Management Studio.
You need to be a member of the securityadmin role in order to access these logs.
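The log can also be read from a query window with the widely used (though undocumented) sp_readerrorlog procedure:

```sql
-- First argument: log number (0 = current); second: 1 = SQL Server log, 2 = Agent log
EXEC sp_readerrorlog 0, 1;

-- An optional third argument filters entries on a search string
EXEC sp_readerrorlog 0, 1, N'Backup';
```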
Table growth
More will be provided on table growth in version two. Basically, key tables should be monitored for growth so that database/disk sizing can be predicted. It can also inform decisions on purging.
A plan to compare key table growth from week to week or month to month might provide good information.
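One way to implement such a comparison is a weekly snapshot of key-table row counts into a history table. The table name and the selection of monitored tables below are assumptions to adapt per site:

```sql
-- One-time setup: history table for the snapshots
CREATE TABLE dbo.Table_Growth_History (
    Sample_Date datetime NOT NULL DEFAULT GETDATE(),
    Table_Name  sysname  NOT NULL,
    Row_Count   bigint   NOT NULL
);

-- Run weekly (e.g. as a SQL Server Agent job); index_id 0/1 = heap or clustered index
INSERT INTO dbo.Table_Growth_History (Table_Name, Row_Count)
SELECT t.name, SUM(p.rows)
FROM   sys.tables t
JOIN   sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
WHERE  t.name IN ('Tests', 'Events', 'Event_Details', 'Test_History')
GROUP BY t.name;
```

Comparing successive Sample_Date rows then gives week-over-week growth per table.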
SQL Profiler
SQL Server Profiler is a graphical tool that allows you to monitor and collect server events such as:
Transact-SQL statements and stored procedures, for debugging
Slowly executing queries (duration), for performance analysis
The execution of statements within stored procedures
Deadlocks
This information can be displayed in SQL Server Profiler, stored in trace files, or saved in a Microsoft® SQL Server™ table. SQL Server Profiler can read trace files created in SQL Server Profiler or SQL Trace, as well as Transact-SQL scripts.
Examples of event data captured by SQL Server Profiler include:
The types (classes) of events being traced (e.g. SQL:BatchCompleted).
The SQL Server login names of the users performing activity in SQL Server.
The text of Transact-SQL statements and stored procedures being executed.
The duration of Transact-SQL and other types of events.
The severity of errors.
Event data can be filtered so that only a subset of the event data is collected. For instance, a filter can be defined to trace only those SQL:BatchCompleted events with a duration greater than one second.
Additionally, SQL Server Profiler allows captured event data to be replayed against SQL Server, thereby effectively re-executing the saved events as they occurred originally.
SQL Server Profiler can also be used to:
Monitor the performance of SQL Server.
Debug Transact-SQL statements and stored procedures.
Identify slow-executing queries.
Capture error information in the context of the queries being executed.
Identify scans of large tables.
Using SQL Profiler with Proficy
SQL Profiler is useful for troubleshooting the site-specific spLocal_* stored procedures.
Allows you to see the values of the input parameters
Allows the capture of run statistics.
Open an existing trace or create a new one.
Save to a file.
Add filters to make traces more efficient and easier to read.
The most useful filters are:
LoginName=comxclient
TextData Like %splocal%
ApplicationName Like Proficy%
spSupport_Blocking
Plant Applications is installed with a stored procedure that determines whether blocking is occurring in the database. Heightened permissions may be needed to execute it. Below is an example of running it and observing that there are no blocks.
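A minimal invocation sketch (the result-set layout depends on the installed Plant Applications version):

```sql
-- Requires sufficient permissions; an empty result set indicates no blocking was found
EXEC dbo.spSupport_Blocking;
```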