A whole afternoon stuck on "The operation cannot be completed. See the details."

willtolove
2012-07-18 06:17:39

Could an expert please help: when I install a plugin I get "The operation cannot be completed. See the details." I have switched between many versions, but it still will not install. I am currently on version 3.6.
willtolove
2012-07-18

Still hoping an expert will show up.
Microsoft Internal Training Material: SQL Performance Optimization, Part 5
Contents

Overview
Lesson 1: Index Concepts
Lesson 2: Concepts – Statistics
Lesson 3: Concepts – Query Optimization
Lesson 4: Information Collection and Analysis
Lesson 5: Formulating and Implementing Resolution

Module 6: Troubleshooting Query Performance

Overview

At the end of this module, you will be able to:
- Describe the different types of indexes and how indexes can be used to improve performance.
- Describe what statistics are used for and how they can help in optimizing query performance.
- Describe how queries are optimized.
- Analyze the information collected from various tools.
- Formulate resolutions to query performance problems.

Lesson 1: Index Concepts

Indexes are the most useful tool for improving query performance. Without a useful index, Microsoft® SQL Server™ must search every row on every page in the table to find the rows to return. With a multitable query, SQL Server must sometimes search a table multiple times, so each page is scanned much more than once. Having useful indexes speeds up finding individual rows in a table, as well as finding the matching rows needed to join two tables.

What You Will Learn

After completing this lesson, you will be able to:
- Understand the structure of SQL Server indexes.
- Describe how SQL Server uses indexes to find rows.
- Describe how fillfactor can impact the performance of data retrieval and insertion.
- Describe the different types of fragmentation that can occur within an index.

Recommended Reading
- Chapter 8: "Indexes", Inside SQL Server 2000 by Kalen Delaney
- Chapter 11: "Batches, Stored Procedures and Functions", Inside SQL Server 2000 by Kalen Delaney

Finding Rows without Indexes

With No Indexes, A Table Must Be Scanned

SQL Server keeps track of which pages belong to a table or index by using IAM pages. If there is no clustered index, there is a sysindexes row for the table with an indid value of 0, and that row keeps track of the address of the first IAM for the table. The IAM is a giant bitmap, and every 1 bit indicates that the corresponding extent belongs to the table. The IAM allows SQL Server to do efficient prefetching of the table's extents, but every row still must be examined.

General Index Structure

All SQL Server Indexes Are Organized As B-Trees

Indexes in SQL Server store their information using standard B-trees. A B-tree provides fast access to data by searching on a key value of the index. B-trees cluster records with similar keys. The B stands for balanced, and balancing the tree is a core feature of a B-tree's usefulness. The trees are managed, and branches are grafted as necessary, so that navigating down the tree to find a value and locate a specific record takes only a few page accesses. Because the trees are balanced, finding any record requires about the same amount of resources, and retrieval speed is consistent because the index has the same depth throughout.

Clustered and Nonclustered Indexes

Both Index Types Have Many Common Features

An index consists of a tree with a root from which the navigation begins, possible intermediate index levels, and bottom-level leaf pages. You use the index to find the correct leaf page. The number of levels in an index will vary depending on the number of rows in the table and the size of the key column or columns for the index. If you create an index using a large key, fewer entries will fit on a page, so more pages (and possibly more levels) will be needed for the index. On a qualified select, update, or delete, the correct leaf page will be the lowest page of the tree in which one or more rows with the specified key or keys reside. A qualified operation is one that affects only specific rows that satisfy the conditions of a WHERE clause, as opposed
to accessing the whole table.

An Index Can Have Multiple Node Levels

An index page above the leaf is called a node page. Each index row in node pages contains an index key (or set of keys for a composite index) and a pointer to a page at the next level for which the first key value is the same as the key value in the current index row.

The Leaf Level Contains All Key Values

In any index, whether clustered or nonclustered, the leaf level contains every key value, in key sequence. In SQL Server 2000, the sequence can be either ascending or descending.

The sysindexes Table Contains All Sizing, Location and Distribution Information

Any information about the size of indexes or tables is stored in sysindexes. The only source of any storage location information is the sysindexes table, which keeps track of the address of the root page for every index, and the first IAM page for the index or table. There is also a column for the first page of the table, but this is not guaranteed to be reliable. SQL Server can find all pages belonging to an index or table by examining the IAM pages. Sysindexes contains a pointer to the first IAM page, and each IAM page contains a pointer to the next one.

The Difference between Clustered and Nonclustered Indexes

The main difference between the two types of indexes is how much information is stored at the leaf. The leaf levels of both types of indexes contain all the key values in order, but they also contain other information.

Clustered Indexes

The Leaf Level of a Clustered Index Is the Data

The leaf level of a clustered index contains the data pages, not just the index keys. Another way to say this is that the data itself is part of the clustered index. A clustered index keeps the data in a table ordered around the key. The data pages in the table are kept in a doubly linked list called the page chain. The order of pages in the page chain, and the order of rows on the data pages, is the order of the index key or keys. Deciding which key to cluster on is an important performance consideration. When the index is traversed to the leaf level, the data itself has been retrieved, not simply pointed to.

Uniqueness Is Maintained in Key Values

In SQL Server 2000, all clustered indexes are unique. If you build a clustered index without specifying the unique keyword, SQL Server forces uniqueness by adding a uniqueifier to the rows when necessary. This uniqueifier is a 4-byte value added as an additional sort key to only the rows that have duplicates of their primary sort key. You can see this extra value if you use DBCC PAGE to look at the actual index rows (see the section on index internals).

Finding Rows in a Clustered Index

The Leaf Level of a Clustered Index Contains the Data

A clustered index is like a telephone directory in which all of the rows for customers with the same last name are clustered together in the same part of the book. Just as the organization of a telephone directory makes it easy for a person to search, SQL Server quickly searches a table with a clustered index. Because a clustered index determines the sequence in which rows are stored in a table, there can be only one clustered index for a table at a time.

Performance Considerations

Keeping your clustered key value small increases the number of index rows that can be placed on an index page and decreases the number of levels that must be traversed. This minimizes I/O. As we'll see, the clustered key is duplicated in every nonclustered index row, so keeping your clustered key small will allow more index rows to fit per page in all your indexes.

Note: The query corresponding to the slide is:
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'

Nonclustered Indexes

The Leaf Level of a Nonclustered Index Contains a Bookmark

A nonclustered index is like the index of a textbook. The data is stored in one place and the index is stored in another. Pointers indicate the storage location of the indexed items in the underlying table. In a nonclustered index, the leaf level contains each index key, plus a bookmark that tells SQL Server where to find the data row corresponding to the key in the index. A bookmark can take one of two forms:
- If the table has a clustered index, the bookmark is the clustered index key for the corresponding data row. This clustered key can be multiple columns if the clustered index is composite, or is defined to be non-unique.
- If the table is a heap (in other words, it has no clustered index), the bookmark is a RID, which is an actual row locator in the form File#:Page#:Slot#.

Finding Rows with a NC Index on a Heap

Nonclustered Indexes Are Very Efficient When Searching For A Single Row

After the nonclustered key at the leaf level of the index is found, only one more page access is needed to find the data row. Searching for a single row using a nonclustered index is almost as efficient as searching for a single row in a clustered index. However, if we are searching for multiple rows, such as duplicate values, or keys in a range, anything more than a small number of rows will make the nonclustered
index search very inefficient.

Note: The query corresponding to the slide is:
SELECT lastname, firstname FROM member WHERE lastname BETWEEN 'Master' AND 'Rudd'

Finding Rows with a NC Index on a Clustered Table

A Clustered Key Is Used as the Bookmark for All Nonclustered Indexes

If the table has a clustered index, all columns of the clustered key will be duplicated in the nonclustered index leaf rows, unless there is overlap between the clustered and nonclustered key. For example, if the clustered index is on (lastname, firstname) and a nonclustered index is on firstname, the firstname value will not be duplicated in the nonclustered index leaf rows.

Note: The query corresponding to the slide is:
SELECT lastname, firstname, phone FROM member WHERE firstname = 'Mike'

Covering Indexes

A Covering Index Provides the Fastest Data Access

A covering index contains ALL the fields accessed in the query. Normally, only the columns in the WHERE clause are helpful in determining useful indexes, but for a covering index, all columns must be included. If all columns needed for the query are in the index, SQL Server never needs to access the data pages. If even one column in the query is not part of the index, the data rows must be accessed.

The leaf level of an index is the only level that contains every key value, or set of key values. For a clustered index, the leaf level is the data itself, so in reality, a clustered index ALWAYS covers any query. Nevertheless, for most of our optimization discussions, we only consider nonclustered indexes. Scanning the leaf level of a nonclustered index is almost always faster than scanning a clustered index, so covering indexes are particularly valuable when we need ALL the key values of a particular nonclustered index.

Example: Select an aggregate value of a column with a clustered index. Suppose we have a nonclustered index on price; then this query is covered:
SELECT avg(price) from titles

Since the clustered key is included in every nonclustered index row, the clustered key can be included in the covering. Suppose you have a nonclustered index on price and a clustered index on title_id; then this query is covered:
SELECT title_id, price FROM titles WHERE price between 10 and 20

Performance Considerations

In general, you do want to keep your indexes narrow. However, if you have a critical query that just is not giving you satisfactory performance no matter what you do, you should consider creating an index to cover it, or adding one or two extra columns to an existing index, so that the query will be covered. The leaf level of a nonclustered index is like a 'mini' clustered index, so you can have most of the benefits of clustering, even if there already is another clustered index on the table. The tradeoffs of adding more, wider indexes for covering queries are the added disk space, and more overhead for updating those columns that are now part of the index.

Bug

In general, SQL Server will detect when a query is covered
, and detect the possible covering indexes. However, in some cases, you must force SQL Server to use a covering index by including a WHERE clause, even if the WHERE clause will return ALL the rows in the table. This is SHILOH bug #352079.

Steps to reproduce:

1. Make a copy of the Orders table from Northwind:
USE Northwind
CREATE TABLE [NewOrders] (
    [OrderID] [int] NOT NULL ,
    [CustomerID] [nchar] (5) NULL ,
    [EmployeeID] [int] NULL ,
    [OrderDate] [datetime] NULL ,
    [RequiredDate] [datetime] NULL ,
    [ShippedDate] [datetime] NULL ,
    [ShipVia] [int] NULL ,
    [Freight] [money] NULL ,
    [ShipName] [nvarchar] (40) NULL,
    [ShipAddress] [nvarchar] (60) ,
    [ShipCity] [nvarchar] (15) NULL,
    [ShipRegion] [nvarchar] (15) NULL,
    [ShipPostalCode] [nvarchar] (10) NULL,
    [ShipCountry] [nvarchar] (15) NULL
)
INSERT into NewOrders SELECT * FROM Orders

2. Build a nonclustered index on OrderDate:
create index dateindex on neworders(orderdate)

3. Test the query by looking at the query plan:
select orderdate from NewOrders
The index is being scanned, as expected.

4. Build an index on OrderID:
create index orderid_index on neworders(orderID)

5. Test the query again by looking at the query plan:
select orderdate from NewOrders
Now the TABLE is being scanned, instead of the original index!

Index Intersection

Multiple Indexes Can Be Used On A Single Table

In versions prior to SQL Server 7, only one index could be used for any table to process any single query. The only exception was a query involving an OR. In current SQL Server versions, multiple nonclustered indexes can each be accessed, retrieving a set of keys with bookmarks, and then the result sets can be joined on the common bookmarks. The optimizer weighs the cost of performing the unindexed join on the intermediate result sets against the cost of only using one index, and then scanning the entire result set from that single index.

Fillfactor and Performance

Creating an Index with a Low Fillfactor Delays Page Splits when Inserting

DBCC SHOWCONTIG will show you a low value for "Avg. Page Density" when a low fillfactor has been specified. This is good for inserts and updates, because it will delay the need
to split pages to make room for new rows. It can be bad for scans, because fewer rows will be on each page, and more pages must be read to access the same amount of data. However, this cost will be minimal if the scan density value is good.

Index Reorganization

DBCC SHOWCONTIG Provides Lots of Information

Here's some sample output from running a basic DBCC SHOWCONTIG on the Order Details table in the Northwind database:

DBCC SHOWCONTIG scanning 'Order Details' table...
Table: 'Order Details' (325576198); index ID: 1, database ID: 6
TABLE level scan performed.
- Pages Scanned................................: 9
- Extents Scanned..............................: 6
- Extent Switches..............................: 5
- Avg. Pages per Extent........................: 1.5
- Scan Density [Best Count:Actual Count].......: 33.33% [2:6]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 16.67%
- Avg. Bytes Free per Page.....................: 673.2
- Avg. Page Density (full).....................: 91.68%

By default, DBCC SHOWCONTIG scans the page chain at the leaf level of the specified index and keeps track of the following values:
- Average number of bytes free on each page (Avg. Bytes Free per Page)
- Number of pages accessed (Pages Scanned)
- Number of extents accessed (Extents Scanned)
- Number of times a page had a lower page number than the previous page in the scan (this value for Out of order pages is not displayed, but is used for additional computations)
- Number of times a page in the scan was on a different extent than the previous page in the scan (Extent Switches)

SQL Server also keeps track of all the extents that have been accessed, and then it determines how many gaps are in the used extents. An extent is identified by the page number of its first page. So, if extents 8, 16, 24, 32, and 40 make up an index, there are no gaps. If the extents are 8, 16, 24, and 40, there is one gap. The value in DBCC SHOWCONTIG's output called Extent Scan Fragmentation is computed by dividing the number of gaps by the number of extents, so in this example the Extent Scan Fragmentation is ¼, or 25 percent. A table using extents 8, 24, 40, and 56 has three gaps, and its Extent Scan Fragmentation is ¾, or 75 percent. The maximum number of gaps is the number of extents - 1, so Extent Scan Fragmentation can never be 100 percent.

The value in DBCC SHOWCONTIG's output called Logical Scan Fragmentation is computed by dividing the number of Out of order pages by the number of pages in the table. This value is meaningless in a heap. You can use either the Extent Scan Fragmentation value or the Logical Scan Fragmentation value to determine the general level of fragmentation in a table. The lower the value, the less fragmentation there is. Alternatively, you can use the value called Scan Density, which is computed by dividing the optimum number of extent switches by the actual number of extent switches. A high value means that there is little fragmentation. Scan Density is not valid if the table spans multiple files; therefore, it is less useful than the other values.

SQL Server 2000 Allows Online Defragmentation

You can choose from several methods for removing fragmentation from an index. You could rebuild the index and have SQL Server allocate all new contiguous pages for you. To rebuild the index, you can use a simple DROP INDEX and CREATE INDEX combination, but in many cases using these commands is less than optimal. In particular, if the index is supporting a constraint, you cannot use the DROP INDEX command. Alternatively, you can use DBCC DBREINDEX, which can rebuild all the indexes on a table in one operation, or you can use the drop_existing clause along with CREATE INDEX.

The drawback of these methods is that the table is unavailable while SQL Server is rebuilding the index. When you are rebuilding only nonclustered indexes, SQL Server takes a shared lock on the table, which means that users cannot make modifications, but other processes can SELECT from the table. Of course, those SELECT queries cannot take advantage of the index you are rebuilding, so they might not perform as well as they would otherwise. If you are rebuilding a clustered index, SQL Server takes an exclusive lock and does not allow access to the table, so your data is temporarily unavailable.

SQL Server 2000 lets you defragment an index without completely rebuilding it. DBCC INDEXDEFRAG reorders the leaf-level pages into physical order as well as logical order, but using only the pages that are already allocated to the leaf level. This command does an in-place ordering, which is similar to a sorting technique called bubble sort (you might be familiar with this technique if you've studied and compared various sorting algorithms). In-place ordering can reduce logical fragmentation to 2 percent or less, making an ordered scan through the leaf level much faster.

DBCC INDEXDEFRAG also compacts the pages of an index, based on the original fillfactor. The pages will not always end up with the original fillfactor, but SQL Server uses that value as a goal. The defragmentation process attempts to leave at least enough space for one average-size row on each page. In addition, if SQL Server cannot obtain a lock on a page during the compaction phase of DBCC INDEXDEFRAG, it skips the page and does not return to it. Any empty pages created as a result of compaction are removed.

The algorithm SQL Server 2000 uses for DBCC INDEXDEFRAG finds the next physical page in a file belonging to the index's leaf level and the next logical page in the leaf level to swap it with. To find the next physical page, the algorithm scans the IAM pages belonging to that index. In a database spanning multiple files, in which a table or index has pages on more than one file, SQL Server handles pages on different files separately. SQL Server finds the next logical page by scanning the index's leaf level. After each page move, SQL Server drops all locks and saves the last key on the last page it moved. The next iteration of the algorithm uses the last key to find the next logical page. This process lets other users update the table and index while DBCC INDEXDEFRAG is running.
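The swap pass just described can be modeled in a few lines of Python. This is a simplified sketch, not SQL Server internals: it treats ascending page number as physical order and repeatedly swaps the next physical page into the next logical position.

```python
def indexdefrag(page_chain):
    """Toy model of the DBCC INDEXDEFRAG swap pass: walk the logical
    positions and swap in the next physical page (lowest remaining
    page number), so logical order ends up matching physical order."""
    pages = list(page_chain)
    for i, target in enumerate(sorted(pages)):
        j = pages.index(target)   # current slot of the next physical page
        if j != i:                # swap only when it is out of place
            pages[i], pages[j] = pages[j], pages[i]
    return pages
```

For the leaf level used in the example that follows (logical order 47 22 83 32 12 90 64), this pass produces 12 22 32 47 64 83 90, matching the step-by-step walkthrough.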
Let us look at an example in which an index's leaf level consists of the following pages in the following logical order:

47 22 83 32 12 90 64

The first key is on page 47, and the last key is on page 64. SQL Server would have to scan the pages in this order to retrieve the data in sorted order. As its first step, DBCC INDEXDEFRAG would find the first physical page, 12, and the first logical page, 47. It would then swap the pages, using a temporary buffer as a holding area. After the first swap, the leaf level would look like this:

12 22 83 32 47 90 64

The next physical page is 22, which is also the next logical page, so no work would be necessary. DBCC INDEXDEFRAG would then swap the next physical page, 32, with the next logical page, 83:

12 22 32 83 47 90 64

After the next swap of 47 with 83, the leaf level would look like this:

12 22 32 47 83 90 64

Then, the defragmentation process would swap 64 with 83:

12 22 32 47 64 90 83

and 83 with 90:

12 22 32 47 64 83 90

At the end of the DBCC INDEXDEFRAG operation, the pages in the table or index are not contiguous, but their logical order matches their physical order. Now, if the pages were accessed from disk in sorted order, the head would need to move in only one direction.

Keep in mind that DBCC INDEXDEFRAG uses only pages that are already part of the index's leaf level; it allocates no new pages. In addition, defragmenting a large table can take quite a while, and you will get a report every 5 minutes about the estimated percentage completed. However, except for the locks on the pages being switched, this command needs no additional locks. All the table's other pages and indexes are fully available for your applications to use during the defragmentation process.

If you must completely rebuild an index because you want a new fillfactor, or if simple defragmentation is not enough because you want to remove all fragmentation from your indexes, another SQL Server 2000 improvement makes index rebuilding less of an imposition on the rest of the system. SQL Server 2000 lets you create an index in parallel, that is, using multiple processors, which drastically reduces the time necessary to perform the rebuild. The algorithm SQL Server 2000 uses allows near-linear scaling with the number of processors you use for the rebuild, so four processors will take only one-fourth the time that one processor requires to rebuild an index. System availability increases because the length of time that a table is unavailable decreases. Note that only the SQL Server 2000 Enterprise Edition supports parallel index creation.

Indexes on Views and Computed Columns

Building an Index Gives the Data Physical Existence

Normally, views are only logical, and the rows comprising the view's data are not generated until the view is accessed. The values for computed columns are typically not stored anywhere in the database; only the definition for the computation is stored, and the computation is redone every time a computed column is accessed. The first index on a view must be a clustered index, so that the leaf level can hold all the actual rows that make up the view. Once that clustered index has been built, and the view's data is now physical, additional (nonclustered) indexes can be built. An index on a computed column can be nonclustered, because all we need to store is the index key values.
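As a rough illustration of why only key values need storing, here is a toy sketch in Python (hypothetical table and expression, not SQL Server internals): the "index" on a computed column holds only sorted (computed key, bookmark) pairs, and a lookup becomes a binary search over those keys instead of re-evaluating the expression for every row.

```python
import bisect

# Hypothetical base table: row id -> (unit_price, quantity)
rows = {1: (10.0, 4), 2: (25.0, 2), 3: (10.0, 1), 4: (40.0, 3)}

def total(row):
    """The 'computed column' expression: price * quantity."""
    price, qty = row
    return price * qty

# The index stores only (computed key, bookmark) pairs, sorted on the key;
# the data rows themselves stay in the base table.
index = sorted((total(row), rid) for rid, row in rows.items())
keys = [k for k, _ in index]

def find_by_total(value):
    """Return the bookmarks (row ids) whose computed value equals `value`."""
    lo = bisect.bisect_left(keys, value)
    hi = bisect.bisect_right(keys, value)
    return [rid for _, rid in index[lo:hi]]
```

A lookup such as find_by_total(40.0) touches only the index entries and then follows the bookmarks, which is the same division of labor as a nonclustered index on a computed column.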
Common Prerequisites for Indexed Views and Indexes on Computed Columns

In order for SQL Server to create and use these special indexes, you must have the seven SET options correctly specified:
- ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_PADDING, and ANSI_WARNINGS must all be ON.
- NUMERIC_ROUNDABORT must be OFF.

Only deterministic expressions can be used in the definition of indexed views or indexes on computed columns. See the BOL for the list of deterministic functions and expressions. Property functions are available to check whether a column or view meets the requirements and is indexable:

SELECT OBJECTPROPERTY (Object_id, 'IsIndexable')
SELECT COLUMNPROPERTY (Object_id, column_name, 'IsIndexable')

Schema Binding Guarantees That the Object Definition Won't Change

A view can only be indexed if it has been built with schema binding.

The SQL Server Optimizer Determines If the Indexed View Can Be Used

The query must request a subset of the data contained in the view. The ability of the optimizer to use the indexed view even if the view is not directly referenced is available only in SQL Server 2000 Enterprise Edition. In Standard Edition, you can create indexed views, and you can select directly from them, but the optimizer will not choose to use them if they are not directly referenced.

Examples of Indexed Views

The best candidates for improvement by indexed views are queries performing aggregations and joins. We will explain how useful indexed views may be created for these two major groups of queries. The considerations are also valid for queries and indexed views using both joins and aggregations.

-- Example:
USE Northwind
-- Identify 5 products with the overall biggest discount total.
-- This may be expressed, for example, by two different queries:
-- Q1.
select TOP 5 ProductID,
    SUM(UnitPrice*Quantity) - SUM(UnitPrice*Quantity*(1.00-Discount)) Rebate
from [order details]
group by ProductID
order by Rebate desc
-- Q2.
select TOP 5 ProductID,
    SUM(UnitPrice*Quantity*Discount) Rebate
from [order details]
group by ProductID
order by Rebate desc

The following indexed view will be used to execute Q1:

create view Vdiscount1 with schemabinding
as
select SUM(UnitPrice*Quantity) SumPrice,
    SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice,
    COUNT_BIG(*) Count, ProductID
from dbo.[order details]
group By ProductID

create unique clustered index VDiscountInd on Vdiscount1 (ProductID)

However, it will not be used by Q2, because the indexed view does not contain the SUM(UnitPrice*Quantity*Discount) aggregate. We can construct another indexed view:

create view Vdiscount2 with schemabinding
as
select SUM(UnitPrice*Quantity) SumPrice,
    SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice,
    SUM(UnitPrice*Quantity*Discount) SumDiscoutPrice2,
    COUNT_BIG(*) Count, ProductID
from dbo.[order details]
group By ProductID

create unique clustered index VDiscountInd on Vdiscount2 (ProductID)

This view may be used by both Q1 and Q2. Observe that the indexed view Vdiscount2 will have the same number of rows and only one more column compared to Vdiscount1, and it may be used by more queries. In general, try to design indexed views that may be used by more queries.

The following query asks for the orders with the largest total discount:

-- Q3.
select TOP 3 OrderID,
    SUM(UnitPrice*Quantity*Discount) OrderRebate
from dbo.[order details]
group By OrderID

Q3 can use neither of the Vdiscount views, because the column OrderID is not included in the view definition. To address this variation of the discount analysis query, we may create a different indexed view, similar to the query itself. An attempt to generalize the previous indexed view Vdiscount2 so that all three queries Q1, Q2, and Q3 can take advantage of a single indexed view would require a view with both OrderID and ProductID as grouping columns. Because the OrderID, ProductID combination is unique in the original order details table, the resulting view would have as many rows as the original table, and we would see no savings in using such a view compared to using the original table. Consider the size of the resulting indexed view. In the case of pure aggregation, the indexed view may provide no significant performance gains if its size is close to the size of the original table.
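The row-count argument can be checked with a toy dataset in Python (the rows are invented for illustration): grouping on ProductID collapses many order-detail rows into a few view rows, whereas grouping on the unique (OrderID, ProductID) combination yields exactly one view row per base row, so such a view is no smaller than the table.

```python
# Invented order-detail rows: (order_id, product_id, unit_price, quantity)
order_details = [
    (1, 10, 5.0, 2), (1, 20, 3.0, 1),
    (2, 10, 5.0, 4), (2, 30, 8.0, 1),
    (3, 10, 5.0, 1), (3, 20, 3.0, 5),
]

def view_row_count(key):
    """Number of rows an aggregating view grouped on `key` would hold."""
    return len({key(r) for r in order_details})

rows_by_product = view_row_count(lambda r: r[1])                # like Vdiscount2
rows_by_order_product = view_row_count(lambda r: (r[0], r[1]))  # unique combination
```

Here rows_by_product is 3, while rows_by_order_product equals len(order_details): aggregating on the unique combination saves nothing.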
Com
plex aggregates (STDEV, VARIANCE, AVG) cannot participate in the index view definition. However, SQL Server may use an index
ed
view to execute a query containing AVG aggregate. Query containing STDEV or VARIANCE cannot use index
ed
view to pre-
com
pute these values. The next example shows a query producing the average price for a particular product -- Q4. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order
details
] od, Products p where od.ProductID=p.ProductID group by ProductName, od.ProductID This is an example of index
ed
view that will be consider
ed
by the SQL Server to answer the Q4 create view v3 with schemabinding as select od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) Price, COUNT_BIG(*) Count, SUM(od.Quantity) Units from dbo.[order
details
] od group by od.ProductID go create UNIQUE CLUSTER
ED
index iv3 on v3 (ProductID) go Observe that the view definition does not contain the table Products. The index
ed
view does not ne
ed
to contain all tables us
ed
in the query that uses the index
ed
view. In addition, the following query (same as above Q4 only with one additional search condition) will use the same index
ed
view. Observe that the add
ed
pr
ed
icate references only columns from tables not present in the v3 view definition. -- Q5. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order
details
] od, Products p
where od.ProductID=p.ProductID and p.ProductName like '%tofu%'
group by ProductName, od.ProductID

The following query cannot use the indexed view, because the added search condition od.UnitPrice>10 involves a column from a table in the view definition that is not a grouping column, and the predicate does not appear in the view definition.

-- Q6.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID=p.ProductID and od.UnitPrice>10
group by ProductName, od.ProductID

In contrast to the Q6 case, the following query will use the indexed view v3, because the added predicate is on the grouping column of the view v3.

-- Q7.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID=p.ProductID and od.ProductID in (1,2,13,41)
group by ProductName, od.ProductID

-- The previous query Q6 will use the following indexed view V4:
create view V4 with schemabinding as
select ProductName, od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units, COUNT_BIG(*) Count
from dbo.[order details] od, dbo.Products p
where od.ProductID=p.ProductID and od.UnitPrice>10
group by ProductName, od.ProductID

create unique clustered index VDiscountInd on V4 (ProductName, ProductID)

The same index on the view V4 will also be used for a query where a join to the table Orders is added, for example:

-- Q8.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10
group by ProductName, od.ProductID

We will show several modifications of the query Q8 and explain why each modification cannot use the above view V4.

-- Q8a.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>25
group by ProductName, od.ProductID

Q8a cannot use the indexed view because of the WHERE clause mismatch (od.UnitPrice>25 instead of od.UnitPrice>10). Observe that the table Orders does not participate in the indexed view V4 definition. In spite of that, adding a predicate on this table disallows use of the indexed view, because the added predicate may eliminate additional rows participating in the aggregates, as shown in Q8b.

-- Q8b.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10 and o.OrderDate>'01/01/1998'
group by ProductName, od.ProductID

Locking and Indexes

In General, You Should
Let SQL Server Control the Locking within Indexes

The stored procedure sp_indexoption lets you manually control the unit of locking within an index. It also lets you disallow page locks or row locks within an index. Since these options are available only for indexes, there is no way to control the locking within the data pages of a heap. (But remember that if a table has a clustered index, the data pages are part of the index and are affected by the sp_indexoption setting.) The index options are set for each table or index individually. Two options, AllowRowLocks and AllowPageLocks, are both set to TRUE initially for every table and index. If both of these options are set to FALSE for a table, only full table locks are allowed.

As described in Module 4, SQL Server determines at runtime whether to initially lock rows, pages, or the entire table. The locking of rows (or keys) is heavily favored. The type of locking chosen is based on the number of rows and pages to be scanned, the number of rows on a page, the isolation level in effect, the update activity going on, the number of users on the system needing memory for their own purposes, and so on. SAP databases frequently use sp_indexoption to reduce deadlocks.

Setting vs. Querying

In SQL Server 2000, the procedure sp_indexoption should only be used for setting an index option. To query an option, use the INDEXPROPERTY function.

Lesson 2: Concepts – Statistics

Statistics are the most important tool that the SQL Server query optimizer has to determine the ideal execution plan for a query. Statistics that are out of date or nonexistent seriously jeopardize query performance. SQL Server 2000
computes and stores statistics in a completely different format than all earlier versions of SQL Server. One of the improvements is an increased ability to determine which values are out of the normal range in terms of the number of occurrences. The new statistics maintenance routines are particularly good at determining when a key value has a very unusual skew of data.

What You Will Learn

After completing this lesson, you will be able to:
Define terms related to statistics collected by SQL Server.
Describe how statistics are maintained by SQL Server.
Discuss the autostats feature of SQL Server.
Describe how statistics are used in query optimization.

Recommended Reading

Statistics Used by the Query Optimizer in Microsoft SQL Server 2000: http://msdn.microsoft.com/library/techart/statquery.htm

Definitions

Cardinality
Cardinality means how many unique values exist in the data.

Density
For each index and set of column statistics, SQL Server keeps track of
details about the uniqueness (or density) of the data values encountered, which provides a measure of how selective the index is. A unique index, of course, has the lowest density: by definition, each index entry can point to only one row. A unique index has a density value of 1/(number of rows in the table). Density values range from 0 through 1. Highly selective indexes have density values of 0.10 or lower. For example, a unique index on a table with 8345 rows has a density of 0.00012 (1/8345). If a nonunique nonclustered index has a density of 0.2165 on the same table, each index key can be expected to point to about 1807 rows (0.2165 × 8345). This is probably not selective enough to be more efficient than just scanning the table, so this index is probably not useful. Because driving the query from a nonclustered index means that the pages must be retrieved in index order, an estimated 1807 data page accesses (or logical reads) are needed if there is no clustered index on the table and the leaf level of the index contains the actual RID of the desired data row. The only time a data page does not need to be reaccessed is the occasional coincidence in which two adjacent index entries happen to point to the same data page.

In general, you can think of density as the average number of duplicates. We can also talk about the term "join density," which applies to the average number of duplicates in the foreign key column. This answers the question: in this one-to-many relationship, how many is "many"?

Selectivity
In general, selectivity applies to a particular data value referenced in a WHERE clause. High selectivity means that only a small percentage of the rows satisfy the WHERE clause filter; low selectivity means that many rows will satisfy the filter. For example, in an employees table, the column employee_id is probably very selective, and the column gender is probably not very selective at all.

Statistics
Statistics are a histogram consisting of an even sampling of values for a column or for an index key (or the first column of the key for a
composite index) based on the current data. The histogram is stored in the statblob field of the sysindexes table, which is of type image. (Remember that image data is actually stored in structures separate from the data row itself; the data row merely contains a pointer to the image data. For simplicity's sake, we will talk about the index statistics as being stored in the image field called statblob.) To fully estimate the usefulness of an index, the optimizer also needs to know the number of pages in the table or index; this information is stored in the dpages column of sysindexes.

During the second phase of query optimization, index selection, the query optimizer determines whether an index exists for a column in your WHERE clause, assesses the index's usefulness by determining the selectivity of the clause (that is, how many rows will be returned), and estimates the cost of finding the qualifying rows.

Statistics for a single-column index consist of one histogram and one density value. The multicolumn statistics for one set of columns in a composite index consist of one histogram for the first column in the index and density values for each prefix combination of columns (including the first column alone). The fact that density information is kept for all prefix combinations helps the optimizer decide how useful the index is for joins. Suppose, for example, that an index is composed of three key fields. The density on the first column might be 0.50, which is not too useful. However, as you look at more key columns in the index, the number of rows pointed to is fewer than (or in the worst case, the same as) for the first column alone, so the density value goes down. If you are looking at both the first and second columns, the density might be 0.25, which is somewhat better. Moreover, if you examine three columns, the density might be 0.03, which is highly selective. It does not make sense to refer to the density of only the second column; the lead column density is always needed.
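The way prefix densities shrink can be sketched in Python (an illustration, not SQL Server code; here "density" is simply 1 divided by the number of distinct prefix values, and the expected result size of an equality lookup is density × rowcount):

```python
def prefix_densities(rows):
    """Density for each key prefix of a composite key:
    1 / (number of distinct prefix values).  Lower density = more selective."""
    n_cols = len(rows[0])
    densities = []
    for k in range(1, n_cols + 1):
        distinct_prefixes = {row[:k] for row in rows}
        densities.append(1.0 / len(distinct_prefixes))
    return densities

def expected_rows_per_key(density, total_rows):
    """Estimated rows returned by an equality lookup on the prefix."""
    return density * total_rows
```

With four rows whose first column takes only two values, the density of the first prefix is 0.5, and it can only decrease (or stay equal) as more key columns are added.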
Statistics Maintenance

Statistics Information Tracks the Distribution of Key Values
SQL Server statistics are basically a histogram that contains up to 200 values of a given key column. In addition to the histogram, the statblob field contains the following information:
The time of the last statistics collection
The number of rows used to produce the histogram and density information
The average key length
Densities for other combinations of columns

In the statblob column, up to 200 sample values are stored; the range of key values between each sample value is called a step. The sample value is the endpoint of the range. Three values are stored along with each step: a value called EQ_ROWS, which is the number of rows that have a value equal to that sample value; a value called RANGE_ROWS, which specifies how many other values are inside the range (between two adjacent sample values); and the number of distinct values, or RANGE_DENSITY, of the range.

DBCC SHOW_STATISTICS
The DBCC SHOW_STATISTICS output shows us the first two of these three values, but not the range density. The RANGE_DENSITY is instead used to compute two additional values:
DISTINCT_RANGE_ROWS: the number of distinct rows inside this range (not counting the RANGE_HI_KEY value itself), computed as 1/RANGE_DENSITY.
AVG_RANGE_ROWS: the average number of rows per distinct value, computed as RANGE_DENSITY * RANGE_ROWS.

In addition to statistics on indexes, SQL Server can also keep track of statistics on columns with no indexes. Knowing the density, or the likelihood of a particular value occurring, can help the optimizer determine an optimum processing strategy, even if SQL Server cannot use an index to actually locate the values.

Statistics on Columns
Column statistics can be useful for two main purposes:
When the SQL Server optimizer is determining the optimal join order, it is frequently best to have the smaller input processed first. By "input" we mean the table after all filters in the WHERE clause have been applied. Even if there is no useful index on a column in the WHERE clause, statistics could tell us that only a few rows will qualify, and so the resulting input will be very small.
The SQL Server query optimizer can use column statistics on non-initial columns in a composite nonclustered index to determine whether scanning the leaf level to obtain the bookmarks will be an efficient processing strategy. For example, in the member table in the credit database, the firstname column is almost unique. Suppose we have a nonclustered index on (lastname, firstname), and we issue this query:

select * from member where firstname = 'MPRO'

In this case, statistics on the firstname column would indicate very few rows satisfying this condition, so the optimizer will choose to scan the nonclustered index, since it is smaller than the clustered index (the table). The small number of bookmarks will then be followed to retrieve the actual data.

Manually Updating Statistics
You can also manually force statistics to be updated in one of two ways. You can run the UPDATE STATISTICS command on a table or on one specific index or column statistics, or you can execute the procedure sp_updatestats, which runs UPDATE STATISTICS against all user-defined tables in the current database.

You can create statistics on unindexed columns using the CREATE STATISTICS command or by executing sp_createstats, which creates single-column statistics for all eligible columns for all user tables in the current database. This includes all columns except computed
columns and columns of the ntext, text, or image datatypes, and columns that already have statistics or are the first column of an index.

Autostats

By Default, SQL Server Will Update Statistics on Any Index or Column as Needed
Every database is created with the database options auto create statistics and auto update statistics set to true, but you can turn either one off. You can also turn off automatic updating of statistics for a specific table in one of two ways:

UPDATE STATISTICS
In addition to updating the statistics, the option WITH NORECOMPUTE indicates that the statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option enables automatic updates.

sp_autostats
This procedure sets or unsets a flag for a table to indicate that statistics should or should not be updated automatically. You can also use this procedure with only the table name to find out whether the table is set to automatically have its index statistics updated.

However, setting the database option auto update statistics to FALSE overrides any individual table settings; in other words, no automatic updating of statistics takes place. This is not a recommended practice unless thorough testing has shown you that you do not need the automatic updates or that the performance overhead is more than you can afford.

Trace Flags
Trace flag 205 – reports recompiles due to autostats.
Trace flag 8721 – writes information to the errorlog when autostats has been run.
For more information, see the following Knowledge Base article: Q195565 "INF: How SQL Server 7.0 Autostats Work."

Statistics and Performance

The Performance Penalty of NOT Having Up-To-Date Statistics Far Outweighs the Benefit of Avoiding Automatic Updating
Autostats should be turned off only after thorough testing shows it to be necessary. Because autostats only forces a recompile after a certain number or percentage of rows has been changed, you do not have to make any adjustments for a read-only database.

Lesson 3: Concepts – Query Optimization

What You Will Learn
After
completing this lesson, you will be able to:
Describe the phases of query optimization.
Discuss how SQL Server estimates the selectivity of indexes and columns and how this estimate is used in query optimization.

Recommended Reading

Chapter 15: "The Query Processor", Inside SQL Server 2000 by Kalen Delaney
Chapter 16: "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
Whitepaper about the SQL Server Query Processor Architecture by Hal Berenson and Kalen Delaney: http://msdn.microsoft.com/library/backgrnd/html/sqlquerproc.htm

Phases of Query Optimization

Query Optimization Involves Several Phases

Trivial Plan Optimization
Optimization itself goes through several steps. The first step is something called trivial plan optimization. The whole idea of trivial plan optimization is that cost-based optimization is a bit expensive to run. The optimizer can try a great many possible variations in trying to find the cheapest plan. If SQL Server knows that there is only one really viable plan for a query, it can avoid a lot of that work. A prime example is a query that consists of an INSERT with a VALUES clause: there is only one possible plan. Another example is a SELECT where all the columns are in a unique covering index, and that index is the only one that is usable; there is no other index that has that set of columns in it. These two examples are cases where SQL Server should just generate the plan and not try to find something better. The trivial plan optimizer finds the really obvious plans, which are typically very inexpensive. In fact, all the plans that get through the autoparameterization template result in plans that the trivial plan optimizer can find. Between those two mechanisms, the simple plans tend to be weeded out early in the process and do not pay much of the compilation cost. This is a good thing, because the number of potential plans went up astronomically in SQL Server 7.0 as it added hash joins, merge joins, and index intersections to its list of processing techniques.

Simplification and Statistics Loading
If a plan is not found by the trivial plan optimizer, SQL Server can perform some simplifications, usually thought of as syntactic transformations of the query itself, looking for
commutative properties and operations that can be rearranged. SQL Server can do constant folding and other operations that do not require looking at costs or analyzing what indexes exist, but that can result in a more efficient query. SQL Server then loads the metadata, including the statistics information on the indexes, and the optimizer goes through a series of phases of cost-based optimization.

Cost-Based Optimization Phases
The cost-based optimizer is designed as a set of transformation rules that try various permutations of indexes and join strategies. Because of the number of potential plans in SQL Server 7.0 and SQL Server 2000, if the optimizer just ran through all the combinations and produced a plan, the optimization process would take a very long time to run. Therefore, optimization is broken up into phases, each of which is a set of rules. After each phase is run, the cost of any resulting plan is examined, and if SQL Server determines that the plan is cheap enough, that plan is kept and executed. If the plan is not cheap enough, the optimizer runs the next phase, which is another set of rules. In the vast majority of cases, a good plan will be found in the preliminary phases. Typically, if the plan that a query would have had in SQL Server 6.5 is also the optimal plan in SQL Server 7.0 and SQL Server 2000, the plan will tend to be found either by the trivial plan optimizer or by the first phase of the cost-based optimizer. The rules were intentionally organized to try to make that be true. The plan will probably consist of using a single index and using nested loops. However, every once in a while, because of a lack of statistical information or some other nuance, the optimizer will have to proceed to the later phases of optimization. Sometimes this is because there is a real possibility that the optimizer could find a better plan. When a plan is found, it becomes the optimizer's output, and then SQL Server goes through all the caching mechanisms that we have already discussed in Module 5.

Full Optimization
At some point, the optimizer determines that it has gone through enough preliminary phases, and it reverts to a phase called full optimization. If the optimizer goes through all the preliminary phases and still has not found a cheap plan, it examines the cost of the plan that it has so far. If the cost is above the threshold, the optimizer goes into the full optimization phase. This threshold is configurable, as the configuration option 'cost threshold for parallelism'. In this case, the full optimization phase assumes that the plan should be run in parallel. If the machine is very busy, the plan will end up running in serial, but the optimizer's goal is to produce a good parallel plan. If the cost is below the threshold (or on a single-processor machine), the full optimization phase just uses a brute-force method to find a serial plan.

Selectivity Estimation

Selectivity Is One of the Most Important Pieces of Information
One of the most important things the optimizer needs to know is the number of rows from any table that will meet all the conditions in the query. If there are no restrictions on a table and all the rows will be needed
, the optimizer can determine the number of rows from the sysindexes table. This number is not absolutely guaranteed to be accurate, but it is the number the optimizer uses. If there is a filter on the table in a WHERE clause, the optimizer needs statistics information. Indexes automatically maintain statistics, and the optimizer will use these values to determine the usefulness of the index. If there is no index on the column involved in the filter, then column statistics can be used or generated.
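How step statistics (described in Lesson 2) drive such an estimate can be sketched in Python. This is an illustration with hypothetical step tuples, not SQL Server's actual estimator, which is considerably more complex: an equality match on a sampled endpoint uses EQ_ROWS, and a value inside a range uses AVG_RANGE_ROWS (RANGE_ROWS divided by the distinct count in the range).

```python
def estimate_eq_rows(histogram, value):
    """Estimate the number of rows equal to `value` from step statistics.

    histogram: steps sorted by range_hi_key, each a tuple
    (range_hi_key, eq_rows, range_rows, distinct_range_rows)."""
    for hi_key, eq_rows, range_rows, distinct_range_rows in histogram:
        if value == hi_key:
            return eq_rows                           # exact sample endpoint
        if value < hi_key:                           # value falls inside this step's range
            if distinct_range_rows == 0:
                return 0                             # empty range
            return range_rows / distinct_range_rows  # average rows per distinct value
    return 0  # beyond the last sampled key
```

For example, with steps [(10, 5, 8, 4), (20, 3, 9, 3)], a filter col = 10 is estimated at 5 rows, while col = 15 is estimated at 9/3 = 3 rows.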
Optimizing Search Arguments

In General, the Filters in the WHERE Clause Determine Which Indexes Will Be Useful
If an indexed column is referenced in a search argument (SARG), the optimizer will analyze the cost of using that index. A SARG has the form:

column Operator value
value Operator column

The Operator must be one of =, >, >=, <, <=. The value can be a constant, an operation, or a variable. Some functions can also be treated as SARGs. These queries have SARGs, and a nonclustered index on firstname will be used in most cases:

select * from member where firstname < 'AKKG'
select * from member where firstname = substring('HAAKGALSFJA', 2, 5)
select * from member where firstname = 'AA' + 'KG'

declare @name char(4)
set @name = 'AKKG'
select * from member where firstname < @name

Not all functions can be used in SARGs:

select * from charge where charge_amt < 2*2
select * from charge where charge_amt < sqrt(16)
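To see why the operator restriction matters, here is a small Python sketch (an analogy, not SQL Server internals): each allowed operator maps directly onto a range seek over sorted index keys, which is exactly what a predicate that the optimizer cannot reduce to `column Operator constant` is unable to do.

```python
import bisect

def sarg_range_lookup(sorted_keys, op, value):
    """A SARG lets the engine seek into a sorted index rather than scan every row.
    Returns the range of index positions satisfying `col <op> value`."""
    if op == '=':
        return range(bisect.bisect_left(sorted_keys, value),
                     bisect.bisect_right(sorted_keys, value))
    if op == '<':
        return range(0, bisect.bisect_left(sorted_keys, value))
    if op == '<=':
        return range(0, bisect.bisect_right(sorted_keys, value))
    if op == '>':
        return range(bisect.bisect_right(sorted_keys, value), len(sorted_keys))
    if op == '>=':
        return range(bisect.bisect_left(sorted_keys, value), len(sorted_keys))
    raise ValueError('not a SARG operator: ' + op)
```

Each lookup touches only a contiguous slice of the index; a non-SARG predicate would instead have to evaluate its expression against every row.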
Compare these queries to ones using = instead of <. With =, the optimizer can use the density information to come up with a good row estimate, even if it is not going to actually perform the function's calculations.

A filter with a variable is usually a SARG. The issue is: can the optimizer come up with useful costing information? A filter with a variable is not a SARG if the variable is of a different datatype and the column must be converted to the variable's datatype. For more information, see the following Knowledge Base article: Q198625 Enter Title of KB Article Here

use credit
go
CREATE TABLE [member2] (
  [member_no] [smallint] NOT NULL ,
  [lastname] [shortstring] NOT NULL ,
  [firstname] [shortstring] NOT NULL ,
  [middleinitial] [letter] NULL ,
  [street] [shortstring] NOT NULL ,
  [city] [shortstring] NOT NULL ,
  [state_prov] [statecode] NOT NULL ,
  [country] [countrycode] NOT NULL ,
  [mail_code] [mailcode] NOT NULL
)
GO
insert into member2
select member_no, lastname, firstname, middleinitial, street, city, state_prov, country, mail_code
from member

alter table member2 add constraint pk_member2
primary key clustered (lastname, member_no, firstname, country)

declare @id int
set @id = 47
update member2
set city = city + ' City', state_prov = state_prov + ' State'
where lastname = 'Barr'
and member_no = @id
and firstname = 'URQYJBFVRRPWKVW'
and country = 'USA'

These queries do not have SARGs, and a table scan will be done:

select * from member where substring(lastname, 1, 2) = 'BA'

Some non-SARGs can be converted:

select * from member where lastname like 'ba%'

In some cases, you can rewrite your query to turn a non-SARG into a SARG; for example, you can rewrite the substring query above as the LIKE query that follows it.

Join Order and Types of Joins

Join Order and Strategy Is Determined by the Optimizer
The execution plan output displays the join order from top to bottom; that is, the table listed on top is the first one accessed in a join. You can override the optimizer's join order decision in two ways:
OPTION (FORCE ORDER) applies to one query.
SET FORCEPLAN ON applies to the entire session, until set OFF.
If either of these options is used, the join order is determined by the order in which the tables are listed in the query's FROM clause, and no optimization of the join order is done. Forcing the join order may force a particular join strategy. For example, in most outer join operations, the outer table is processed first, and a nested loops join is done. However, if you force the inner table to be accessed first, a merge join will need to be done.
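The outer/inner roles can be illustrated with a minimal Python sketch of a nested loops join (an analogy, not SQL Server's implementation; the dictionary stands in for a useful index on the inner table's join column):

```python
def nested_loops_join(outer, inner, outer_key, inner_key):
    """Nested loops: for each outer row, look up matches in the inner input.
    Cheap when the outer input is small and the inner input is indexed."""
    # Build a lookup structure over the inner input, mimicking an index
    # on the join predicate so the inner access is a seek, not a full scan.
    index = {}
    for row in inner:
        index.setdefault(row[inner_key], []).append(row)
    result = []
    for orow in outer:
        for irow in index.get(orow[outer_key], []):
            result.append((orow, irow))
    return result
```

Swapping which input plays the outer role changes the cost: the outer input is walked in full, so the optimizer prefers the smaller input there.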
Compare the query plan for this query with and without the FORCE ORDER hint:

select * from titles
right join publishers on titles.pub_id = publishers.pub_id
-- OPTION (FORCE ORDER)

Nested Loop Join
A nested iteration is when the query optimizer constructs a set of nested loops, and the result set grows as it progresses through the rows. The query optimizer performs the following steps:
1. Finds a row from the first table.
2. Uses that row to scan the next table.
3. Uses the result of the previous table to scan the next table.

Evaluating Join Combinations
The query optimizer automatically evaluates at least four or more possible join combinations, even if those combinations are not specified in the join predicate. You do not have to add redundant clauses. The query optimizer balances the cost and uses statistics to determine the number of join combinations that it evaluates. Evaluating every possible join combination is inefficient and costly.

Evaluating Cost of Query Performance
When the query optimizer performs a nested join, you should be aware that certain costs are incurred. Nested loop joins are far superior to both merge joins and hash joins when executing small transactions, such as those affecting only a small set of rows. The query optimizer:
Uses nested loop joins if the outer input is quite small and the inner input is indexed and quite large.
Uses the smaller input as the outer table.
Requires that a useful index exist on the join predicate for the inner table.
Always uses a nested loop join strategy if the join operation uses an operator other than an equality operator.

Merge Joins
The columns of the join conditions are used as inputs to process a merge join. SQL Server performs the following steps when using a merge join strategy:
1. Gets the first input values from each input set.
2. Compares the input values.
3. Performs a merge algorithm:
• If the input values are equal, the rows are returned.
• If the input values are not equal, the lower value is discarded, and the next input value from that input is used for the next comparison.
4. Repeats the process until all of the rows from one of the input sets have been processed.
5. Evaluates any remaining search conditions in the query and returns only rows that qualify.

Note Only one pass per input is done. The merge join operation ends after all of the input values of one input have been evaluated. The remaining values from the other input are not processed.
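The steps above can be sketched in Python (a simplified one-to-one case with distinct, pre-sorted keys on each side; the real operator also handles duplicates by rewinding, as described below):

```python
def merge_join(left, right):
    """Merge join over two inputs already sorted on the join column.
    Equal values are joined; the lower of two unequal values is discarded."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] == right[j]:
            result.append((left[i], right[j]))
            i += 1
            j += 1
        elif left[i] < right[j]:
            i += 1   # discard the lower value, advance that input
        else:
            j += 1
    return result   # one pass; leftover values on either side are never touched
```

Note that each input is read exactly once, which is why the inputs must already be sorted for the strategy to apply.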
Requires That Joined Columns Are Sorted
If you execute a query with join operations, and the joined columns are in sorted order, the query optimizer can process the query by using a merge join strategy. A merge join is very efficient because the columns are already sorted, and it requires less page I/O.

Evaluates Sorted Values
For the query optimizer to use a merge join, the inputs must be sorted. The query optimizer evaluates sorted values in the following order:
1. Uses an existing index tree (most typical). The query optimizer can use the index tree from a clustered index or a covered nonclustered index.
2. Leverages sort operations that the GROUP BY, ORDER BY, and CUBE clauses use. The sorting operation only has to be performed once.
3. Performs its own sort operation, in which case a SORT operator is displayed when graphically viewing the execution plan. The query optimizer does this very rarely.

Performance Considerations
Consider the following facts about the query optimizer's use of the merge join:
SQL Server performs a merge join for all types of join operations (except cross join or full join operations), including UNION operations.
A merge join operation may be a one-to-one, one-to-many, or many-to-many operation. If the merge join is a many-to-many operation, SQL Server uses a temporary table to store the rows. If duplicate values from each input exist, one of the inputs rewinds to the start of the duplicates as each duplicate value from the other input is processed.
Query performance for a merge join is very fast, but the cost can be high if the query optimizer must perform its own sort operation. If the data volume is large and the desired data can be obtained presorted from existing Balanced-Tree (B-Tree) indexes, merge join is often the fastest join algorithm.
A merge join is typically used if the two join inputs have a large amount of data and are sorted on their join columns (for example, if the join inputs were obtained by scanning sorted indexes).
Merge join operations can only be performed with an equality operator in the join predicate.

Hash Joins
Hashing is a strategy for dividing data into equal sets of a manageable size based on a given property or characteristic. The grouped data can then be used to determine whether a particular data item matches an existing value.

Note Duplicate data or ranges of data are not useful for hash joins, because the data is not organized together or in order.

When a Hash Join Is Used
The query optimizer uses a hash join option when it estimates that it is more efficient than processing queries by using a nested loop or merge join. It typically uses a hash join when an index does not exist or when existing indexes are not useful.

Assigns a Build and Probe Input
The query optimizer assigns a build and probe input. If the query optimizer incorrectly assigns the build and probe input (this may occur because of imprecise density estimates), it reverses them dynamically. The ability to change input roles dynamically is called role reversal.

Build input consists of the column values from the table with the lowest number of rows. Build input creates a hash table in memory to store these values. The hash bucket is a storage place in the hash table into which each row of the build input is inserted.
Rows from one of the join tables are placed into the hash bucket where the hash key value of the row matches the hash key value of the bucket. Hash buckets are stored as a linked list and only contain the columns that are needed for the query. A hash table contains hash buckets. The hash table is created from the build input.

Probe input consists of the column values from the table with the most rows. Probe input is what the build input checks to find a match in the hash buckets.

Note The query optimizer uses column or index statistics to help determine which input is the smaller of the two.

Processing a Hash Join
The following list is a simplified description of how the query optimizer processes a hash join. It is not intended to be comprehensive, because the algorithm is very complex. SQL Server:
1. Reads the probe input. Each probe input row is processed one row at a time.
2. Performs the hash algorithm against each probe input row and generates a hash key value.
3. Finds the hash bucket that matches the hash key value.
4. Accesses the hash bucket and looks for the matching row.
5. Returns the row if a match is found.

Performance Considerations
Consider the following facts about the hash joins that the query optimizer uses:
Similar to merge joins, a hash join is very efficient, because it uses hash buckets, which are like a dynamic index but with less overhead for combining rows.
Hash joins can be performed for all types of join operations (except cross join operations), including UNION and DIFFERENCE operations.
A hash operator can remove duplicates and group data, such as SUM(salary) GROUP BY department. The query optimizer uses only one input for both the build and probe roles.
If the join inputs are large and are of similar size, the performance of a hash join operation is similar to a merge join with prior sorting. However, if the sizes of the join inputs are significantly different, the performance of a hash join is often much faster.
Hash joins can process large, unsorted, non-indexed inputs efficiently. Hash joins are useful in complex queries because the intermediate results:
• Are not indexed (unless explicitly saved to disk and then indexed).
• Are often not sorted for the next operation in the execution plan.
The query optimizer can identify incorrect estimates and make corrections dynamically to process the query more efficiently.
A hash join reduces the need for database denormalization. Denormalization is typically used to achieve better performance by reducing join operations, despite the redundancy it introduces (such as inconsistent updates). Hash joins give you the option to vertically partition your data as part of your physical database design. Vertical partitioning represents groups of columns from a single table in separate files or indexes.

Subquery Performance

Joins Are Not Inherently Better Than Subqueries
Here is an example showing three different ways to update a table, using a second table for lookup purposes. The first uses a JOIN with the update, the second uses a regular subquery introduced
with IN, and the third uses a correlated subquery. All three yield nearly identical performance.

Note Performance comparisons cannot just be made based on I/Os. With hashing and merging techniques, the number of reads may be the same for two queries, yet one may take a lot longer and use more memory resources. Also, always be sure to monitor statistics time.

Suppose you want to add a 5 percent discount to order items in the Order Details table for which the supplier is Exotic Liquids, whose supplierid is 1.

-- JOIN solution
BEGIN TRAN
UPDATE OD SET discount = discount + 0.05
FROM [Order Details] AS OD
JOIN Products AS P ON OD.productid = P.productid
WHERE supplierid = 1
ROLLBACK TRAN

-- Regular subquery solution
BEGIN TRAN
UPDATE [Order Details] SET discount = discount + 0.05
WHERE productid IN (SELECT productid FROM Products WHERE supplierid = 1)
ROLLBACK TRAN

-- Correlated subquery solution
BEGIN TRAN
UPDATE [Order Details] SET discount = discount + 0.05
WHERE EXISTS (SELECT supplierid FROM Products
              WHERE [Order Details].productid = Products.productid
              AND supplierid = 1)
ROLLBACK TRAN

Internally, Your Join May Be Rewritten
SQL Server's query processor has many different ways of resolving your JOIN expressions. Subqueries may be converted to a JOIN with an implied distinct, which may result in a logical operator of SEMI JOIN.
Com
pare the plans of the first two queries: USE cr
ed
it select member_no from member where member_no in (select member_no from charge) select distinct m.member_no from member m join charge c on m.member_no = c.member_no The second query uses a HASH MATCH as the final step to remove the duplicates. The first query only had to do a semi join. For these queries, although the I/O values are the same, the first query (with the subquery) runs much faster (almost twice as fast). Another similar looking join is
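As the note above says, I/O counts alone can mislead, so elapsed and CPU times should be checked as well. A minimal sketch of how one of the three UPDATE variants might be measured (assuming the Northwind sample database; repeat the same harness for each variant and compare the STATISTICS TIME output):

```sql
-- Hypothetical timing harness for comparing the UPDATE variants above.
USE Northwind
GO
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE productid IN (SELECT productid FROM Products WHERE supplierid = 1)
ROLLBACK TRAN
GO
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
GO
```

The ROLLBACK keeps the data unchanged between runs, so each variant starts from the same state.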
Microsoft Internal Material: SQL Performance Optimization 3

Contents
Overview
Lesson 1: Concepts – Locks and Lock Manager
Lesson 2: Concepts – Batch and Transaction
Lesson 3: Concepts – Locks and Applications
Lesson 4: Information Collection and Analysis
Lesson 5: Concepts – Formulating and Implementing Resolution

Module 4: Troubleshooting Locking and Blocking

Overview
At the end of this module, you will be able to:
Discuss how the lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.
Describe the various transaction types and how transactions differ from batches.
Describe how to troubleshoot blocking and locking issues.
Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.
Formulate hypotheses to resolve locking and blocking issues.

Lesson 1: Concepts – Locks and Lock Manager
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
Describe the locking architecture used by SQL Server.
Identify the various lock modes used by SQL Server.
Discuss lock compatibility and concurrent access.
Identify different types of lock resources.
Discuss dynamic locking and lock escalation.
Differentiate locks, latches, and other SQL Server internal “locking” mechanisms such as spinlocks and other synchronization objects.
Recommended
Reading
Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney
SOX000821700049 – SQL 7.0 How to interpret lock resource Ids
SOX000925700237 – TITLE: Lock escalation in SQL 7.0
SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction

Locking Concepts
Delivery Tip: Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file.

Transactions in SQL Server provide the ACID properties:

Atomicity
A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an “all or nothing” operation.

Consistency
An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent.

Isolation
Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert, but it will see only complete changes.

Durability
After a transaction commits, its effects will persist even if there are system failures.

Consistency and isolation are the most important in describing SQL Server’s locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation.

Definition of Dependency: A set of transactions can run concurrently if their outputs are disjoint from the union of one another’s input and output sets. For example, if T1 writes some object that is in T2’s input or output set, there is a dependency between T1 and T2.

Bad Dependencies
These include lost updates, dirty reads, non-repeatable reads, and phantoms.

ANSI SQL Isolation Levels
An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, the REPEATABLE READ and SERIALIZABLE isolation levels were synonymous: there was no way to prevent non-repeatable reads while not preventing phantoms. By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To use either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example:

USE pubs
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO
DBCC USEROPTIONS
GO

Multigranular Locking
In our example, if one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each transaction believes it has exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it had locked
exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. The intent lock is the answer to this problem.

Intent Lock
Intent lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends on placing shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table.

Lock Mode
The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table:

SELECT * FROM master.dbo.spt_values WHERE type = N'L'

However, the req_mode column of master.dbo.syslockinfo has a lock mode code that is one less than the code values shown here. For example, a req_mode value of 3 represents the Shared lock mode rather than the Schema Modification lock mode.

Lock Compatibility
These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and we have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly; they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row-level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX. Another type of table lock is the schema stability lock (Sch-S), which is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks.

Locking Resources
Delivery Tip: Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides.

SQL Server can lock these resources:
Item – Description
DB – A database.
File – A database file.
Index – An entire index of a table.
Table – An entire table, including all data and indexes.
Extent – A contiguous group of data pages or index pages.
Page – An 8-KB data page or index page.
Key – Row lock within an index.
Key-range – A key-range. Used to lock ranges between records in a table to prevent phantom insertions or deletions into a set of records. Ensures serializable transactions.
RID – A Row Identifier. Used to individually lock a single row within a table.
Application – A lock resource defined by an application. The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that resource is already locked.

Some of the resources have “sub-resources.” The following are the sub-resources displayed
by the sp_lock output:

Database Lock Sub-Resources:
Full Database Lock (default)
[BULK-OP-DB] – Bulk Operation Lock for Database
[BULK-OP-LOG] – Bulk Operation Lock for Log

Table Lock Sub-Resources:
Full Table Lock (default)
[UPD-STATS] – Update statistics Lock
[COMPILE] – Compile Lock

Index Lock Sub-Resources:
Full Index Lock (default)
[INDEX_ID] – Index ID Lock
[INDEX_NAME] – Index Name Lock
[BULK_ALLOC] – Bulk Allocation Lock
[DEFRAG] – Defragmentation Lock

For more information, see also… SOX000821700049 SQL 7.0 How to interpret lock resource Ids

Lock Resource Block
The resource type has the following resource block format:
Resource Type (Code) – Content
DB (2) – Data 1: sub-resource; Data 2: 0; Data 3: 0
File (3) – Data 1: File ID; Data 2: 0; Data 3: 0
Index (4) – Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID
Table (5) – Data 1: Object ID; Data 2: sub-resource; Data 3: 0
Page (6) – Data 1: Page Number; Data 3: 0
Key (7) – Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key
Extent (8) – Data 1: Extent ID; Data 3: 0
RID (9) – Data 1: RID; Data 3: 0
Application (10) – Data 1: Application resource name

The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode a value from this column using the information above, let us assume we have the following value:

0x000705001F83D775010002014F0BEC4E

With byte swapping within each field, this can be decoded
as:
Byte 0: Flag – 0x00
Byte 1: Resource Type – 0x07 (Key)
Byte 2-3: DBID – 0x0005
Byte 4-7: ObjectID – 0x75D7831F (1977058079)
Byte 8-9: IndexID – 0x0001
Byte 10-16: Hash Key value – 0x02014F0BEC4E

For more information about how to decode this value, see also… Inside SQL Server 2000, pages 803 and 806.

Key Range Locking
To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as

WHERE salary BETWEEN 30000 AND 50000

SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level. The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap, where columns c1 and c2 were indexed, the hash would contain contributions from c1, c2, and the RID. A key range lock applied to a particular key means that all keys between the value locked and the next value would be locked for all data modification. Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following select was executed in a transaction with isolation level SERIALIZABLE:

SELECT * FROM members
WHERE first_name between ‘Al’ and ‘Carl’

If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause. Prior to SQL Server 7.0, page locking was used
to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key range locking lets SQL Server lock only a much more restrictive area of the table.

Impact
Key-range locking ensures that these scenarios are SERIALIZABLE:
Range scan query
Singleton fetch of nonexistent row
Delete operation
Insert operation

However, the following conditions must be satisfied before key-range locking can occur:
The transaction isolation level must be set to SERIALIZABLE.
The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data.

Key Range Lock Mode
Again, the req_mode column of master.dbo.syslockinfo has a lock mode code that is one less than the code values shown here.

Dynamic Locking
When modifying individual rows, SQL Server typically takes row locks to maximize concurrency (for example, OLTP, order-entry applications). When scanning larger volumes of data, it is more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, DSS, data warehouse, reporting).

Locking Decision
The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a more coarse lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed.
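The key-range behavior described above can be observed directly with sp_lock. A minimal sketch, assuming the members table from the SERIALIZABLE example earlier (the table and database are only illustrative):

```sql
-- Observe key-range locks taken under SERIALIZABLE.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM members
WHERE first_name between 'Al' and 'Carl'
-- While the transaction is open, list this session's locks;
-- KEY locks in a range mode (for example, RangeS-S) cover the scanned range.
EXEC sp_lock @@SPID
ROLLBACK TRAN
```

Running the sp_lock call from within the same transaction is what lets you see the range locks before they are released at rollback.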
Lock Escalation
When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction has acquired 1250 locks, the lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), the Lock Manager will attempt escalation again. The Lock Manager looks at the lock memory it is using and if it is more than 40 percent of SQL Server’s allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed. It then repeats the search operation until all scans have been escalated or until the memory used
drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce the lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE = 60). Lock escalation may also be done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables. Trace Flag 1211 disables lock escalation.

Important: Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global, and disabling lock escalation could lead to out-of-memory situations, extremely poor performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, Trace Flag 1200 shows all lock activity, so it should not be used on a production system. For more information, see also… SOX000925700237 “TITLE: SQL 7.0 Lock escalation in SQL 7.0”

Lock Timeout
Application Lock Timeout
An application can set a lock timeout for a session with the SET option:

SET LOCK_TIMEOUT N

where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout.

Note: After a lock timeout, any transaction containing the statement is rolled back or canceled by SQL Server 2000 (bug #352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222; if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug #352640 is fixed
in hotfix build 8.00.266, whereby a lock timeout will only

Internal Lock Timeout
At times, internal operations within SQL Server will attempt to acquire locks via the lock manager. Typically, these lock requests are issued with “no waiting.” For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock: Timeout events, internal lock timeouts typically have a duration value of zero.

Lock Duration
Lock Mode and Transaction Isolation Level
For the REPEATABLE READ transaction isolation level, update locks are held until data is read and processed
, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matched the search criteria; if not, then the update lock is released; otherwise, we get an exclusive lock and make the modification. Consider the following query:

use northwind
go
dbcc traceon(3604, 1200, 1211) -- turn on lock tracing
                               -- and disable escalation
go
set transaction isolation level repeatable read
begin tran
update dbo.[order details]
set discount = convert (real, discount)
where discount = 0.0
exec sp_lock

Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that an exclusive table lock is not held at the end.

Warning: Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done an extensive study to make sure it helps with performance. These trace flags are used
here for illustration and learning purposes only.

Lock Ownership
Most of the locking discussion in this lesson relates to locks owned by “transactions.” In addition to transactions, cursors and sessions can be owners of locks, and they both affect how long locks are held. For every row that is fetched, when the SCROLL_LOCKS option is used, regardless of the state of a transaction, a cursor lock is held until the next row is fetched or the cursor is closed. Locks owned by a session are outside the scope of a transaction. The duration of these locks is bounded by the connection, and the process will continue to hold these locks until the process disconnects. A typical lock owned by a session is the database (DB) lock.

Locking – Read Committed Scan
Under the read committed isolation level, when database pages are scanned, shared locks are held while the page is read and processed. The shared locks are released “behind” the scan and allow other transactions to update rows. It is important to note that the shared lock currently acquired will not be released until the shared lock for the next page is successfully acquired
(this is commonly known as “crabbing”). If the same pages are scanned again, rows may be modified or deleted by other transactions.

Locking – Repeatable Read Scan
Under the repeatable read isolation level, when database pages are scanned, shared locks are held while the page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions from updating rows. If the same pages are scanned again, previously scanned rows will not change, but new rows may be added by other transactions.

Locking – Serializable Read Scan
Under the serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on the scanned key range. SQL Server continues to hold these shared locks until the end of the transaction. Because key range locks are held, not only will this prevent other transactions from modifying the rows, no new rows can be inserted.
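A minimal two-connection sketch of the repeatable read behavior just described (assuming the Northwind sample database; run the two sessions side by side):

```sql
-- Connection 1: hold a REPEATABLE READ scan open.
use northwind
set transaction isolation level repeatable read
begin tran
select count(*) from dbo.[order details]  -- shared locks are held
-- Leave the transaction open for now.

-- Connection 2 (a separate session): an UPDATE of a scanned row blocks
-- until Connection 1 finishes, but an INSERT of a new row succeeds,
-- because no key-range locks are taken at this isolation level.

-- Back in Connection 1:
rollback tran  -- releases the shared locks; Connection 2 unblocks
```

Repeating the same experiment under SERIALIZABLE would also block the insert, which is exactly the key-range difference described above.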
Prefetch and Isolation Level
Prefetch and Locking Behavior
The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets.

The prefetch feature takes a different approach. It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations simultaneously. For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetched data pages are then revisited later to avoid waiting for each individual page read to complete in a serial fashion.

This data access path requires that a lock be held between the prefetch request and the row lookup to stabilize the row on the page so it is not moved by a page split or clustered key update. For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level. With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the entire transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support to release the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because the isolation level can be set at the query level via a lock class, but no lower. Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring, you may see a single SPID that holds hundreds of Shared KEY or PAG locks even though the connection’s isolation level is READ COMMITTED.
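One way to spot this lock buildup is to count the locks each session holds. A minimal sketch against the SQL Server 2000 syslockinfo system table mentioned earlier:

```sql
-- Count locks held per session; a single SPID holding hundreds of
-- shared KEY/PAG locks under READ COMMITTED can indicate prefetch.
SELECT req_spid AS spid, COUNT(*) AS lock_count
FROM master.dbo.syslockinfo
GROUP BY req_spid
ORDER BY lock_count DESC
```

The req_mode and rsc_type columns of the same table can then be used to break a suspicious SPID's locks down by mode and resource type.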
Isolation level can be determined from DBCC PSS output. For details about this behavior, see “SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction”.

Other Locking Mechanisms
The lock manager does not manage latches and spinlocks.

Latches
Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activities via the DBCC SQLPERF(‘WAITSTATS’) command.

Spinlock
A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multi-processor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that will check to see if the lock is available and, if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a “sleep” (wait) state for a pre-determined amount of time. When it wakes, hopefully, the lock is free and available. If not, the loop starts again, and it is terminated only when the lock is acquired.
The reason for implementing a spinlock is that it is probably less costly to “spin” for a short time rather than yielding the processor. Yielding the processor forces an expensive context switch where:
The old thread’s state must be saved
The new thread’s state must be reloaded
The data stored in the L1 and L2 caches becomes useless to the processor

On a single-processor computer, the loop is not useful because no other thread can be running and thus, no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately.

Lesson 2: Concepts – Batch and Transaction
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
Review batch processing and error checking.
Review explicit, implicit, and autocommit transactions and the transaction nesting level.
Discuss how commit and rollback transaction done in a stored procedure and trigger affect the transaction nesting level.
Discuss the various transaction isolation levels and their impact on locking.
Discuss the difference between aborting a statement, a transaction, and a batch.
Describe how @@error, @@transcount, and @@rowcount can be used for error checking and handling.
Recommended
Reading
Chapter 12 “Transactions and Triggers”, Inside SQL Server 2000 by Kalen Delaney

Batch Definition
SQL Profiler Statements and Batches
To help further your understanding of what a batch is and what a statement is, you can use SQL Profiler to study the definition of batch and statement.

Try This: Using SQL Profiler to Analyze a Batch
1. Log on to a server with Query Analyzer.
2. Start the SQL Profiler against the same server.
3. Start a trace using the “StandardSQLProfiler” template.
4. Execute the following using Query Analyzer:
SELECT @@VERSION
SELECT @@SPID
The ‘SQL:BatchCompleted’ event is captured by the trace. It shows both statements as a single batch.
5. Now execute the following using Query Analyzer:
{call sp_who()}
What shows up? The ‘RPC:Completed’ event with the sp_who information. RPC is simply another entry point to SQL Server to call stored procedures with native data types. This allows one to avoid parsing. The ‘RPC:Completed’ event should be considered the same as a batch for the purposes of this discussion.
Stop the current trace and start a new trace using the “SQLProfilerTSQL_SPs” template. Issue the same command as outlined in step 5 above. Looking at the output, not only can you see the batch markers but each statement as executed
within the batch.

Autocommit, Explicit, and Implicit Transactions
Autocommit Transaction Mode (Default)
Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transaction mode is set on. When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode.

Explicit Transaction Mode
An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement.

Implicit Transaction Mode
SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure ‘user options’ 2. (Actually, the bit mask 0x2 must be turned on for the user option, so you might have to perform an ‘OR’ operation with the existing user option value.) See SQL Server 2000 Books Online on how to turn on implicit transactions under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm).

Transaction Nesting
Explicit transactions can be nested.
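A minimal sketch of how @@TRANCOUNT tracks the nesting level as transactions open and close:

```sql
-- @@TRANCOUNT reflects the current transaction nesting level.
SELECT @@TRANCOUNT   -- 0: not in a transaction
BEGIN TRAN
SELECT @@TRANCOUNT   -- 1
BEGIN TRAN
SELECT @@TRANCOUNT   -- 2
COMMIT TRAN          -- inner commit only decrements the count
SELECT @@TRANCOUNT   -- 1
ROLLBACK TRAN        -- rolls back all work; count drops to 0
SELECT @@TRANCOUNT   -- 0
```

Note that the inner COMMIT does not make the inner work durable; only the outcome of the outermost transaction decides that, as described next.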
Committing inner transactions is ignored by SQL Server other than to decrement @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed. Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction. If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transactions, it rolls back all the nested transactions, including the outermost transaction. The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one. Each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested
transactions and decrements @@TRANCOUNT to 0. When you are unsure if you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more. If @@TRANCOUNT is 0 you are not in a transaction. You can also find the transaction nesting level by checking the sysprocess.open_tran column. See SQL Server 2000 Books Online topic “Nesting Transactions” (acdata.chm::/ac_8_md_06_66nq.htm) for more information. Statement, Transaction, and Batch Abort One batch can have many statements and one transaction can have multiple statements, also. One transaction can span multiple batches and one batch can have multiple transactions. Statement Abort Currently executing statement is abort
ed
. This can be a bit confusing when you start talking about statements in a trigger or stor
ed
proc
ed
ure.
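Before looking at triggers, the nesting rules described above can be sketched with a short batch (illustrative only; no tables are needed):

```
BEGIN TRANSACTION          -- @@TRANCOUNT goes 0 -> 1
BEGIN TRANSACTION          -- @@TRANCOUNT goes 1 -> 2
SELECT @@TRANCOUNT         -- returns 2
COMMIT TRANSACTION         -- inner commit only decrements the count: 2 -> 1
SELECT @@TRANCOUNT         -- returns 1; nothing is actually committed yet
ROLLBACK TRANSACTION       -- rolls back all work and sets @@TRANCOUNT to 0
SELECT @@TRANCOUNT         -- returns 0
```

Only the outermost COMMIT makes the work permanent; the single ROLLBACK undoes every nested level at once.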
Let us look closely at the following trigger:

CREATE TRIGGER TRG8134 ON TBL8134 AFTER INSERT
AS
BEGIN
SELECT 1/0
SELECT 'Next command in trigger'
END

To fire the INSERT trigger, the batch could be as simple as 'INSERT INTO TBL8134 VALUES(1)'. However, the trigger contains two statements that must be executed as part of the batch to satisfy the client's insert request. When the 'SELECT 1/0' causes the divide-by-zero error, a statement abort is issued for the 'SELECT 1/0' statement.

Batch and Transaction Abort

On SQL Server 2000 (and SQL Server 7.0), whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transaction abort. Thus, in the example, the statement abort for 'SELECT 1/0' is promoted to an entire batch abort. No further statements in the trigger or batch will be executed, and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort, but the rest of the statements within the trigger are executed. This trigger could return 'Next command in trigger' as a result set. Once the trigger completes, the batch abort promotion takes effect.

Conversely, submitting a similar set of statements in a standalone batch can result in different behavior:

SELECT 1/0
SELECT 'Next command in batch'

Not considering the set option possibilities, a divide-by-zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided and the subsequent SELECT statement can execute. The programmer should add an "IF @@ERROR" check immediately after the 'SELECT 1/0' to control the T-SQL execution flow correctly.

Aborting and Set Options

ARITHABORT

If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation when SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error.

XACT_ABORT

When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example:

CREATE TABLE t1 (a int PRIMARY KEY)
CREATE TABLE t2 (a int REFERENCES t1(a))
GO
INSERT INTO t1 VALUES (1)
INSERT INTO t1 VALUES (3)
INSERT INTO t1 VALUES (4)
INSERT INTO t1 VALUES (6)
GO
SET XACT_ABORT OFF
GO
BEGIN TRAN
INSERT INTO t2 VALUES (1)
INSERT INTO t2 VALUES (2) /* Foreign key error */
INSERT INTO t2 VALUES (3)
COMMIT TRAN
SELECT 'Continue running batch 1...'
GO
SET XACT_ABORT ON
GO
BEGIN TRAN
INSERT INTO t2 VALUES (4)
INSERT INTO t2 VALUES (5) /* Foreign key error */
INSERT INTO t2 VALUES (6)
COMMIT TRAN
SELECT 'Continue running batch 2...'
GO
/* The SELECT shows only keys 1 and 3 added. The key 2 insert failed and was
rolled back, but XACT_ABORT was OFF and the rest of the transaction succeeded.
The key 5 insert error with XACT_ABORT ON caused all of the second transaction
to roll back. Also note that 'Continue running batch 2...' is not returned,
which indicates that the batch was aborted. */
SELECT * FROM t2
GO
DROP TABLE t2
DROP TABLE t1
GO
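A similar sketch for the ARITHABORT behavior described earlier (illustrative only; note that in practice the outcome also depends on ANSI_WARNINGS, so it is turned off here to isolate the ARITHABORT effect):

```
SET ANSI_WARNINGS OFF
SET ARITHABORT OFF
GO
SELECT 1/0                 -- warning; the result is NULL and the batch continues
SELECT 'Batch continued'
GO
SET ARITHABORT ON
GO
SELECT 1/0                 -- divide-by-zero error terminates the batch
SELECT 'Not reached'
GO
```

With ARITHABORT ON inside a transaction, the same error would also roll the transaction back, as described above.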
Compile and Run-time Errors

Compile Errors

Compile errors are encountered during syntax checks, security checks, and other general operations to prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to immediate abort. The statement is not run and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed.

Optimization Errors

Optimization errors include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. Example: the "too many tables referenced in the query" error is reported because a "work table" was added to the plan.

Runtime Errors

Runtime errors are those that are encountered during the execution of the query. Consider the following batch:

SELECT * FROM pubs.dbo.titles
UPDATE pubs.dbo.authors SET au_lname = au_lname
SELECT * FROM foo
UPDATE pubs.dbo.authors SET au_lname = au_lname

If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution by placing a "placeholder" in the query's execution plan. The object referenced by the placeholder is not resolved until the query is executed. In our example, the execution of the statement "SELECT * FROM foo" triggers another compile process to resolve the name again. This time, error message 208 is returned:

Error: 208, Level 16, State 1, Line 1
Invalid object name 'foo'.

Message 208 can be encountered as a runtime or compile error depending on whether the Deferred Name Resolution feature is available. In SQL Server 6.5 this would be considered a compile error; on SQL Server 2000 (and SQL Server 7.0) it is a runtime error due to Deferred Name Resolution.

In the following example, if a trigger referenced authors2, the error is detected as SQL Server attempts to execute the trigger. However, under SQL Server 6.5 the CREATE TRIGGER statement fails because authors2 does not exist at compile time. When errors are encountered in a trigger, generally, the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in the pubs database:

create table tblTest(iID int)
go
create trigger trgInsert on tblTest for INSERT
as
begin
select * from authors
select * from authors2
select * from titles
end
go
begin tran
select 'Before'
insert into tblTest values(1)
select 'After'
go
select @@TRANCOUNT
go

When run in a batch, the statement and the batch are aborted but the transaction remains active. The following script illustrates this:

begin tran
select 'Before'
select * from authors2
select 'After'
go
select @@TRANCOUNT
go

One other factor in a
compile versus runtime error is implicit data type conversions. If you were to run the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0):

create table tblData(dtData datetime)
go
select 1
insert into tblData values(12/13/99)
go

On SQL Server 6.5, you get an error before execution of the batch begins, so no statements are executed and the batch is aborted:

Error: 206, Level 16, State 2, Line 2
Operand type clash: int is incompatible with datetime

On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000 implicit data type conversion treats this as integer division. The integer division of 12/13/99 is 0, so the default date and time value is inserted and no error is returned. The way to correct the problem on either version is to wrap the date string in quotes. See Bug #56118 (sqlbug_70) for more details about this situation.

Another example of a runtime error is a 605 message:

Error: 605
Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.

A 605 error is always a runtime error. However, the handling of the error can vary depending on the transaction isolation level established by the SPID (e.g. using the NOLOCK lock hint). Specifically, a 605 error is considered an ACCESS error. Errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine proper handling based on informational or fatal error level.

Transaction Error Checking

Not all errors cause transactions to automatically roll back. Although it is difficult to determine exactly which errors will roll back transactions and which will not, the main idea is that programmers must perform error checking and handle errors appropriately.

Error Handling

Raiserror
Details

Raiserror seems to be a source of confusion but is really rather simple. Raiserror with a severity level of 20 or higher will terminate the connection. Of course, when the connection is terminated, a full rollback of any open transaction is immediately instantiated by SQL Server (except for distributed transactions with DTC involved). Severity levels lower than 20 simply result in the error message being returned to the client; they do not affect the transaction scope of the connection. Consider the following batch:

use pubs
begin tran
update authors set au_lname = 'smith'
raiserror ('This is bad', 19, 1) with log
select @@trancount

With severity set at 19, the 'select @@trancount' will be executed after the raiserror statement and will return a value of 1. If severity is changed to 20, then the select statement will not run and the connection is broken.

Important: Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code.

Transactions and Triggers (1 of 2)

Basic behavior assumes the implicit transactions setting is set to OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, roll back the action, and add an audit table entry. Logically, the insert to the audit table cannot take place before the ROLLBACK action, and you would not want to build the audit table insert into every application's error handler that violated the business rule of the trigger. For more information, see also the SQL Server 2000 Books Online topic "Rollbacks in stored procedures and triggers" (acdata.chm::/ac_8_md_06_4qcz.htm).

IMPLICIT_TRANSACTIONS ON Behavior

The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field. Read-only versions of the rows contain the code 'RO' and read/write versions use 'RW'. Whenever someone tries to delete a row with code 'RO', the trigger issues a rollback and logs an audit table entry. However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on, and only the current trigger would execute and then terminate the batch. However, a second client with implicit transactions on could issue the same delete and the secondary trigger would fire. You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should be allowed, but because the transaction scope was restarted due to the implicit transactions setting, they did.

Transactions and Triggers (2 of 2)

It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in
combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set @@ERROR is to use a RAISERROR as the last execution statement in the last trigger to execute. If you modify the example, the following RAISERROR statement will set @@ERROR to 50000:

CREATE TRIGGER trgTest on tblTest for INSERT
AS
BEGIN
ROLLBACK
INSERT INTO tblAudit VALUES (1)
RAISERROR('This is bad', 14, 1)
END

However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger, @@ERROR remains 0.

Carrying Forward an Active/Open Transaction

It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN or by setting implicit transactions on and doing an INSERT, UPDATE, or DELETE.

Warning: It is never recommended that a trigger call BEGIN TRANSACTION. By doing this you increment the transaction count. Invalid code logic, not calling
commit transaction, can lead to a situation where the transaction count remains elevated upon exit of the trigger.

Transaction Count

The behavior is better explained by understanding how the server works. It does not matter whether you are in a transaction; when a modification takes place, the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies that the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter.

Thus, when we enter a trigger, the transaction count is 1. At the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (insert). In our example, if the connection was already in a transaction and called the second INSERT, since implicit transactions are ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1. However, the value does not return to 0, so the transaction remains open/active. Subsequent triggers are only fired if the transaction count at the end of the trigger remains greater than or equal to 1. The key to continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution. If the trigger that performs a rollback has done an explicit BEGIN TRANSACTION or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute.

Warning: Forcing the transaction count after issuing a rollback is dangerous because you can easily lose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a RETURN statement to maintain consistent behavior between a connection with and without implicit transaction settings. This will force the trigger(s) and batch to terminate immediately. One method of dealing with this issue is to run 'SET IMPLICIT_TRANSACTIONS OFF' as the first statement of any trigger. Other methods entail checking @@TRANCOUNT at the end of the trigger and continuing to COMMIT the transaction as long as @@TRANCOUNT is greater than 1.

Examples

The following examples are based on this table:

create table tbl50000Insert (iID int NOT NULL)
go

Note: If more than one trigger is used, to guarantee the trigger firing sequence, the sp_settriggerorder command should be used. This command is omitted in these examples to simplify the statements.

First Example

In the first example, the second trigger was never fired and the batch, starting with the insert statement, was aborted. Thus, the print statement was never issued.

print('Trigger issues rollback - cancels batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
select 'Inserted', * from inserted
rollback tran
select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT'
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
select 'In Trigger2'
select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(1)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Second Example

The next example shows that since a new transaction is started, the second trigger will be fired and the print statement in the batch will be executed. Note that the insert is rolled back.

print('Trigger issues rollback - increases tran count to continue batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
select 'Inserted', * from inserted
rollback tran
begin tran
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
select 'In Trigger2'
select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(2)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Third Example

In the third example, the raiserror statement is used to set the @@ERROR value, and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run.

print('Trigger issues rollback - uses raiserror to set @@ERROR')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
select 'Inserted', * from inserted
rollback tran
begin tran -- Increase @@trancount to allow batch to continue
select @@trancount as 'Trancount'
raiserror('This is from the trigger', 14, 1)
end
go
insert into tbl50000Insert values(3)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
go
delete from tbl50000Insert

Fourth Example

For the fourth example, a second trigger is added to illustrate the fact that the @@ERROR value set in the first trigger will not be seen in the second trigger, nor will it show up in the batch after the second trigger is fired.

print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
select 'Inserted', * from inserted
rollback
begin tran -- Increase @@trancount to allow batch to continue
select @@TRANCOUNT as 'Trancount'
raiserror('This is from the trigger', 14, 1)
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
end
go
insert into tbl50000Insert values(4)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Lesson 3: Concepts – Locks and Applications

This lesson outlines some of the
common causes that contribute to the perception of a slow server.

What You Will Learn

After completing this lesson, you will be able to:
Explain how lock hints are used and their impact.
Discuss the effect on locking when an application uses Microsoft Transaction Server.
Identify the different kinds of deadlocks, including distributed deadlock.

Recommended Reading

Chapter 14 "Locking", Inside SQL Server 2000 by Kalen Delaney
Chapter 16 "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
Q239753 – Deadlock Situation Not Detected by SQL Server
Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as victim

Locking Hints

UPDLOCK

If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.

READPAST

READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values 1, 2, 3, 4, and 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields the values 1, 2, 4, 5.

Tip: READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks. This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being thrown into a table and they should be serviced in approximate insertion order, but they do not have to be completely FIFO. If you have 4 worker threads consuming work items from the queue, they could each pick up a record using READPAST locking, then delete the entry from the queue and commit when they are done. If they fail, they could roll back, leaving the entry on the queue for the next worker thread to pick up.

Caution: The READPAST hint is not
compatible with HOLDLOCK.

Try This: Using Locking Hints

1. Open a Query Window and connect to the pubs database.
2. Execute the following statements (--Conn 1 is optional, to help you keep track of each connection):
BEGIN TRANSACTION -- Conn 1
UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032'
3. Open a second connection and execute the following statements:
SELECT @@lock_timeout -- Conn 2
GO
SELECT * FROM titles
SELECT * FROM authors
4. Open a third connection and execute the following statements:
SET LOCK_TIMEOUT 0 -- Conn 3
SELECT * FROM titles
SELECT * FROM authors
5. Open a fourth connection and execute the following statement:
SELECT * FROM titles (READPAST) -- Conn 4
WHERE title_ID < 'C'
SELECT * FROM authors
How many records were returned? 3
6. Open a fifth connection and execute the following statement:
SELECT * FROM titles (NOLOCK) -- Conn 5
WHERE title_ID < 'C'

… the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in that 20 seconds, the lock manager no longer checks for deadlocks at each block and detection again only happens every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process.

Note: Please note the distinction between application locks and other locks' deadlock detection. For an application lock, we do not roll back the transaction of the deadlock victim but simply return a -3 to sp_getapplock, which the application needs to handle itself.

Deadlock Resolution

How is a deadlock resolved? SQL Server picks one of the connections as a deadlock victim. The victim is chosen based on either which is the least expensive transaction (calculated using the number and size of the log records) to roll back, or in which process "SET DEADLOCK_PRIORITY LOW" is specified. The victim's transaction is rolled back, held locks are released, and SQL Server sends error 1205 to the victim's client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue.

Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction.

Symptoms of deadlocking

Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlocks occurring are: Clients
complain of mysteriously canceled queries when using certain features of an application, possibly accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur.

Triggers and Deadlock

Triggers promote the deadlock priority of the SPID for the life of the trigger execution when the DEADLOCK_PRIORITY is not set to LOW. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim.

Warning: Bug 235794 is filed against SQL Server 2000, where a blocked SPID that is not a participant in a deadlock may incorrectly be chosen as a deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. See KB article Q288752, "Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim," for more information.

Distributed Deadlock – Scenario 1

Distributed Deadlocks

The term distributed deadlock is ambiguous; there are many types of distributed deadlocks.

Scenario 1

A client application opens connection A, begins a transaction, and acquires some locks; it then opens connection B, which gets blocked by A, but the application is designed to not
commit A's transaction until B completes.

Note: SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed in scenario #4 in "Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems".

Distributed Deadlock – Scenario 2

Scenario 2

Distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid, 62, is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes its work, but spid 61 is blocked by 62, which is blocked by 60. This scenario is described in article "Q239753 - Deadlock Situation Not Detected by SQL Server."

Note: SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock.

The diagram in the slide illustrates this situation. Resources locked by a spid are below that spid (in a box). Arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires. A circle represents a transaction; spids in the same transaction are shown in the same circle.

Distributed Deadlock – Scenario 3

Scenario 3

Distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via a linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by spid 60.

Note: No version of SQL Server is currently designed to detect this distributed deadlock.

Lesson 4: Information Collection and Analysis

This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn

After completing this lesson, you will be able to:
Identify specific information needed for troubleshooting issues.
Locate and collect information needed for troubleshooting issues.
Analyze output of the DBCC Inputbuffer, DBCC PSS, and DBCC Page commands.
Review information collected from the master.dbo.sysprocesses table.
Review information collected from the master.dbo.syslockinfo table.
Review output of sp_who, sp_who2, sp_lock.
Analyze Profiler log for query usage pattern.
Review output of trace flags to help troubleshoot deadlocks.

Recommended Reading

Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0
Q271509 - INF: How to Monitor SQL Server 2000 Blocking
Q251004 - How to Monitor SQL Server 7.0 Blocking
Q224453 - Understanding and Resolving SQL Server 7.0 Blocking Problem
Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler

Locking and Blocking

Try This: Examine Blocked Processes

1. Open a Query Window and connect to the pubs database. Execute the following statements:
BEGIN TRAN -- connection 1
UPDATE titles SET price = price + 1
2. Open another connection and execute the following statement:
SELECT * FROM titles -- connection 2
3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3)
4. In the same connection, execute the following:
SELECT spid, cmd, waittype FROM master..sysprocesses WHERE waittype <> 0 -- connection 3
5. Do not close any of the connections! What was the wait type of the blocked process?

Try This: Look at locks held

Assumes all your connections are still open from the previous exercise.
• Execute sp_lock -- Connection 3
What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction.

Collecting Information

See Module 2 for more about how to gather this information using various tools.

Recognizing Blocking Problems

How to Recognize Blocking Problems

Users complain about poor performance at a certain time of day, or after a certain number of users connect. SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column. More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
LCTF软件备份VariSpec™ Liquid Crystal Tunable Filters
Release history (reverse chronological order)

This release: 1.3.7.1
Release date: December 11, 2006
Known bugs: none
Fixes/features added from previous release:
a) added support for multiple filters per process in VsDrvr.dll
b) updated manual

Previous release: 1.3.5
Release date: October 11th, 2005
Known bugs: none
Fixes/features added from previous release:
a) added VsSetWavelengthStep and VsGetWavelengthStep functions
b) added VsSetWavelengthWavesConfirm() function
c) fixed error-handling of VsSetWavelength(). In earlier revisions, the error status light was cleared after a VsSetWavelength() call failed, so the user did not see the light turn red to alert that an error had occurred. This has been fixed in 1.35 so the error light remains lit, and an error code is returned.
d) added range-check to VsDefinePalette(). Previous revisions did not range-check the palette index number, and hard crashes could be produced if out-of-range values were supplied to this routine.

Previous release: 1.33b
Release date: February 9, 2005
Known bugs: none
Fixes/features changed from previous release:
a) fixed installer: the programmer's guide (vsdrvr.pdf) is installed when the SDK is selected.

Previous release: 1.33a
Release date: January 10th, 2005
Known bugs:
i) SDK programmer's guide is not installed even if the SDK is selected.
Fixes/features added from previous release:
a) VsDrvr.dll: fixed handling of COMx ports that do not support 460 kb. The autobaud sequence tries a variety of baud rates, some of which are not supported by RS-232 interfaces (but are supported on USB virtual COM ports). This was not handled properly, so if a call was made to VsOpen() when no VariSpec was present, but a later call was made when a filter was present, the latter would fail.
b) VsGui: added a check of which COMx ports are present on the computer. This program now filters its COMx list and only shows ports which actually exist; it used to show COM1 through COM8 even if not all of these were present.
c) VsGui: added automatic filter detection on the Configure dialog. This checks all ports in turn, and reports the first detected filter. The search order is determined by the order in which the computer lists ports in the Registry.
d) VsGui: changed to recognize filters as present while initializing. In prior revisions, VsGui would report no filter found if a filter was present but still going through its power-up initialization. Now, a message box is posted to indicate that a filter was found, and the program checks whether initialization is complete, at 1-second intervals. When the filter is done initializing, the VsGui controls become active and report the filter information (serial number, wavelength range, etc.).
e) VsGui: added a filter status item to the Configure dialog. Adjacent to the COMx combo box, there is now a text field that indicates filter status as "Not found", "Initializing", or "Ready". This field is updated whenever the combo box selection is changed.

Previous release: 1.32
Release date: July 27th, 2004
Known bugs: COMx port issue described above as 1.33 fix item a)
Fixes/features added from previous release:
a) VsGui: added a sweep feature to enable cycling the filter. The wavelength start, stop, and step are adjustable. Cycling can be done a fixed number of times or indefinitely.

Previous release: 1.30
Release date: June 23rd, 2004
Known bugs: none
Fixes/features added from previous release:
a) New commands VsSetWaveplateAndWaves(), VsGetWaveplateAndWaves(), VsGetWaveplateLimits(), and VsGetWaveplateStages() were added for support of variable retarder models.
b) New commands VsSetRetries() and VsSetLatencyMs() were added for control of serial port latency and automatic retry in case of error.
c) New commands VsSetMode() and VsGetMode() were added for control of the VariSpec filter's triggering and sweep modes.
d) New command VsGetSettleMs() was added to learn the optics settling time.
e) New commands VsIsDiagnostic() and VsIsEngagedInBeam() were added. These are reserved for CRI use and are not supported for use by end users.
f) The command syntax and functionality of the VsSendCommand() function was changed; see the description of this command for details.
g) The VsGui program was modified to add the sweep function, and the associated files were added to the file manifest.
The new functions are assigned higher ordinal numbers than the earlier commands, so the ordinal numbers assigned to routines in the earlier VsDrvr releases are preserved. This means one may use the new VsDrvr.dll file with applications that were developed and linked with the earlier release, without any need to recompile or relink the application. Of course, to use the new functions one must link the application code with the new .lib file containing these functions.

Previous release: 1.20
Release date: December 3rd, 2003
Known bugs:
a) There is a conflict when one uses the implicit palette to set wavelengths and also defines palette states explicitly using the VsDefinePalette() function. When the explicitly set palette state overwrites a palette state implicitly associated with a certain wavelength, that wavelength will not be accurately set when one issues the VsSetWavelength() command. This is fixed in release 1.30.
Fixes/features added from previous release:
a) fixes bug with implicit palette in September 8 release
b) incorporates implicit retry for command send/reply if error in transmission
c) recognizes filters with serial numbers > 60000 (normally VariLC numbers)
d) supports binary transfer of > 127 bytes

Previous release: 1.11
Release date: September 8, 2003
Known bugs:
a) implicit palette can fail to create a palette entry, causing tuning error
b) VsSendBinary() fails if 128 chars or more are sent (signed char error)
Fixes/features added from previous release:
a) included the VsIsPresent() function omitted from the function list of the 1.10 release

Previous release: 1.10
Release date: August 28th, 2003
Known bugs:
a) VsIsPresent() function not included; generates an "unresolved external" error at link time
Fixes/features added from previous release:
- added command VsEnableImplicitPalette() to code and documentation
- added command VsConnect() to code and documentation
- added command VsClose() to code and documentation
- added a local variable to avoid unnecessary querying of diagnostic status
- documented that command VsConnect() will not be supported in future
- documented that command VsDisconnect() will not be supported in future
- documented that command VsIsConnected() will not be supported in future
- changed to Windows Installer from previous ZIP file
- added a table summary of commands to this manual

Previous release: 1.00
Release date: November 5th, 2002
Known bugs: none
Fixes/features added from previous release: n/a (initial release)

Description

This package provides a set of functions to control the VariSpec filter, which may be called from C or C++ programs. It incorporates all aspects of the filter communication, including low-level serial routines. With these routines, one can address the filter as a virtual object, with little need for detailed understanding of its behavior. This simplifies the programming task for those who want to integrate the VariSpec into larger software packages.

File manifest

All files are contained in a single installer file which includes the following:
vsdrvr.h (declaration file)
vsdrvr.lib (library stub file)
vsdrvr.dll (run-time library)
vsdrvr_r1p30.pdf (this file: release notes and programmer's guide)
{sample program using the VsDrvr package}
registryAccess.cpp
registryAccess.h
resource.h
stdafx.h
VsConfigDlg.cpp
VsConfigfDlg.h
VsGui.cpp
VsGui.h
VsGui.mak
VsGui.rc
VsGuiDlg.cpp
VsGuiDlg.h
VsSweep.cpp
VsSweep.h

Development cycle

In order to use the DLL, one should take the following steps:
a) Add #include "vsdrvr.h" statements to all files that access the VariSpec software.
b) Add vsdrvr.lib to the list of modules searched by the linker.
c) Place a copy of vsdrvr.dll either in the folder that includes the executable code for the program being developed or, preferably, in the Windows system folder.
Failures in step a) will lead to compiler errors; in step b), to linker errors; in step c), to a run-time error message that "a required .DLL file, vsdrvr.dll, was not found".

VariSpec filter configuration

The VariSpec filter communicates via ASCII commands sent over an RS-232 interface or USB. The RS-232 interface can operate at 9600 or 19,200 baud, while the USB appears as a virtual COMx device. While it appears to be present at either 9600 baud or 115.2 kbaud, the actual data transmission occurs at 12 MBaud over the USB. Each command is terminated with an end-of-line terminator, which can be either a carriage return or a line feed. For RS-232 models, the baud rate and terminator character are selected using DIP switches inside the VariSpec electronics module; default settings are 9600 baud and the carriage-return character (denoted '\r' in the C language). For USB devices, the terminator is always the carriage return. For the latest information, or to determine how to alter the settings from the factory defaults, consult the VariSpec manual.

Timing and latency

The VariSpec filter takes a finite time to process commands, which adds an additional delay to that imposed by simple communication delays. In general, the time to process a given command is short except for the following operations:
- filter initialization
- wavelength selection
- palette definition

The first of these is quite lengthy (30 seconds or more) because it involves measurements and exercising of the liquid crystal optics. The latter two are much faster but can still take a significant amount of time (up to 300 ms) on the older RS-232 electronics due to the computations involved. On the newer USB electronics, the latter two functions are completed in less than 5 ms.

For this reason, the functions that handle these actions offer the option of waiting until the action is complete before returning (so-called synchronous operation), although they can be called in an asynchronous mode, where the function returns as soon as all commands have been sent to the VariSpec, without waiting for them to run to completion.

Another option is to use implicit palette tables. If this is enabled, by calling the VsEnableImplicitPalette() function, the driver will define the settings for a given wavelength once, then save the results within the VariSpec for faster access the next time that wavelength is used. Subsequent access times are essentially instantaneous, until either all of the 128 palette states are in use, or the palette is cleared via the VsClearPalette() command.

The VsIsReady() function can be used to determine whether a filter is done processing all commands. Ideally, one should check VsIsReady() using a timer or the like to wait efficiently, so that the host PC is free to do other tasks while waiting for the VariSpec.

The VariSpec always processes each command to completion before starting on the next command, and it has a 256-byte input buffer, so there is no problem issuing several commands at once; they will all be executed, and in the order given. This also indicates another way to coordinate one's program with the VariSpec filter: one can issue any of the VsGetxxx() functions, which query the filter. Since these do not return until the filter has responded, one may be sure that there are no pending commands when the VsGetxxx() function completes.

The VsDrvr package provides for automatic retry of commands up to 3 times, in the event that communications are garbled, and will wait up to 2 seconds for completion of serial commands. The number of retries can be set from 0 to 10, and the latency adjusted, if desired. However, there should be no need to do so. The hardware and software have been tested and observed to execute several million commands without a single communications error, so in practice the need for the retry protocol is very slight.
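The wait-efficiently approach described above can be sketched as follows. This is a minimal sketch, not CRI's code: the Vs* functions here are mock stand-ins (with signatures assumed from this guide) so the fragment compiles without vsdrvr.lib or a filter attached; in a real program, drop the mocks, include vsdrvr.h, and link against vsdrvr.lib.

```c
typedef int Int32;
typedef int VS_HANDLE;

/* --- mock stand-ins so this sketch builds without the VariSpec hardware --- */
static int g_busy_polls = 3;   /* pretend the filter needs 3 polls to finish */
static Int32 VsSetRetries(VS_HANDLE h, Int32 nRetries)
{ (void)h; return nRetries >= 0 && nRetries <= 10; }     /* range [0, 10]  */
static Int32 VsSetLatencyMs(VS_HANDLE h, Int32 latencyMs)
{ (void)h; return latencyMs >= 1 && latencyMs <= 5000; } /* range [1, 5000] */
static Int32 VsIsReady(VS_HANDLE h)
{ (void)h; return g_busy_polls-- <= 0; }

/* Poll VsIsReady() up to maxPolls times and report whether the filter went
 * idle.  A real program would sleep between polls (e.g. with a waitable
 * timer) so the host PC stays free for other work. */
static Int32 WaitUntilReady(VS_HANDLE h, int maxPolls)
{
    int i;
    for (i = 0; i < maxPolls; i++) {
        if (VsIsReady(h))
            return 1;   /* filter idle: safe to send the next command */
        /* Sleep(50);      yield between polls in a real program */
    }
    return 0;           /* still busy after maxPolls checks */
}
```

With the mocks above, WaitUntilReady() reports ready on the fourth poll; against the real driver, the loop simply runs until the filter finishes its current command or the poll budget is exhausted.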
Communication speed is not improved by reducing the latency, since commands proceed when all characters are received, and the latency time to time-out is only relevant when there is a communications lapse; as noted, these are very unlikely, so the performance burden of retries should not be a practical consideration.

Multiple Filters and Multiple Processes

These routines permit only one VariSpec per process, and one process per VariSpec. So, these routines cannot control multiple filters at once from a single process; nor can several active processes seek to control the same filter at the same time. The VsDrvr package anticipates a future upgrade to enable control of multiple filters per process, so it makes use of an integer handle to identify which VariSpec is being controlled, even though (for now) only a single filter can be active. This handle is checked, and the correct handle must be used in all calls.

Program flow and sequence

Typical programs should use the following API calls:

(all applications, upon initiating the link to the filter)
- call VsOpen() to establish the communications link (required)
- call VsIsPresent() to confirm a filter is actually present
- call VsIsReady() in case the filter is still doing its power-up sequence
- call VsGetFilterIdentity() to learn the wavelength limits and serial number, if needed

(if setting wavelengths via implicit palettes; recommended, especially with older filters)
- call VsEnableImplicitPalette()

(to set wavelengths, either directly or via implicit palettes)
- call VsSetWavelength() and VsGetWavelength() to select and retrieve tuning

(if setting wavelengths by means of palettes, and managing palettes explicitly)
- call VsDefinePaletteEntry() and VsClearPalette() to define palette entries
- call VsSetPalette() and VsGetPalette() to select and retrieve the palette state

(all applications, when done with the filter)
- call VsClose() to release the communications link (required)

Sample program

Source code for a sample program, VsGui, is provided, which illustrates how to control a VariSpec filter using the VsDrvr package. All filter control code lives in the VsGuiDlg.cpp module, specifically in the Connect(), RequestToSetWavelength(), and VsWriteTimerProc() functions. The latter two use a system timer to decouple the GUI from the actual filter control, for more responsive feedback to the user. Such an approach is unnecessary if palettes are used, which is preferable when one wishes the best real-time performance. See the VariSpec manual for further information.

Auxiliary commands

Certain commands are normally used only at the factory when filters are being built and configured, or in specialized configurations. These appear after the normal command set in the listing below.

Obsolescent commands

The VsConnect(), VsIsConnected(), and VsDisconnect() functions are obsolescent. They are supported in this release, but will not necessarily exist in releases after 1.3x. As they are obsolescent, they are not recommended for new code. These function calls are not documented further in this manual.

Summary of commands

Normal commands:
VsClearError(vsHnd)
VsClearPalette(vsHnd)
VsClearPendingCommands(vsHnd)
VsClose(vsHnd)
VsDefinePalette(vsHnd, palEntry, wl)
VsEnableImplicitPalette(vsHnd, isEnabled)
VsGetError(vsHnd, *pErr)
VsGetFilterIdentity(vsHnd, *pVer, *pSerno, *pminWl, *pmaxWl)
VsGetMode(vsHnd, int *pMode)
VsGetPalette(vsHnd, *ppalEntryNo)
VsGetSettleMs(vsHnd, *psettleMs)
VsGetTemperature(vsHnd, *pTemperature)
VsGetWavelength(vsHnd, *pwl)
VsGetWavelengthAndWaves(vsHnd, double *pWl, double *pwaves)
VsGetWaveplateLimits(vsHnd, double *pminWaves, double *pmaxWaves)
VsGetWaveplateStages(vsHnd, int *pnStages)
VsIsPresent(vsHnd)
VsIsReady(vsHnd)
VsOpen(*pvsHnd, portName, *pErrorCode)
VsSetLatencyMs(vsHnd, nLatencyMs)
VsSetMode(vsHnd, mode)
VsSetPalette(vsHnd, palEntry)
VsSetRetries(vsHnd, nRetries)
VsSetWavelength(vsHnd, wl, confirm)
VsSetWavelengthAndWaves(vsHnd, wl, waveplateVector)

Auxiliary commands:
VsGetAllDrive(vsHnd, *pStages, drive[])
VsGetNstages(vsHnd, *pStages)
VsGetPendingReply(vsHnd, reply, nChars, *pQuit, firstMs, subsequentMs)
VsGetReply(vsHnd, reply, nChars, waitMs)
VsIsDiagnostic(vsHnd)
VsIsEngagedInBeam(vsHnd)
VsSendBinary(vsHnd, bin[], nChars, clearEcho)
VsSendCommand(vsHnd, cmd, sendEolChar)
VsSetStageDrive(vsHnd, stage, drive)
VsThermistorCounts(vsHnd, *pCounts)

Alphabetical list of function calls

Syntax

Throughout this manual, the following conventions are used:

VSDRVR_API Int32 VsOpen( VS_HANDLE *vsHnd, LPCSTR port, Int32 *pErrorCode )

Bold text is used for function names; italics indicate variables whose names (or values) are supplied by the user in their code.

Name-mangling

The declaration file vsdrvr.h includes statements that render the API names accurately in a C++ environment, i.e. free of the name-mangling decoration suffix that is normally added by C++ compilers. Thus the functions can be called freely from either C or C++ programs, using the names exactly as shown in this manual or in the VsDrvr.h file.

Call and argument declarations

The call protocol type, VSDRVR_API, is declared in vsdrvr.h, as are the types Int32 and VS_HANDLE.

Errors

All functions return an Int32 status value, which is TRUE if the routine completed successfully and FALSE if there was an error. If there is an error in the VsOpen() function, the error is returned in *pErrorCode. If there is an error in communicating with a filter after a successful VsOpen(), one should use the VsGetError() function to obtain the specific error code involved. This function returns VSD_ERR_NOERROR if there is no error pending.

Main and auxiliary functions

The next section provides a description of the main functions, in alphabetic order, followed by the auxiliary functions, also in alphabetical order. In normal use, one will probably have no need for the auxiliary functions, but this list is provided for completeness.

VSDRVR_API Int32 VsClearError( VS_HANDLE vsHnd )
Arguments:
  vsHnd   handle value returned by VsOpen()
Purpose: this function clears any pending error on the VariSpec. This resets the error LED on the filter, and sets the pending error to VS_ERR_NOERROR.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsClearPalette( VS_HANDLE vsHnd )
Arguments:
  vsHnd   handle value returned by VsOpen()
Function: clears all elements of the current filter palette and renders the current palette element undefined.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsClearPendingCommands( VS_HANDLE vsHnd )
Arguments:
  vsHnd   handle value returned by VsOpen()
Function: clears all pending commands, including any presently in process.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsClose( VS_HANDLE vsHnd )
Arguments:
  vsHnd   handle value returned by VsOpen(). May also be NULL, in which case all VariSpec filters are disconnected.
Function: disconnects the filter.
Returns: TRUE if successful, FALSE otherwise
Notes: no other functions will work until VsOpen() is called to re-establish communications with the filter.

VSDRVR_API Int32 VsDefinePalette( VS_HANDLE vsHnd, Int32 palEntry, double wl )
Arguments:
  vsHnd     handle value returned by VsOpen()
  palEntry  palette entry to be defined, in the range [0, 127]
  wl        wavelength associated with this palette entry
Function: creates a palette entry for the entry and wavelength specified. This palette entry can then be accessed using the VsSetPalette() and VsGetPalette() functions.
Returns: TRUE if successful, FALSE otherwise
Notes: palettes provide a fast way to define filter settings for wavelengths that are to be repeatedly accessed. The calculations are performed once, at the time the palette element is defined, and the results are saved in a palette table to tune to that wavelength without repeating the underlying calculations. And, one may cycle through the palette table, once defined, by means of a TTL trigger signal to the filter electronics. For more information about using palettes, consult the VariSpec user's manual.

VSDRVR_API Int32 VsEnableImplicitPalette( VS_HANDLE vsHnd, BOOL implEnabled )
Arguments:
  vsHnd        handle value returned by VsOpen()
  implEnabled  selects whether to use implicit palette definition
Function: enables or disables implicit palette generation when wavelengths are defined using the VsSetWavelength() function. If enabled, a new palette entry is created whenever a new wavelength is accessed, and the VsSetWavelength() function will use this palette entry whenever that wavelength is accessed again, until the palette is cleared. The result is improved tuning speed; however, it means that the palette contents are altered dynamically, which can be a problem if one relies upon the palette contents remaining fixed. Clearing the palette with VsClearPalette() will clear all implicit palette entries as well as explicitly defined palette entries. This is useful if one knows that wavelengths used previously will not be used again, or that a new set of wavelengths is about to be defined and one wishes to make sure there is sufficient room in the palette.
Returns: TRUE if successful, FALSE otherwise
Notes: by default, the implicit palette is enabled for VariSpec filters that have the RS-232 interface, and is disabled for newer VariSpec filters that have the USB interface. This is because the newer filters perform the filter tuning calculations fast enough that no performance improvement is obtained by using the implicit palette to set wavelength. For more information about using palettes, consult the VariSpec user's manual.

VSDRVR_API Int32 VsGetError( VS_HANDLE vsHnd, Int32 *pErr )
Arguments:
  vsHnd  handle value returned by VsOpen()
  pErr   pointer to the int that will receive the most recent error code
Purpose: this function obtains the most recent error code from the VariSpec and returns it in *pErr. It returns VSD_ERR_NOERROR in *pErr if there is no error pending.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetFilterIdentity( VS_HANDLE vsHnd, Int32 *pVer, Int32 *pSerno, double *pminWl, double *pmaxWl )
Arguments:
  vsHnd   handle value returned by VsOpen()
  pVer    pointer to variable that receives the filter firmware version
  pSerno  pointer to variable that receives the filter serial number
  pminWl  pointer to variable that receives the filter's minimum wavelength
  pmaxWl  pointer to variable that receives the filter's maximum wavelength
Purpose: this function reads the filter's information using the VariSpec 'V' command, and puts it to the call variables. Any one of the pointers may be NULL, in which case that piece of information is not returned.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetMode( VS_HANDLE vsHnd, Int32 *pMode )
Arguments:
  vsHnd  handle value returned by VsOpen()
  pMode  pointer to variable that receives the filter mode
Purpose: this function enables one to read the filter's present mode. The mode describes how the filter responds to hardware triggers, and is described in the filter manual. If pMode is NULL, no information is returned.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetPalette( VS_HANDLE vsHnd, Int32 *ppalEntry )
Arguments:
  vsHnd      handle value returned by VsOpen()
  ppalEntry  pointer to int that receives the 0-based palette entry number. This pointer may not be NULL.
Purpose: this function determines what palette entry is currently active and returns it to *ppalEntry. If the present palette entry is undefined, it sets *ppalEntry to ? and returns a successful status code.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetSettleMs( VS_HANDLE vsHnd, Int32 *pSettleMs )
Arguments:
  vsHnd      handle value returned by VsOpen()
  pSettleMs  pointer to variable that receives the filter settling time
Purpose: this function returns the filter's settling time, in milliseconds. This is useful for establishing overall system timing. The settling time is defined as beginning at the moment that the electronics have processed the request to change wavelength, as determined by VsIsReady() or equivalent. At that moment, the new set of drive signals is applied to the optics, and the optics will settle in *pSettleMs milliseconds. The settling time is defined as a 95% settling time, meaning the filter has settled to 95% of its ultimate transmission value at the new wavelength being tuned to.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetTemperature( VS_HANDLE vsHnd, double *pTemperature )
Arguments:
  vsHnd         handle value returned by VsOpen()
  pTemperature  pointer to double that will receive the filter temperature, in C. This pointer may not be NULL.
Purpose: this function determines the filter temperature using the VariSpec temperature-query command, and puts the result to *pTemperature.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetWavelength( VS_HANDLE vsHnd, double *pwl )
Arguments:
  vsHnd  handle value returned by VsOpen()
  pwl    pointer to double that will receive the filter wavelength, in nm. This pointer may not be NULL.
Purpose: this function determines the current filter wavelength and returns it to *pwl. If the present wavelength is undefined, it sets *pwl to ? and returns a successful status code.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetWavelengthAndWaves( VS_HANDLE vsHnd, double *pwl, double *pwaves )
Arguments:
  vsHnd   handle value returned by VsOpen()
  pwl     pointer to double that will receive the filter wavelength, in nm. This pointer may not be NULL.
  pwaves  pointer to double array that will receive one or more waveplate settings. The actual number of settings may be determined by VsGetWaveplateStages().
Purpose: this function determines the current filter wavelength and returns it to *pwl. If the present wavelength is undefined, it sets *pwl to ? and returns a successful status code. If the present wavelength is defined, it also returns the waves of retardance at each of the polarization analysis waveplates in the optics, in the pwaves[] array.
Returns: TRUE if successful, FALSE otherwise
Notes: see the description of the VsGetWaveplateStages() command for more detail on what stages are considered waveplates.

VSDRVR_API Int32 VsGetWaveplateLimits( VS_HANDLE vsHnd, double *pminWaves, double *pmaxWaves )
Arguments:
  vsHnd      handle value returned by VsOpen()
  pminWaves  pointer to double array that will receive the minimum retardances possible at each of the waveplate stages in the filter optics
  pmaxWaves  pointer to double array that will receive the maximum retardances possible at each of the waveplate stages in the filter optics
Purpose: this function determines the range of retardances that are possible at each waveplate stage, in waves, at the present wavelength setting. Note that the retardance range is itself a function of wavelength, so the results will vary as the wavelength is changed.
Returns: TRUE if successful, FALSE otherwise
Notes: see the description of the VsGetWaveplateStages() command for more detail on what stages are considered waveplates.

VSDRVR_API Int32 VsGetWaveplateStages( VS_HANDLE vsHnd, Int32 *pnwpStages )
Arguments:
  vsHnd       handle value returned by VsOpen()
  pnwpStages  pointer to Int32 that will receive the number of waveplate stages in the filter optics. This pointer may not be NULL.
Purpose: this function determines how many polarization analysis stages are present in the optics and returns this number. Note that although all VariSpec filters operate by means of variable retarder elements, optical stages that perform wavelength tuning rather than polarization analysis are not treated as waveplate stages. For example, most VariSpec filters do not include any polarization analysis stages and thus report no waveplates. VsGetWaveplateStages() will return a value of 2 for conventional PolScope optics. In contrast, VsGetNstages() reports the total number of stages in a filter, including stages that perform polarization analysis and stages that perform wavelength tuning.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsIsPresent( VS_HANDLE vsHnd )
Arguments:
  vsHnd  handle value returned by VsOpen()
Function: determines whether a filter is actually present and responding. This is done using the status-check character, as described in the VariSpec manual.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsIsReady( VS_HANDLE vsHnd )
Arguments:
  vsHnd  handle value returned by VsOpen()
Function: determines whether the filter is done processing all commands, and is ready to receive new commands.
Returns: TRUE if successful, FALSE otherwise
Notes: this is useful when sending commands such as VsSetWavelength(), VsInitialize(), VsExercise(), and VsDefinePaletteEntry() in asynchronous mode. These commands take a prolonged time, and running them synchronously ties up the processor waiting. Alternatively, one can create a loop that uses CreateWaitableTimer(), SetWaitableTimer(), and WaitForSingleObject() to call VsIsReady() at intervals, checking whether the filter is ready. This approach, though more work for the programmer, leaves most of the processor capacity free for other tasks such as GUI updates and the like.

VSDRVR_API Int32 VsOpen( VS_HANDLE *pvsHnd, LPCSTR port, Int32 *pErrorCode )
Arguments:
  pvsHnd      pointer to handle. This pointer may not be NULL.
  port        port name, such as "COM1"
  pErrorCode  pointer to Int32 to receive an error code if VsOpen() fails
Purpose: establishes a connection to the VariSpec using the port specified, and automatically determines the baud rate and end-of-line character for subsequent communications. It also retrieves the filter's serial number and wavelength range, to confirm that it is a VariSpec and not some other similar device. However, these are retrieved purely as an integrity check, and the values are not returned to the calling application. See VsGetFilterIdentity() to access this information. If the device responds as a VariSpec does when it is not ready (i.e. still initializing), VsOpen() fails and returns the error code VSD_ERR_BUSY. However, one may not be sure that the device is a VariSpec until VsOpen() completes successfully. The error codes returned by this function are listed in VsDrvr.h. When VsOpen() runs successfully, *pErrorCode is set to VSD_ERR_NOERROR. The handle associated with this filter is set by VsOpen() to a nonzero handle value if successful, or to NULL if no connection is established. The port may refer to COM1 through COM8.
Returns: TRUE if successful, FALSE otherwise
Notes: until this function is called, none of the other functions will work.

VSDRVR_API Int32 VsSetLatency( VS_HANDLE vsHnd, Int32 latencyMs )
Arguments:
  vsHnd      handle value returned by VsOpen()
  latencyMs  the serial port latency, in ms, in the range [1, 5000]
Purpose: this function sets the latency time for USB or RS-232 commands to the value given by latencyMs. Commands that do not conclude in this time are considered to have timed out.
Returns: TRUE if successful, FALSE otherwise
Notes: increasing the latency time does not increase the time for commands to complete, nor does it insert any delays in normal processing. It merely defines the window for maximum transmission time, beyond which time an error is reported.

VSDRVR_API Int32 VsSetPalette( VS_HANDLE vsHnd, Int32 palEntry )
Arguments:
  vsHnd     handle value returned by VsOpen()
  palEntry  the palette entry to be set, in the range [0, 127]
Purpose: this function sets the filter to the palette entry specified by palEntry.
Returns: TRUE if successful, FALSE otherwise
Notes: palettes are a good way to control the filter in applications where it will be cycled repeatedly to various fixed wavelength settings. Palettes calculate the filter settings once, and save the results for rapid access later, rather than calculating them each time, as occurs when one sets the wavelength directly with VsSetWavelength(). See the VariSpec manual for more information on palettes.

VSDRVR_API Int32 VsSetRetries( VS_HANDLE vsHnd, Int32 nRetries )
Arguments:
  vsHnd     handle value returned by VsOpen()
  nRetries  the number of serial communications retries, in the range [0, 10]
Purpose: the VsDrvr software automatically detects errors in communication and re-sends if an error is detected. This function sets the number of times to retry sending any single command before reporting a communications failure. The default is 3, which should be adequate, and one should rarely need to change this, if ever. The primary purpose of this function is to enable setting the number of retries to zero, to force single-error events to cause detectable errors (as they would normally be fixed automatically via the retry mechanism).
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsSetWavelength( VS_HANDLE vsHnd, double wl, BOOL confirm )
Arguments:
  vsHnd    handle value returned by VsOpen()
  wl       wavelength to tune to, in nm
  confirm  logical flag, indicating whether to confirm the actual wavelength value
Purpose: this function sets the filter wavelength to the value in wl. If confirm is TRUE, it waits for the filter to complete the command, and then reads back the actual wavelength to confirm it was implemented successfully. Note that the only time there can be a disparity is when the wavelength requested by wl lies outside the legal range for that filter, or if the wavelength is specified to a finer resolution than the filter recognizes (normally, 0.01 nm).
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetAllDrive( VS_HANDLE vsHnd, Int32 *pStages, Int32 drive[] )
Arguments:
  vsHnd    handle value returned by VsOpen()
  pStages  pointer to int that will receive the number of stages in the filter
  drive[]  int array to receive the filter drive levels
Purpose: this function reports the number of filter stages in *pStages. If this argument is NULL, it is ignored. The function returns the actual drive level at each stage, in counts, in drive[], which must not be NULL.
Returns: TRUE if successful, FALSE otherwise
Notes: the array drive[] must be large enough to receive all the drive levels; if the exact number of stages is not known, call VsGetNstages() first, or allocate enough array elements (12) to accommodate the largest filter design.

VSDRVR_API Int32 VsGetNstages( VS_HANDLE vsHnd, Int32 *pStages )
Arguments:
  vsHnd    handle value returned by VsOpen()
  pStages  pointer to int that will receive the number of stages in the filter
Purpose: this function determines the number of optical stages in the filter and returns it in *pStages, which may not be NULL.
Returns: TRUE if successful, FALSE otherwise
Notes: none

VSDRVR_API Int32 VsGetPendingReply( VS_HANDLE vsHnd, LPSTR reply, Int32 nChars, Int32 *pQuit, Int32 firstMs, Int32 subsequentMs )
Arguments:
  vsHnd         handle value returned by VsOpen()
  reply         pointer to buffer that is to receive the reply
  nChars        number of characters to receive
  pQuit         pointer to flag to control this function; see Notes below
  firstMs       maximum time to wait, in ms, for the first character of the reply
  subsequentMs  maximum time to wait, in ms, for each subsequent character
Purpose: this function is used to exploit some of the less common aspects of the filter, and it is likely that most programs will not require its use. It receives a reply from the filter that may not arrive for a long time. The routine waits up to firstMs for the first character to arrive. Subsequent characters must arrive within subsequentMs of one another. Typically, this routine is called with a high value for firstMs and a lower value for subsequentMs.
Returns: TRUE if successful, FALSE otherwise
Notes: pQuit can be used
ed
to cancel this function while it is waiting for the reply, if that is desir
ed
, such as to respond to a user cancellation request. To use this feature, pQuit must be non-NULL and *pQuit must be FALSE at the time VsGetPendingReply() is call
ed
. VsGetPendingReply() checks this address periodically, and if it discovers that *pQuit is TRUE, it will cancel and return imm
ed
iately.VSDRVR_API Int32 VsGetReply( VS_HANDLE vsHnd, LPSTR reply, Int32 nChars, Int32 waitMs ) Arguments: vsHnd handle value return
ed
by VsOpen() reply pointer to buffer that will receive the filter reply nChars the number of characters sought waitMs the maximum time, in ms, to wait for the reply Purpose: this function is us
ed
to exploit those filter
com
mands that are not directly provid
ed
by other functions, and most programmers will not ne
ed
to use it. If the reply is not receiv
ed
in the time indicat
ed
by waitMs, or if less than nChars are receiv
ed
, the function returns with an unsuccessful status code. Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsIsDiagnostic( VS_HANDLE vsHnd ) Arguments: vsHnd handle value return
ed
by VsOpen() Function: determines whether the filter is in the diagnostic mode that is us
ed
at the factory for setup and calibration. This
com
mand is reserv
ed
for CRI use only. Returns: TRUE if diagnostic, FALSE otherwise. VSDRVR_API Int32 VsIsEngag
ed
InBeam( VS_HANDLE vsHnd ) Arguments: vsHnd handle value return
ed
by VsOpen() Function: determines whether the filter is engag
ed
in the beam, when configur
ed
into certain CRI systems. This function is reserv
ed
for CRI use only Returns: TRUE if engag
ed
in the beam, FALSE otherwise VSDRVR_API Int32 VsSendBinary( VS_HANDLE vsHnd, char *bin, Int32 nChars, BOOL clearEcho ) Arguments: vsHnd handle value return
ed
by VsOpen() bin pointer a buffer that contains binary data to be sent to the filter nChars the number of binary characters to be sent clearEcho flag indicating whether to clear echo characters from the queue Purpose: this routine sends binary blocks of data to the filter. This is only necessary when programming calibration data to the filter, and it is not anticipat
ed
that this function will be necessary in any normal use. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsSend
Com
mand( VS_HANDLE vsHnd, LPCSTR cmd, BOOL sendEolChar) Arguments: vsHnd handle value return
ed
by VsOpen() cmd pointer to the
com
mand to be sent to the filter sendEolChar flag indicating whether to append the end-of-line character or not Purpose: this function sends the
com
mand in cmd to the filter, and appends an end-of-line terminator (or not) bas
ed
on sendEolChar. It automatically retrieves and discards the character echo of this
com
mand by the VariSpec. It does not automatically retrieve the reply, if any, from the VariSpec. Returns: TRUE if successful, FALSE otherwise Notes: The parameter sendEolChar should normally be true in all cases, unless one is sending individual character
com
mands such as the ??or 慇?
com
mands describ
ed
in the VariSpec user抯 manual.VSDRVR_API Int32 VsSetStag
eD
rive( VS_HANDLE vsHnd, Int32 stage, Int32 drive ) Arguments: vsHnd handle value return
ed
by VsOpen() stage stage number whose drive level is to be adjust
ed
drive drive level, in counts, for that stage Purpose: this function provides a way to manually adjust the drive levels at each of the filter抯 optical stages. It is normally us
ed
only during manufacture, and is not a function that most software programs will have any reason to use. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsThermistorCounts( VS_HANDLE vsHnd, Int32 *pCounts ) Arguments: vsHnd handle value return
ed
by VsOpen() pCounts pointer to int that will receive the thermistor signal, in counts Purpose: this function provides a way to determine the signal level, in counts, at the thermistor. It is normally us
ed
only during manufacture, and is not a function that most software programs will have any reason to use. Returns: TRUE if successful, FALSE otherwise Notes: none
php.ini-development
[PHP]

;;;;;;;;;;;;;;;;;;;
; About php.ini   ;
;;;;;;;;;;;;;;;;;;;
; PHP's initialization file, generally called php.ini, is responsible for
; configuring many of the aspects of PHP's behavior.

; PHP attempts to find and load this configuration from a number of locations.
; The following is a summary of its search order:
; 1. SAPI module specific location.
; 2. The PHPRC environment variable. (As of PHP 5.2.0)
; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)
; 4. Current working directory (except CLI)
; 5. The web server's directory (for SAPI modules), or directory of PHP
; (otherwise in Windows)
; 6. The directory from the --with-config-file-path compile time option, or the
; Windows directory (C:\windows or C:\winnt)
; See the PHP docs for more specific information.
; http://php.net/configuration.file

; The syntax of the file is extremely simple. Whitespace and lines
; beginning with a semicolon are silently ignored (as you probably guessed).
; Section headers (e.g. [Foo]) are also silently ignored, even though
; they might mean something in the future.

; Directives following the section heading [PATH=/www/mysite] only
; apply to PHP files in the /www/mysite directory. Directives
; following the section heading [HOST=www.example.com] only apply to
; PHP files served from www.example.com. Directives set in these
; special sections cannot be overridden by user-defined INI files or
; at runtime. Currently, [PATH=] and [HOST=] sections only work under
; CGI/FastCGI.
; http://php.net/ini.sections

; Directives are specified using the following syntax:
; directive = value
; Directive names are *case sensitive* - foo=bar is different from FOO=bar.
; Directives are variables used to configure PHP or PHP extensions.
; There is no name validation. If PHP can't find an expected
; directive because it is not set or is mistyped, a default value will be used.

; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one
; of the INI constants (On, Off, True, False, Yes, No and None) or an expression
; (e.g. E_ALL & ~E_NOTICE), a quoted string ("bar"), or a reference to a
; previously set variable or directive (e.g. ${foo})

; Expressions in the INI file are limited to bitwise operators and parentheses:
; |  bitwise OR
; ^  bitwise XOR
; &  bitwise AND
; ~  bitwise NOT
; !  boolean NOT

; Boolean flags can be turned on using the values 1, On, True or Yes.
; They can be turned off using the values 0, Off, False or No.

; An empty string can be denoted by simply not writing anything after the equal
; sign, or by using the None keyword:
; foo =         ; sets foo to an empty string
; foo = None    ; sets foo to an empty string
; foo = "None"  ; sets foo to the string 'None'

; If you use constants in your value, and these constants belong to a
; dynamically loaded extension (either a PHP extension or a Zend extension),
; you may only use these constants *after* the line that loads the extension.

;;;;;;;;;;;;;;;;;;;
; About this file ;
;;;;;;;;;;;;;;;;;;;
; PHP comes packaged with two INI files. One that is recommended to be used
; in production environments and one that is recommended to be used in
; development environments.

; php.ini-production contains settings which hold security, performance and
; best practices at its core. But please be aware, these settings may break
; compatibility with older or less security conscious applications. We
; recommend using the production ini in production and testing environments.

; php.ini-development is very similar to its production variant, except it's
; much more verbose when it comes to errors. We recommend using the
; development version only in development environments, as errors shown to
; application users can inadvertently leak otherwise secure information.

; This is the php.ini-development INI file.

;;;;;;;;;;;;;;;;;;;
; Quick Reference ;
;;;;;;;;;;;;;;;;;;;
; The following are all the settings which are different in either the production
; or development versions of the INIs with respect to PHP's default behavior.
; Please see the actual settings later in the document for more details as to why
; we recommend these changes in PHP's behavior.

; display_errors
;   Default Value: On
;   Development Value: On
;   Production Value: Off

; display_startup_errors
;   Default Value: Off
;   Development Value: On
;   Production Value: Off

; error_reporting
;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED
;   Development Value: E_ALL
;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT

; html_errors
;   Default Value: On
;   Development Value: On
;   Production Value: On

; log_errors
;   Default Value: Off
;   Development Value: On
;   Production Value: On

; max_input_time
;   Default Value: -1 (Unlimited)
;   Development Value: 60 (60 seconds)
;   Production Value: 60 (60 seconds)

; output_buffering
;   Default Value: Off
;   Development Value: 4096
;   Production Value: 4096

; register_argc_argv
;   Default Value: On
;   Development Value: Off
;   Production Value: Off

; request_order
;   Default Value: None
;   Development Value: "GP"
;   Production Value: "GP"

; session.gc_divisor
;   Default Value: 100
;   Development Value: 1000
;   Production Value: 1000

; session.hash_bits_per_character
;   Default Value: 4
;   Development Value: 5
;   Production Value: 5

; short_open_tag
;   Default Value: On
;   Development Value: Off
;   Production Value: Off

; track_errors
;   Default Value: Off
;   Development Value: On
;   Production Value: Off

; url_rewriter.tags
;   Default Value: "a=href,area=href,frame=src,form=,fieldset="
;   Development Value: "a=href,area=href,frame=src,input=src,form=fakeentry"
;   Production Value: "a=href,area=href,frame=src,input=src,form=fakeentry"

; variables_order
;   Default Value: "EGPCS"
;   Development Value: "GPCS"
;   Production Value: "GPCS"

;;;;;;;;;;;;;;;;;;;;
; php.ini Options  ;
;;;;;;;;;;;;;;;;;;;;
; Name for user-defined php.ini (.htaccess) files. Default is ".user.ini"
;user_ini.filename = ".user.ini"

; To disable this feature set this option to an empty value
;user_ini.filename =

; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)
;user_ini.cache_ttl = 300

;;;;;;;;;;;;;;;;;;;;
; Language Options ;
;;;;;;;;;;;;;;;;;;;;

; Enable the PHP scripting language engine under Apache.
; http://php.net/engine
engine = On

; This directive determines whether or not PHP will recognize code between
; <?php and ?> tags as PHP source which should be processed as such. It is
; generally recommended that <?php ?> should be used and that this feature
; should be disabled, as enabling it may result in issues when generating XML
; documents, however this remains supported for backward compatibility reasons.
; Note that this directive does not control the <?= shorthand tag, which can be
; used regardless of this directive.
; Default Value: On
; Development Value: Off
; Production Value: Off
; http://php.net/short-open-tag
short_open_tag = Off

; Allow ASP-style <% %> tags.
; http://php.net/asp-tags
asp_tags = Off

; The number of significant digits displayed in floating point numbers.
; http://php.net/precision
precision = 14

; Output buffering is a mechanism for controlling how much output data
; (excluding headers and cookies) PHP should keep internally before pushing that
; data to the client. If your application's output exceeds this setting, PHP
; will send that data in chunks of roughly the size you specify.
; Turning on this setting and managing its maximum buffer size can yield some
; interesting side-effects depending on your application and web server.
; You may be able to send headers and cookies after you've already sent output
; through print or echo. You also may see performance benefits if your server is
; emitting less packets due to buffered output versus PHP streaming the output
; as it gets it. On production servers, 4096 bytes is a good setting for performance
; reasons.
; Note: Output buffering can also be controlled via Output Buffering Control
;   functions.
; Possible Values:
;   On = Enabled and buffer is unlimited. (Use with caution)
;   Off = Disabled
;   Integer = Enables the buffer and sets its maximum size in bytes.
; Note: This directive is hardcoded to Off for the CLI SAPI
; Default Value: Off
; Development Value: 4096
; Production Value: 4096
; http://php.net/output-buffering
output_buffering = 4096

; You can redirect all of the output of your scripts to a function. For
; example, if you set output_handler to "mb_output_handler", character
; encoding will be transparently converted to the specified encoding.
; Setting any output handler automatically turns on output buffering.
; Note: People who wrote portable scripts should not depend on this ini
;   directive. Instead, explicitly set the output handler using ob_start().
;   Using this ini directive may cause problems unless you know what script
;   is doing.
; Note: You cannot use both "mb_output_handler" with "ob_iconv_handler"
;   and you cannot use both "ob_gzhandler" and "zlib.output_compression".
; Note: output_handler must be empty if this is set 'On' !!!!
;   Instead you must use zlib.output_handler.
; http://php.net/output-handler
;output_handler =

; Transparent output compression using the zlib library
; Valid values for this option are 'off', 'on', or a specific buffer size
; to be used for compression (default is 4KB)
; Note: Resulting chunk size may vary due to nature of compression. PHP
;   outputs chunks that are few hundreds bytes each as a result of
;   compression. If you prefer a larger chunk size for better
;   performance, enable output_buffering in addition.
; Note: You need to use zlib.output_handler instead of the standard
;   output_handler, or otherwise the output will be corrupted.
; http://php.net/zlib.output-compression
zlib.output_compression = Off

; http://php.net/zlib.output-compression-level
;zlib.output_compression_level = -1

; You cannot specify additional output handlers if zlib.output_compression
; is activated here. This setting does the same as output_handler but in
; a different order.
; http://php.net/zlib.output-handler
;zlib.output_handler =

; Implicit flush tells PHP to tell the output layer to flush itself
; automatically after every output block. This is equivalent to calling the
; PHP function flush() after each and every call to print() or echo() and each
; and every HTML block. Turning this option on has serious performance
; implications and is generally recommended for debugging purposes only.
; http://php.net/implicit-flush
; Note: This directive is hardcoded to On for the CLI SAPI
implicit_flush = Off

; The unserialize callback function will be called (with the undefined class'
; name as parameter), if the unserializer finds an undefined class
; which should be instantiated. A warning appears if the specified function is
; not defined, or if the function doesn't include/implement the missing class.
; So only set this entry, if you really want to implement such a
; callback-function.
unserialize_callback_func =

; When floats & doubles are serialized, store serialize_precision significant
; digits after the floating point. The default value ensures that when floats
; are decoded with unserialize, the data will remain the same.
serialize_precision = 17

; open_basedir, if set, limits all file operations to the defined directory
; and below. This directive makes most sense if used in a per-directory
; or per-virtualhost web server configuration file. This directive is
; *NOT* affected by whether Safe Mode is turned On or Off.
; http://php.net/open-basedir
;open_basedir =

; This directive allows you to disable certain functions for security reasons.
; It receives a comma-delimited list of function names. This directive is
; *NOT* affected by whether Safe Mode is turned On or Off.
; http://php.net/disable-functions
disable_functions =

; This directive allows you to disable certain classes for security reasons.
; It receives a comma-delimited list of class names. This directive is
; *NOT* affected by whether Safe Mode is turned On or Off.
; http://php.net/disable-classes
disable_classes =

; Colors for Syntax Highlighting mode. Anything that's acceptable in
; <span style="color: ???????"> would work.
; http://php.net/syntax-highlighting
;highlight.string  = #DD0000
;highlight.comment = #FF9900
;highlight.keyword = #007700
;highlight.default = #0000BB
;highlight.html    = #000000

; If enabled, the request will be allowed to complete even if the user aborts
; the request. Consider enabling it if executing long requests, which may end up
; being interrupted by the user or a browser timing out. PHP's default behavior
; is to disable this feature.
; http://php.net/ignore-user-abort
;ignore_user_abort = On

; Determines the size of the realpath cache to be used by PHP. This value should
; be increased on systems where PHP opens many files to reflect the quantity of
; the file operations performed.
; http://php.net/realpath-cache-size
;realpath_cache_size = 16k

; Duration of time, in seconds for which to cache realpath information for a given
; file or directory. For systems with rarely changing files, consider increasing this
; value.
; http://php.net/realpath-cache-ttl
;realpath_cache_ttl = 120

; Enables or disables the circular reference collector.
; http://php.net/zend.enable-gc
zend.enable_gc = On

; If enabled, scripts may be written in encodings that are incompatible with
; the scanner. CP936, Big5, CP949 and Shift_JIS are the examples of such
; encodings. To use this feature, the mbstring extension must be enabled.
; Default: Off
;zend.multibyte = Off

; Allows to set the default encoding for the scripts. This value will be used
; unless "declare(encoding=...)" directive appears at the top of the script.
; Only affects if zend.multibyte is set.
; Default: ""
;zend.script_encoding =

;;;;;;;;;;;;;;;;;
; Miscellaneous ;
;;;;;;;;;;;;;;;;;

; Decides whether PHP may expose the fact that it is installed on the server
; (e.g. by adding its signature to the Web server header). It is no security
; threat in any way, but it makes it possible to determine whether you use PHP
; on your server or not.
; http://php.net/expose-php
expose_php = On

;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;

; Maximum execution time of each script, in seconds
; http://php.net/max-execution-time
; Note: This directive is hardcoded to 0 for the CLI SAPI
max_execution_time = 30

; Maximum amount of time each script may spend parsing request data. It's a good
; idea to limit this time on production servers in order to eliminate unexpectedly
; long running scripts.
; Note: This directive is hardcoded to -1 for the CLI SAPI
; Default Value: -1 (Unlimited)
; Development Value: 60 (60 seconds)
; Production Value: 60 (60 seconds)
; http://php.net/max-input-time
max_input_time = 60

; Maximum input variable nesting level
; http://php.net/max-input-nesting-level
;max_input_nesting_level = 64

; How many GET/POST/COOKIE input variables may be accepted
; max_input_vars = 1000

; Maximum amount of memory a script may consume (128MB)
; http://php.net/memory-limit
memory_limit = 128M

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Error handling and logging ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

; This directive informs PHP of which errors, warnings and notices you would like
; it to take action for. The recommended way of setting values for this
; directive is through the use of the error level constants and bitwise
; operators. The error level constants are below here for convenience as well as
; some
; common settings and their meanings.
; By default, PHP is set to take action on all errors, notices and warnings EXCEPT
; those related to E_NOTICE and E_STRICT, which together cover best practices and
; recommended coding standards in PHP. For performance reasons, this is the
; recommended error reporting setting. Your production server shouldn't be wasting
; resources complaining about best practices and coding standards. That's what
; development servers and development settings are for.
; Note: The php.ini-development file has this setting as E_ALL. This
; means it pretty much reports everything, which is exactly what you want during
; development and early testing.
;
; Error Level Constants:
; E_ALL               - All errors and warnings (includes E_STRICT as of PHP 5.4.0)
; E_ERROR             - fatal run-time errors
; E_RECOVERABLE_ERROR - almost fatal run-time errors
; E_WARNING           - run-time warnings (non-fatal errors)
; E_PARSE             - compile-time parse errors
; E_NOTICE            - run-time notices (these are warnings which often result
;                       from a bug in your code, but it's possible that it was
;                       intentional (e.g., using an uninitialized variable and
;                       relying on the fact it's automatically initialized to an
;                       empty string)
; E_STRICT            - run-time notices, enable to have PHP suggest changes
;                       to your code which will ensure the best interoperability
;                       and forward compatibility of your code
; E_CORE_ERROR        - fatal errors that occur during PHP's initial startup
; E_CORE_WARNING      - warnings (non-fatal errors) that occur during PHP's
;                       initial startup
; E_COMPILE_ERROR     - fatal compile-time errors
; E_COMPILE_WARNING   - compile-time warnings (non-fatal errors)
; E_USER_ERROR        - user-generated error message
; E_USER_WARNING      - user-generated warning message
; E_USER_NOTICE       - user-generated notice message
; E_DEPRECATED        - warn about code that will not work in future versions
;                       of PHP
; E_USER_DEPRECATED   - user-generated deprecation warnings
;
; Common Values:
;   E_ALL (Show all errors, warnings and notices including coding standards.)
;   E_ALL & ~E_NOTICE (Show all errors, except for notices)
;   E_ALL & ~E_NOTICE & ~E_STRICT (Show all errors, except for notices and coding standards warnings.)
;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR (Show only errors)
; Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED
; Development Value: E_ALL
; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT
; http://php.net/error-reporting
error_reporting = E_ALL

; This directive controls whether or not, and where, PHP will output errors,
; notices and warnings. Error output is very useful during development, but
; it could be very dangerous in production environments. Depending on the code
; which is triggering the error, sensitive information could potentially leak
; out of your application such as database usernames and passwords or worse.
; It's recommended that errors be logged on production servers rather than
; having the errors sent to STDOUT.
; Possible Values:
;   Off = Do not display any errors
;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)
;   On or stdout = Display errors to STDOUT
; Default Value: On
; Development Value: On
; Production Value: Off
; http://php.net/display-errors
display_errors = On

; The display of errors which occur during PHP's startup sequence is handled
; separately from display_errors. PHP's default behavior is to suppress those
; errors from clients. Turning the display of startup errors on can be useful in
; debugging configuration problems. But, it's strongly recommended that you
; leave this setting off on production servers.
; Default Value: Off
; Development Value: On
; Production Value: Off
; http://php.net/display-startup-errors
display_startup_errors = On

; Besides displaying errors, PHP can also log errors to locations such as a
; server-specific log, STDERR, or a location specified by the error_log
; directive found below. While errors should not be displayed on production
; servers, they should still be monitored, and logging is a great way to do that.
; Default Value: Off
; Development Value: On
; Production Value: On
; http://php.net/log-errors
log_errors = On

; Set maximum length of log_errors. In error_log, information about the source is
; added. The default is 1024; 0 applies no maximum length at all.
; http://php.net/log-errors-max-len
log_errors_max_len = 1024

; Do not log repeated messages. Repeated errors must occur in the same file on the
; same line unless ignore_repeated_source is set true.
; http://php.net/ignore-repeated-errors
ignore_repeated_errors = Off

; Ignore source of message when ignoring repeated messages. When this setting
; is On you will not log errors with repeated messages from different files or
; source lines.
; http://php.net/ignore-repeated-source
ignore_repeated_source = Off

; If this parameter is set to Off, then memory leaks will not be shown (on
; stdout or in the log). This only has an effect in a debug
com
pile, and if ; error reporting includes E_WARNING in the allow
ed
list ; http://php.net/report-memleaks report_memleaks = On ; This setting is on by default. ;report_zend_debug = 0 ; Store the last error/warning message in $php_errormsg (boolean). Setting this value ; to On can assist in debugging and is appropriate for development servers. It should ; however be disabl
ed
on production servers. ; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/track-errors track_errors = On ; Turn off normal error reporting and emit XML-RPC error XML ; http://php.net/xmlrpc-errors ;xmlrpc_errors = 0 ; An XML-RPC faultCode ;xmlrpc_error_number = 0 ; When PHP displays or logs an error, it has the capability of formatting the ; error message as HTML for easier reading. This directive controls whether ; the error message is formatt
ed
as HTML or not. ; Note: This directive is hardcod
ed
to Off for the CLI SAPI ; Default Value: On ; Development Value: On ; Production value: On ; http://php.net/html-errors html_errors = On ; If html_errors is set to On *and* docref_root is not empty, then PHP ; produces clickable error messages that direct to a page describing the error ; or function causing the error in detail. ; You can download a copy of the PHP manual from http://php.net/docs ; and change docref_root to the base URL of your local copy including the ; leading '/'. You must also specify the file extension being us
ed
including ; the dot. PHP's default behavior is to leave these settings empty, in which ; case no links to documentation are generat
ed
. ; Note: Never use this feature for production boxes. ; http://php.net/docref-root ; Examples ;docref_root = "/phpmanual/" ; http://php.net/docref-ext ;docref_ext = .html ; String to output before an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-prepend-string ; Example: ;error_prepend_string = "
" ; String to output after an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-append-string ; Example: ;error_append_string = "
" ; Log errors to specifi
ed
file. PHP's default behavior is to leave this value ; empty. ; http://php.net/error-log ; Example: ;error_log = php_errors.log ; Log errors to syslog (Event Log on Windows). ;error_log = syslog ;windows.show_crt_warning ; Default value: 0 ; Development value: 0 ; Production value: 0 ;;;;;;;;;;;;;;;;; ; Data Handling ; ;;;;;;;;;;;;;;;;; ; The separator us
ed
in PHP generat
ed
URLs to separate arguments. ; PHP's default setting is "&". ; http://php.net/arg-separator.output ; Example: ;arg_separator.output = "&" ; List of separator(s) us
ed
by PHP to parse input URLs into variables. ; PHP's default setting is "&". ; NOTE: Every character in this directive is consider
ed
as separator! ; http://php.net/arg-separator.input ; Example: ;arg_separator.input = ";&" ; This directive determines which super global arrays are register
ed
when PHP ; starts up. G,P,C,E & S are abbreviations for the following respective super ; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty ; paid for the registration of these arrays and because ENV is not as
com
monly ; us
ed
as the others, ENV is not re
com
mend
ed
on productions servers. You ; can still get access to the environment variables through getenv() should you ; ne
ed
to. ; Default Value: "EGPCS" ; Development Value: "GPCS" ; Production Value: "GPCS"; ; http://php.net/variables-order variables_order = "GPCS" ; This directive determines which super global data (G,P,C,E & S) should ; be register
ed
into the super global array REQUEST. If so, it also determines ; the order in which that data is register
ed
; The values for this directive are specified in the same manner as the
; variables_order directive, EXCEPT one. Leaving this value empty will cause
; PHP to use the value set in the variables_order directive. It does not mean
; it will leave the super globals array REQUEST empty.
; Default Value: None
; Development Value: "GP"
; Production Value: "GP"
; http://php.net/request-order
request_order = "GP"

; This directive determines whether PHP registers $argv & $argc each time it
; runs. $argv contains an array of all the arguments passed to PHP when a script
; is invoked. $argc contains an integer representing the number of arguments
; that were passed when the script was invoked. These arrays are extremely
; useful when running scripts from the command line. When this directive is
; enabled, registering these variables consumes CPU cycles and memory each time
; a script is executed. For performance reasons, this feature should be disabled
; on production servers.
; Note: This directive is hardcoded to On for the CLI SAPI
; Default Value: On
; Development Value: Off
; Production Value: Off
; http://php.net/register-argc-argv
register_argc_argv = Off

; When enabled, the ENV, REQUEST and SERVER variables are created when they're
; first used (Just In Time) instead of when the script starts. If these
; variables are not used within a script, having this directive on will result
; in a performance gain. The PHP directive register_argc_argv must be disabled
; for this directive to have any effect.
; http://php.net/auto-globals-jit
auto_globals_jit = On

; Whether PHP will read the POST data.
; This option is enabled by default.
; Most likely, you won't want to disable this option globally. It causes $_POST
; and $_FILES to always be empty; the only way you will be able to read the
; POST data will be through the php://input stream wrapper. This can be useful
; to proxy requests or to process the POST data in a memory efficient fashion.
; http://php.net/enable-post-data-reading
;enable_post_data_reading = Off

; Maximum size of POST data that PHP will accept.
; Its value may be 0 to disable the limit. It is ignored if POST data reading
; is disabled through enable_post_data_reading.
; http://php.net/post-max-size
post_max_size = 8M

; Automatically add files before PHP document.
; http://php.net/auto-prepend-file
auto_prepend_file =

; Automatically add files after PHP document.
; http://php.net/auto-append-file
auto_append_file =

; By default, PHP will output a character encoding using
; the Content-type: header. To disable sending of the charset, simply
; set it to be empty.
;
; PHP's built-in default is text/html
; http://php.net/default-mimetype
default_mimetype = "text/html"

; PHP's default character set is set to empty.
; http://php.net/default-charset
;default_charset = "UTF-8"

; Always populate the $HTTP_RAW_POST_DATA variable. PHP's default behavior is
; to disable this feature. If post reading is disabled through
; enable_post_data_reading, $HTTP_RAW_POST_DATA is *NOT* populated.
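Values such as post_max_size = 8M above use PHP's shorthand byte notation, where the K, M and G suffixes are powers-of-1024 multipliers. A minimal Python sketch of that conversion (an illustration, not PHP's actual parser, which is more forgiving about malformed input):

```python
# Convert PHP shorthand byte values ("8M", "2M", "512K", "1G") to bytes.
# Illustrative sketch only; PHP's real ini parser handles odd strings loosely.
def shorthand_to_bytes(value):
    value = value.strip()
    multipliers = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1].upper()
    if suffix in multipliers:
        return int(value[:-1]) * multipliers[suffix]
    return int(value)

print(shorthand_to_bytes("8M"))   # 8388608
print(shorthand_to_bytes("2M"))   # 2097152
```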
; http://php.net/always-populate-raw-post-data
;always_populate_raw_post_data = On

;;;;;;;;;;;;;;;;;;;;;;;;;
; Paths and Directories ;
;;;;;;;;;;;;;;;;;;;;;;;;;

; UNIX: "/path1:/path2"
;include_path = ".:/php/includes"
;
; Windows: "\path1;\path2"
;include_path = ".;c:\php\includes"
;
; PHP's default setting for include_path is ".;/path/to/php/pear"
; http://php.net/include-path

; The root of the PHP pages, used only if nonempty.
; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root
; if you are running php as a CGI under any web server (other than IIS)
; see documentation for security issues. The alternate is to use the
; cgi.force_redirect configuration below
; http://php.net/doc-root
doc_root =

; The directory under which PHP opens the script using /~username used only
; if nonempty.
; http://php.net/user-dir
user_dir =

; Directory in which the loadable extensions (modules) reside.
; http://php.net/extension-dir
; extension_dir = "./"
; On windows:
; extension_dir = "ext"

; Whether or not to enable the dl() function. The dl() function does NOT work
; properly in multithreaded servers, such as IIS or Zeus, and is automatically
; disabled on them.
; http://php.net/enable-dl
enable_dl = Off

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers. Left undefined, PHP turns this on by default. You can
; turn it off here AT YOUR OWN RISK
; **You CAN safely turn this off for IIS, in fact, you MUST.**
; http://php.net/cgi.force-redirect
;cgi.force_redirect = 1

; if cgi.nph is enabled it will force cgi to always send Status: 200 with
; every request. PHP's default behavior is to disable this feature.
;cgi.nph = 1

; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape
; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
; will look for to know it is OK to continue execution. Setting this variable MAY
; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
; http://php.net/cgi.redirect-status-env
;cgi.redirect_status_env =

; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI. PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is. For more information on PATH_INFO, see the cgi specs. Setting
; this to 1 will cause PHP CGI to fix its paths to conform to the spec. A setting
; of zero causes PHP to behave as before. Default is 1. You should fix your scripts
; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.
; http://php.net/cgi.fix-pathinfo
;cgi.fix_pathinfo=1

; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate
; security tokens of the calling client. This allows IIS to define the
; security context that the request runs under. mod_fastcgi under Apache
; does not currently support this feature (03/17/2002)
; Set to 1 if running under IIS. Default is zero.
; http://php.net/fastcgi.impersonate
;fastcgi.impersonate = 1

; Disable logging through FastCGI connection. PHP's default behavior is to enable
; this feature.
;fastcgi.logging = 0

; cgi.rfc2616_headers configuration option tells PHP what type of headers to
; use when sending HTTP response code. If it's set to 0, PHP sends a Status: header
; that is supported
; by Apache. When this option is set to 1, PHP will send an
; RFC2616-compliant header.
; Default is zero.
; http://php.net/cgi.rfc2616-headers
;cgi.rfc2616_headers = 0

;;;;;;;;;;;;;;;;
; File Uploads ;
;;;;;;;;;;;;;;;;

; Whether to allow HTTP file uploads.
; http://php.net/file-uploads
file_uploads = On

; Temporary directory for HTTP uploaded files (will use system default if not
; specified).
; http://php.net/upload-tmp-dir
;upload_tmp_dir =

; Maximum allowed size for uploaded files.
; http://php.net/upload-max-filesize
upload_max_filesize = 2M

; Maximum number of files that can be uploaded via a single request
max_file_uploads = 20

;;;;;;;;;;;;;;;;;;
; Fopen wrappers ;
;;;;;;;;;;;;;;;;;;

; Whether to allow the treatment of URLs (like http:// or ftp://) as files.
; http://php.net/allow-url-fopen
allow_url_fopen = On

; Whether to allow include/require to open URLs (like http:// or ftp://) as files.
; http://php.net/allow-url-include
allow_url_include = Off

; Define the anonymous ftp password (your email address). PHP's default setting
; for this is empty.
; http://php.net/from
;from="john@doe.com"

; Define the User-Agent string. PHP's default setting for this is empty.
; http://php.net/user-agent
;user_agent="PHP"

; Default timeout for socket based streams (seconds)
; http://php.net/default-socket-timeout
default_socket_timeout = 60

; If your scripts have to deal with files from Macintosh systems,
; or you are running on a Mac and need to deal with files from
; unix or win32 systems, setting this flag will cause PHP to
; automatically detect the EOL character in those files so that
; fgets() and file() will work regardless of the source of the file.
; http://php.net/auto-detect-line-endings
;auto_detect_line_endings = Off

;;;;;;;;;;;;;;;;;;;;;;
; Dynamic Extensions ;
;;;;;;;;;;;;;;;;;;;;;;

; If you wish to have an extension loaded automatically, use the following
; syntax:
;
;   extension=modulename.extension
;
; For example, on Windows:
;
;   extension=msql.dll
;
; ... or under UNIX:
;
;   extension=msql.so
;
; ... or with a path:
;
;   extension=/path/to/extension/msql.so
;
; If you only provide the name of the extension, PHP will look for it in its
; default extension directory.
;
; Windows Extensions
; Note that ODBC support is built in, so no dll is needed for it.
; Note that many DLL files are located in the extensions/ (PHP 4) ext/ (PHP 5)
; extension folders as well as the separate PECL DLL download (PHP 5).
; Be sure to appropriately set the extension_dir directive.
;
;extension=php_bz2.dll
;extension=php_curl.dll
;extension=php_fileinfo.dll
;extension=php_gd2.dll
;extension=php_gettext.dll
;extension=php_gmp.dll
;extension=php_intl.dll
;extension=php_imap.dll
;extension=php_interbase.dll
;extension=php_ldap.dll
;extension=php_mbstring.dll
;extension=php_exif.dll      ; Must be after mbstring as it depends on it
;extension=php_mysql.dll
;extension=php_mysqli.dll
;extension=php_oci8.dll      ; Use with Oracle 10gR2 Instant Client
;extension=php_oci8_11g.dll  ; Use with Oracle 11gR2 Instant Client
;extension=php_openssl.dll
;extension=php_pdo_firebird.dll
;extension=php_pdo_mysql.dll
;extension=php_pdo_oci.dll
;extension=php_pdo_odbc.dll
;extension=php_pdo_pgsql.dll
;extension=php_pdo_sqlite.dll
;extension=php_pgsql.dll
;extension=php_pspell.dll
;extension=php_shmop.dll

; The MIBS data available in the PHP distribution must be installed.
; See http://www.php.net/manual/en/snmp.installation.php
;extension=php_snmp.dll

;extension=php_soap.dll
;extension=php_sockets.dll
;extension=php_sqlite3.dll
;extension=php_sybase_ct.dll
;extension=php_tidy.dll
;extension=php_xmlrpc.dll
;extension=php_xsl.dll

;;;;;;;;;;;;;;;;;;;
; Module Settings ;
;;;;;;;;;;;;;;;;;;;

[CLI Server]
; Whether the CLI web server uses ANSI color coding in its terminal output.
cli_server.color = On

[Date]
; Defines the default timezone used by the date functions
; http://php.net/date.timezone
;date.timezone =

; http://php.net/date.default-latitude
;date.default_latitude = 31.7667

; http://php.net/date.default-longitude
;date.default_longitude = 35.2333

; http://php.net/date.sunrise-zenith
;date.sunrise_zenith = 90.583333

; http://php.net/date.sunset-zenith
;date.sunset_zenith = 90.583333

[filter]
; http://php.net/filter.default
;filter.default = unsafe_raw

; http://php.net/filter.default-flags
;filter.default_flags =

[iconv]
;iconv.input_encoding = ISO-8859-1
;iconv.internal_encoding = ISO-8859-1
;iconv.output_encoding = ISO-8859-1

[intl]
;intl.default_locale =
; This directive allows you to produce PHP errors when some error
; happens within intl functions. The value is the level of the error produced.
; Default is 0, which does not produce any errors.
;intl.error_level = E_WARNING

[sqlite]
; http://php.net/sqlite.assoc-case
;sqlite.assoc_case = 0

[sqlite3]
;sqlite3.extension_dir =

[Pcre]
;PCRE library backtracking limit.
; http://php.net/pcre.backtrack-limit
;pcre.backtrack_limit=100000

;PCRE library recursion limit.
;Please note that if you set this value to a high number you may consume all
;the available process stack and eventually crash PHP (due to reaching the
;stack size limit imposed by the Operating System).
; http://php.net/pcre.recursion-limit
;pcre.recursion_limit=100000

[Pdo]
; Whether to pool ODBC connections. Can be one of "strict", "relaxed" or "off"
; http://php.net/pdo-odbc.connection-pooling
;pdo_odbc.connection_pooling=strict

;pdo_odbc.db2_instance_name

[Pdo_mysql]
; If mysqlnd is used: Number of cache slots for the internal result set cache
; http://php.net/pdo_mysql.cache_size
pdo_mysql.cache_size = 2000

; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
; http://php.net/pdo_mysql.default-socket
pdo_mysql.default_socket=

[Phar]
; http://php.net/phar.readonly
;phar.readonly = On

; http://php.net/phar.require-hash
;phar.require_hash = On

;phar.cache_list =

[mail function]
; For Win32 only.
; http://php.net/smtp
SMTP = localhost
; http://php.net/smtp-port
smtp_port = 25

; For Win32 only.
; http://php.net/sendmail-from
;sendmail_from = me@example.com

; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
; http://php.net/sendmail-path
;sendmail_path =

; Force the addition of the specified parameters to be passed as extra parameters
; to the sendmail binary. These parameters will always replace the value of
; the 5th parameter to mail(), even in safe mode.
;mail.force_extra_parameters =

; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename
mail.add_x_header = On

; The path to a log file that will log all mail() calls. Log entries include
; the full path of the script, line number, To address and headers.
;mail.log =
; Log mail to syslog (Event Log on Windows).
;mail.log = syslog

[SQL]
; http://php.net/sql.safe-mode
sql.safe_mode = Off

[ODBC]
; http://php.net/odbc.default-db
;odbc.default_db = Not yet implemented

; http://php.net/odbc.default-user
;odbc.default_user = Not yet implemented

; http://php.net/odbc.default-pw
;odbc.default_pw = Not yet implemented

; Controls the ODBC cursor model.
; Default: SQL_CURSOR_STATIC (default).
;odbc.default_cursortype

; Allow or prevent persistent links.
; http://php.net/odbc.allow-persistent
odbc.allow_persistent = On

; Check that a connection is still valid before reuse.
; http://php.net/odbc.check-persistent
odbc.check_persistent = On

; Maximum number of persistent links. -1 means no limit.
; http://php.net/odbc.max-persistent
odbc.max_persistent = -1

; Maximum number of links (persistent + non-persistent). -1 means no limit.
; http://php.net/odbc.max-links
odbc.max_links = -1

; Handling of LONG fields. Returns number of bytes to variables. 0 means
; passthru.
; http://php.net/odbc.defaultlrl
odbc.defaultlrl = 4096

; Handling of binary data. 0 means passthru, 1 return as is, 2 convert to char.
; See the documentation on odbc_binmode and odbc_longreadlen for an explanation
; of odbc.defaultlrl and odbc.defaultbinmode
; http://php.net/odbc.defaultbinmode
odbc.defaultbinmode = 1

;birdstep.max_links = -1

[Interbase]
; Allow or prevent persistent links.
ibase.allow_persistent = 1

; Maximum number of persistent links. -1 means no limit.
ibase.max_persistent = -1

; Maximum number of links (persistent + non-persistent). -1 means no limit.
ibase.max_links = -1

; Default database name for ibase_connect().
;ibase.default_db =

; Default username for ibase_connect().
;ibase.default_user =

; Default password for ibase_connect().
;ibase.default_password =

; Default charset for ibase_connect().
;ibase.default_charset =

; Default timestamp format.
ibase.timestampformat = "%Y-%m-%d %H:%M:%S"

; Default date format.
ibase.dateformat = "%Y-%m-%d"

; Default time format.
ibase.timeformat = "%H:%M:%S"

[MySQL]
; Allow accessing, from PHP's perspective, local files with LOAD DATA statements
; http://php.net/mysql.allow_local_infile
mysql.allow_local_infile = On

; Allow or prevent persistent links.
; http://php.net/mysql.allow-persistent
mysql.allow_persistent = On

; If mysqlnd is used: Number of cache slots for the internal result set cache
; http://php.net/mysql.cache_size
mysql.cache_size = 2000

; Maximum number of persistent links. -1 means no limit.
; http://php.net/mysql.max-persistent
mysql.max_persistent = -1

; Maximum number of links (persistent + non-persistent). -1 means no limit.
; http://php.net/mysql.max-links
mysql.max_links = -1

; Default port number for mysql_connect(). If unset, mysql_connect() will use
; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the
; compile-time value defined MYSQL_PORT (in that order). Win32 will only look
; at MYSQL_PORT.
; http://php.net/mysql.default-port
mysql.default_port =

; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
; http://php.net/mysql.default-socket
mysql.default_socket =

; Default host for mysql_connect() (doesn't apply in safe mode).
; http://php.net/mysql.default-host
mysql.default_host =

; Default user for mysql_connect() (doesn't apply in safe mode).
; http://php.net/mysql.default-user
mysql.default_user =

; Default password for mysql_connect() (doesn't apply in safe mode).
; Note that this is generally a *bad* idea to store passwords in this file.
; *Any* user with PHP access can run 'echo get_cfg_var("mysql.default_password")
; and reveal this password! And of course, any users with read access to this
; file will be able to reveal the password as well.
; http://php.net/mysql.default-password
mysql.default_password =

; Maximum time (in seconds) for connect timeout. -1 means no limit
; http://php.net/mysql.connect-timeout
mysql.connect_timeout = 60

; Trace mode. When trace_mode is active (=On), warnings for table/index scans and
; SQL-Errors will be displayed.
; http://php.net/mysql.trace-mode
mysql.trace_mode = Off

[MySQLi]
; Maximum number of persistent links. -1 means no limit.
; http://php.net/mysqli.max-persistent
mysqli.max_persistent = -1

; Allow accessing, from PHP's perspective, local files with LOAD DATA statements
; http://php.net/mysqli.allow_local_infile
;mysqli.allow_local_infile = On

; Allow or prevent persistent links.
; http://php.net/mysqli.allow-persistent
mysqli.allow_persistent = On

; Maximum number of links. -1 means no limit.
; http://php.net/mysqli.max-links
mysqli.max_links = -1

; If mysqlnd is used: Number of cache slots for the internal result set cache
; http://php.net/mysqli.cache_size
mysqli.cache_size = 2000

; Default port number for mysqli_connect(). If unset, mysqli_connect() will use
; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the
; compile-time value defined MYSQL_PORT (in that order). Win32 will only look
; at MYSQL_PORT.
; http://php.net/mysqli.default-port
mysqli.default_port = 3306

; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
; http://php.net/mysqli.default-socket
mysqli.default_socket =

; Default host for mysql_connect() (doesn't apply in safe mode).
; http://php.net/mysqli.default-host
mysqli.default_host =

; Default user for mysql_connect() (doesn't apply in safe mode).
; http://php.net/mysqli.default-user
mysqli.default_user =

; Default password for mysqli_connect() (doesn't apply in safe mode).
; Note that this is generally a *bad* idea to store passwords in this file.
; *Any* user with PHP access can run 'echo get_cfg_var("mysqli.default_pw")
; and reveal this password! And of course, any users with read access to this
; file will be able to reveal the password as well.
; http://php.net/mysqli.default-pw
mysqli.default_pw =

; Allow or prevent reconnect
mysqli.reconnect = Off

[mysqlnd]
; Enable / Disable collection of general statistics by mysqlnd which can be
; used
; to tune and monitor MySQL operations.
; http://php.net/mysqlnd.collect_statistics
mysqlnd.collect_statistics = On

; Enable / Disable collection of memory usage statistics by mysqlnd which can be
; used to tune and monitor MySQL operations.
; http://php.net/mysqlnd.collect_memory_statistics
mysqlnd.collect_memory_statistics = On

; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.
; http://php.net/mysqlnd.net_cmd_buffer_size
;mysqlnd.net_cmd_buffer_size = 2048

; Size of a pre-allocated buffer used for reading data sent by the server in
; bytes.
; http://php.net/mysqlnd.net_read_buffer_size
;mysqlnd.net_read_buffer_size = 32768

[OCI8]
; Connection: Enables privileged connections using external
; credentials (OCI_SYSOPER, OCI_SYSDBA)
; http://php.net/oci8.privileged-connect
;oci8.privileged_connect = Off

; Connection: The maximum number of persistent OCI8 connections per
; process. Using -1 means no limit.
; http://php.net/oci8.max-persistent
;oci8.max_persistent = -1

; Connection: The maximum number of seconds a process is allowed to
; maintain an idle persistent connection. Using -1 means idle
; persistent connections will be maintained forever.
; http://php.net/oci8.persistent-timeout
;oci8.persistent_timeout = -1

; Connection: The number of seconds that must pass before issuing a
; ping during oci_pconnect() to check the connection validity. When
; set to 0, each oci_pconnect() will cause a ping. Using -1 disables
; pings completely.
; http://php.net/oci8.ping-interval
;oci8.ping_interval = 60

; Connection: Set this to a user chosen connection class to be used
; for all pooled server requests with Oracle 11g Database Resident
; Connection Pooling (DRCP). To use DRCP, this value should be set to
; the same string for all web servers running the same application,
; the database pool must be configured, and the connection string must
; specify to use a pooled server.
;oci8.connection_class =

; High Availability: Using On lets PHP receive Fast Application
; Notification (FAN) events generated when a database node fails. The
; database must also be configured to post FAN events.
;oci8.events = Off

; Tuning: This option enables statement caching, and specifies how
; many statements to cache. Using 0 disables statement caching.
; http://php.net/oci8.statement-cache-size
;oci8.statement_cache_size = 20

; Tuning: Enables statement prefetching and sets the default number of
; rows that will be fetched automatically after statement execution.
; http://php.net/oci8.default-prefetch
;oci8.default_prefetch = 100

; Compatibility. Using On means oci_close() will not close
; oci_connect() and oci_new_connect() connections.
; http://php.net/oci8.old-oci-close-semantics
;oci8.old_oci_close_semantics = Off

[PostgreSQL]
; Allow or prevent persistent links.
; http://php.net/pgsql.allow-persistent
pgsql.allow_persistent = On

; Detect broken persistent links always with pg_pconnect().
; The auto reset feature requires a little overhead.
; http://php.net/pgsql.auto-reset-persistent
pgsql.auto_reset_persistent = Off

; Maximum number of persistent links. -1 means no limit.
; http://php.net/pgsql.max-persistent
pgsql.max_persistent = -1

; Maximum number of links (persistent+non persistent). -1 means no limit.
; http://php.net/pgsql.max-links
pgsql.max_links = -1

; Ignore PostgreSQL backend Notice messages or not.
; Notice message logging requires a little overhead.
; http://php.net/pgsql.ignore-notice
pgsql.ignore_notice = 0

; Log PostgreSQL backend Notice messages or not.
; Unless pgsql.ignore_notice=0, the module cannot log notice messages.
; http://php.net/pgsql.log-notice
pgsql.log_notice = 0

[Sybase-CT]
; Allow or prevent persistent links.
; http://php.net/sybct.allow-persistent
sybct.allow_persistent = On

; Maximum number of persistent links. -1 means no limit.
; http://php.net/sybct.max-persistent
sybct.max_persistent = -1

; Maximum number of links (persistent + non-persistent). -1 means no limit.
; http://php.net/sybct.max-links
sybct.max_links = -1

; Minimum server message severity to display.
; http://php.net/sybct.min-server-severity
sybct.min_server_severity = 10

; Minimum client message severity to display.
; http://php.net/sybct.min-client-severity
sybct.min_client_severity = 10

; Set per-context timeout
; http://php.net/sybct.timeout
;sybct.timeout=

;sybct.packet_size

; The maximum time in seconds to wait for a connection attempt to succeed
; before returning failure.
; Default: one minute
;sybct.login_timeout=

; The name of the host you claim to be connecting from, for display by sp_who.
; Default: none
;sybct.hostname=

; Allows you to define how often deadlocks are to be retried. -1 means "forever".
; Default: 0
;sybct.deadlock_retry_count=

[bcmath]
; Number of decimal digits for all bcmath functions.
; http://php.net/bcmath.scale
bcmath.scale = 0

[browscap]
; http://php.net/browscap
;browscap = extra/browscap.ini

[Session]
; Handler used to store/retrieve data.
; http://php.net/session.save-handler
session.save_handler = files

; Argument passed to save_handler. In the case of files, this is the path
; where data files are stored. Note: Windows users have to change this
; variable in order to use PHP's session functions.
;
; The path can be defined as:
;
;     session.save_path = "N;/path"
;
; where N is an integer. Instead of storing all the session files in
; /path, what this will do is use subdirectories N-levels deep, and
; store the session data in those directories. This is useful if you
; or your OS have problems with lots of files in one directory, and is
; a more efficient layout for servers that handle lots of sessions.
;
; NOTE 1: PHP will not create this directory structure automatically.
;         You can use the script in the ext/session dir for that purpose.
; NOTE 2: See the section on garbage collection below if you choose to
;         use subdirectories for session storage
;
; The file storage module creates files using mode 600 by default.
; You can change that by using
;
;     session.save_path = "N;MODE;/path"
;
; where MODE is the octal representation of the mode. Note that this
; does not overwrite the process's umask.
; http://php.net/session.save-path
;session.save_path = "/tmp"

; Whether to use cookies.
; http://php.net/session.use-cookies
session.use_cookies = 1

; http://php.net/session.cookie-secure
;session.cookie_secure =

; This option forces PHP to fetch and use a cookie for storing and maintaining
; the session id. We encourage this operation as it's very helpful in combating
; session hijacking when not specifying and managing your own session id. It is
; not the end all be all of session hijacking defense, but it's a good start.
; http://php.net/session.use-only-cookies
session.use_only_cookies = 1

; Name of the session (used as cookie name).
; http://php.net/session.name
session.name = PHPSESSID

; Initialize session on request startup.
; http://php.net/session.auto-start
session.auto_start = 0

; Lifetime in seconds of cookie or, if 0, until browser is restarted.
; http://php.net/session.cookie-lifetime
session.cookie_lifetime = 0

; The path for which the cookie is valid.
; http://php.net/session.cookie-path
session.cookie_path = /

; The domain for which the cookie is valid.
; http://php.net/session.cookie-domain
session.cookie_domain =

; Whether or not to add the httpOnly flag to the cookie, which makes it
; inaccessible to browser scripting languages such as JavaScript.
; http://php.net/session.cookie-httponly
session.cookie_httponly =

; Handler used to serialize data. php is the standard serializer of PHP.
; http://php.net/session.serialize-handler
session.serialize_handler = php

; Defines the probability that the 'garbage collection' process is started
; on every session initialization. The probability is calculated by using
; gc_probability/gc_divisor. Where session.gc_probability is the numerator
; and gc_divisor is the denominator in the equation. Setting this value to 1
; when the session.gc_divisor value is 100 will give you approximately a 1% chance
; the gc will run on any given request.
; Default Value: 1
; Development Value: 1
; Production Value: 1
; http://php.net/session.gc-probability
session.gc_probability = 1

; Defines the probability that the 'garbage collection' process is started on every
; session initialization. The probability is calculated by using the following equation:
; gc_probability/gc_divisor. Where session.gc_probability is the numerator and
; session.gc_divisor is the denominator in the equation. Setting this value to 1
; when the session.gc_divisor value is 100 will give you approximately a 1% chance
; the gc will run on any given request. Increasing this value to 1000 will give you
; a 0.1% chance the gc will run on any given request. For high volume production servers,
; this is a more efficient approach.
; Default Value: 100
; Development Value: 1000
; Production Value: 1000
; http://php.net/session.gc-divisor
session.gc_divisor = 1000

; After this number of seconds, stored data will be seen as 'garbage' and
; cleaned up by the garbage collection process.
; http://php.net/session.gc-maxlifetime
session.gc_maxlifetime = 1440

; NOTE: If you are using the subdirectory option for storing session files
;       (see session.save_path above), then garbage collection does *not*
;       happen automatically. You will need to do your own garbage
;       collection through a shell script, cron entry, or some other method.
;       For example, the following script is the equivalent of
;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):
;          find /path/to/sessions -cmin +24 -type f | xargs rm

; Check HTTP Referer to invalidate externally stored URLs containing ids.
; HTTP_REFERER has to contain this substring for the session to be
; considered as valid.
; http://php.net/session.referer-check
session.referer_check =

; How many bytes to read from the file.
; http://php.net/session.entropy-length
;session.entropy_length = 32

; Specified here to create the session id.
; http://php.net/session.entropy-file
; Defaults to /dev/urandom
; On systems that don't have /dev/urandom but do have /dev/arandom, this will default to /dev/arandom
; If neither are found at compile time, the default is no entropy file.
; On windows, setting the entropy_length setting will activate the
; Windows random source (using the CryptoAPI)
;session.entropy_file = /dev/urandom

; Set to {nocache,private,public,} to determine HTTP caching aspects
; or leave this empty to avoid sending anti-caching headers.
; http://php.net/session.cache-limiter
session.cache_limiter = nocache

; Document expires after n minutes.
; http://php.net/session.cache-expire
session.cache_expire = 180

; trans sid support is disabled by default.
; Use of trans sid may risk your users' security.
; Use this option with caution.
; - A user may send a URL containing an active session ID
;   to another person via email/irc/etc.
; - A URL that contains an active session ID may be stored
;   on a publicly accessible computer.
; - A user may access your site with the same session ID
;   always using a URL stored in the browser's history or bookmarks.
; http://php.net/session.use-trans-sid
session.use_trans_sid = 0

; Select a hash function for use in generating session ids.
; Possible Values
;   0  (MD5 128 bits)
;   1  (SHA-1 160 bits)
; This option may also be set to the name of any hash function supported by
; the hash extension. A list of available hashes is returned by the hash_algos()
; function.
; http://php.net/session.hash-function
session.hash_function = 0

; Define how many bits are stored in each character when converting
; the binary hash data to something readable.
; Possible values:
;   4  (4 bits: 0-9, a-f)
;   5  (5 bits: 0-9, a-v)
;   6  (6 bits: 0-9, a-z, A-Z, "-", ",")
; Default Value: 4
; Development Value: 5
; Production Value: 5
; http://php.net/session.hash-bits-per-character
session.hash_bits_per_character = 5

; The URL rewriter will look for URLs in a defined set of HTML tags.
; form/fieldset are special; if you include them here, the rewriter will
; add a hidden <input> field with the info which is otherwise appended
; to URLs. If you want XHTML conformity, remove the form entry.
; Note that all valid entries require a "=", even if no value follows.
; Default Value: "a=href,area=href,frame=src,form=,fieldset="
; Development Value: "a=href,area=href,frame=src,input=src,form=fakeentry"
; Production Value: "a=href,area=href,frame=src,input=src,form=fakeentry"
; http://php.net/url-rewriter.tags
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"

; Enable upload progress tracking in $_SESSION
; Default Value: On
; Development Value: On
; Production Value: On
; http://php.net/session.upload-progress.enabled
;session.upload_progress.enabled = On

; Cleanup the progress information as soon as all POST data has been read
; (i.e. upload completed).
; Default Value: On
; Development Value: On
; Production Value: On
; http://php.net/session.upload-progress.cleanup
;session.upload_progress.cleanup = On

; A prefix used for the upload progress key in $_SESSION
; Default Value: "upload_progress_"
; Development Value: "upload_progress_"
; Production Value: "upload_progress_"
; http://php.net/session.upload-progress.prefix
;session.upload_progress.prefix = "upload_progress_"

; The index name (concatenated with the prefix) in $_SESSION
; containing the upload progress information
; Default Value: "PHP_SESSION_UPLOAD_PROGRESS"
; Development Value: "PHP_SESSION_UPLOAD_PROGRESS"
; Production Value: "PHP_SESSION_UPLOAD_PROGRESS"
; http://php.net/session.upload-progress.name
;session.upload_progress.name = "PHP_SESSION_UPLOAD_PROGRESS"

; How frequently the upload progress should be updated.
; Given either in percentages (per-file), or in bytes
; Default Value: "1%"
; Development Value: "1%"
; Production Value: "1%"
; http://php.net/session.upload-progress.freq
;session.upload_progress.freq = "1%"

; The minimum delay between updates, in seconds
; Default Value: 1
; Development Value: 1
; Production Value: 1
; http://php.net/session.upload-progress.min-freq
;session.upload_progress.min_freq = "1"

[MSSQL]
; Allow or prevent persistent links.
mssql.allow_persistent = On

; Maximum number of persistent links. -1 means no limit.
mssql.max_persistent = -1

; Maximum number of links (persistent+non persistent). -1 means no limit.
mssql.max_links = -1

; Minimum error severity to display.
mssql.min_error_severity = 10

; Minimum message severity to display.
mssql.min_message_severity = 10

; Compatibility mode with old versions of PHP 3.0.
mssql.compatability_mode = Off

; Connect timeout
;mssql.connect_timeout = 5

; Query timeout
;mssql.timeout = 60

; Valid range 0 - 2147483647. Default = 4096.
;mssql.textlimit = 4096

; Valid range 0 - 2147483647. Default = 4096.
;mssql.textsize = 4096

; Limits the number of records in each batch. 0 = all records in one batch.
;mssql.batchsize = 0

; Specify how datetime and datetim4 columns are returned
; On => Returns data converted to SQL server settings
; Off => Returns values as YYYY-MM-DD hh:mm:ss
;mssql.datetimeconvert = On

; Use NT authentication when connecting to the server
mssql.secure_connection = Off

; Specify max number of processes. -1 = library default
; msdlib defaults to 25
; FreeTDS defaults to 4096
;mssql.max_procs = -1

; Specify client character set.
; If empty or not set, the client charset from freetds.conf is used
; This is only used when compiled with FreeTDS
;mssql.charset = "ISO-8859-1"

[Assertion]
; Assert(expr); active by default.
; http://php.net/assert.active
;assert.active = On

; Issue a PHP warning for each failed assertion.
; http://php.net/assert.warning
;assert.warning = On

; Don't bail out by default.
; http://php.net/assert.bail
;assert.bail = Off

; User-function to be called if an assertion fails.
; http://php.net/assert.callback
;assert.callback = 0

; Eval the expression with current error_reporting(). Set to true if you want
; error_reporting(0) around the eval().
; http://php.net/assert.quiet-eval
;assert.quiet_eval = 0

[COM]
; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs
; http://php.net/com.typelib-file
;com.typelib_file =

; allow Distributed-COM calls
; http://php.net/com.allow-dcom
;com.allow_dcom = true

; autoregister constants of a component's typelib on com_load()
; http://php.net/com.autoregister-typelib
;com.autoregister_typelib = true

; register constants casesensitive
; http://php.net/com.autoregister-casesensitive
;
com
.autoregister_casesensitive = false ; show warnings on duplicate constant registrations ; http://php.net/
com
.autoregister-verbose ;
com
.autoregister_verbose = true ; The default character set code-page to use when passing strings to and from
COM
objects. ; Default: system ANSI code page ;
com
.code_page= [mbstring] ; language for internal character representation. ; http://php.net/mbstring.language ;mbstring.language = Japanese ; internal/script encoding. ; Some encoding cannot work as internal encoding. ; (e.g. SJIS, BIG5, ISO-2022-*) ; http://php.net/mbstring.internal-encoding ;mbstring.internal_encoding = EUC-JP ; http input encoding. ; http://php.net/mbstring.http-input ;mbstring.http_input = auto ; http output encoding. mb_output_handler must be ; register
ed
as output buffer to function ; http://php.net/mbstring.http-output ;mbstring.http_output = SJIS ; enable automatic encoding translation according to ; mbstring.internal_encoding setting. Input chars are ; convert
ed
to internal encoding by setting this to On. ; Note: Do _not_ use automatic encoding translation for ; portable libs/applications. ; http://php.net/mbstring.encoding-translation ;mbstring.encoding_translation = Off ; automatic encoding detection order. ; auto means ; http://php.net/mbstring.detect-order ;mbstring.detect_order = auto ; substitute_character us
ed
when character cannot be convert
ed
; one from another ; http://php.net/mbstring.substitute-character ;mbstring.substitute_character = none; ; overload(replace) single byte functions by mbstring functions. ; mail(), ereg(), etc are overload
ed
by mb_send_mail(), mb_ereg(), ; etc. Possible values are 0,1,2,4 or
com
bination of them. ; For example, 7 for overload everything. ; 0: No overload ; 1: Overload mail() function ; 2: Overload str*() functions ; 4: Overload ereg*() functions ; http://php.net/mbstring.func-overload ;mbstring.func_overload = 0 ; enable strict encoding detection. ;mbstring.strict_detection = Off ; This directive specifies the regex pattern of content types for which mb_output_handler() ; is activat
ed
. ; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\+xml) ;mbstring.http_output_conv_mimetype= [gd] ; Tell the jpeg decode to ignore warnings and try to create ; a gd image. The warning will then be display
ed
as notices ; disabl
ed
by default ; http://php.net/gd.jpeg-ignore-warning ;gd.jpeg_ignore_warning = 0 [exif] ; Exif UNICODE user
com
ments are handl
ed
as UCS-2BE/UCS-2LE and JIS as JIS. ; With mbstring support this will automatically be convert
ed
into the encoding ; given by corresponding encode setting. When empty mbstring.internal_encoding ; is us
ed
. For the decode settings you can distinguish between motorola and ; intel byte order. A decode setting cannot be empty. ; http://php.net/exif.encode-unicode ;exif.encode_unicode = ISO-8859-15 ; http://php.net/exif.decode-unicode-motorola ;exif.decode_unicode_motorola = UCS-2BE ; http://php.net/exif.decode-unicode-intel ;exif.decode_unicode_intel = UCS-2LE ; http://php.net/exif.encode-jis ;exif.encode_jis = ; http://php.net/exif.decode-jis-motorola ;exif.decode_jis_motorola = JIS ; http://php.net/exif.decode-jis-intel ;exif.decode_jis_intel = JIS [Tidy] ; The path to a default tidy configuration file to use when using tidy ; http://php.net/tidy.default-config ;tidy.default_config = /usr/local/lib/php/default.tcfg ; Should tidy clean and repair output automatically? ; WARNING: Do not use this option if you are generating non-html content ; such as dynamic images ; http://php.net/tidy.clean-output tidy.clean_output = Off [soap] ; Enables or disables WSDL caching feature. ; http://php.net/soap.wsdl-cache-enabl
ed
soap.wsdl_cache_enabl
ed
=1 ; Sets the directory name where SOAP extension will put cache files. ; http://php.net/soap.wsdl-cache-dir soap.wsdl_cache_dir="/tmp" ; (time to live) Sets the number of second while cach
ed
file will be us
ed
; instead of original one. ; http://php.net/soap.wsdl-cache-ttl soap.wsdl_cache_ttl=86400 ; Sets the size of the cache limit. (Max. number of WSDL files to cache) soap.wsdl_cache_limit = 5 [sysvshm] ; A default size of the shar
ed
memory segment ;sysvshm.init_mem = 10000 [ldap] ; Sets the maximum number of open links or -1 for unlimit
ed
. ldap.max_links = -1 [mcrypt] ; For more information about mcrypt settings see http://php.net/mcrypt-module-open ; Directory where to load mcrypt algorithms ; Default:
Com
pil
ed
in into libmcrypt (usually /usr/local/lib/libmcrypt) ;mcrypt.algorithms_dir= ; Directory where to load mcrypt modes ; Default:
Com
pil
ed
in into libmcrypt (usually /usr/local/lib/libmcrypt) ;mcrypt.modes_dir= [dba] ;dba.default_handler= [curl] ; A default value for the CURLOPT_CAINFO option. This is requir
ed
to be an ; absolute path. ;curl.cainfo = ; Local Variables: ; tab-width: 4 ; End:
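For reference, the session.upload_progress.* directives above ship commented out. A minimal sketch of enabling that block explicitly, e.g. in a separate override file loaded after the main ini, might look like the following; the values are simply the defaults quoted in the comments above, so uncommenting them changes nothing functionally, but it makes the effective configuration visible:

```ini
; Hypothetical override file (e.g. conf.d/upload-progress.ini) that turns on
; upload progress tracking in $_SESSION with the documented default values.
session.upload_progress.enabled = On
session.upload_progress.cleanup = On
session.upload_progress.prefix = "upload_progress_"
session.upload_progress.name = "PHP_SESSION_UPLOAD_PROGRESS"
session.upload_progress.freq = "1%"
session.upload_progress.min_freq = "1"
```

The prefix and name concatenate to form the $_SESSION key under which PHP stores the progress data while a POST upload is in flight.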