insert problem ----- inserted value too large for column

tsingdao 2002-06-08 03:45:01
Error message: inserted value too large for column

I'm even using SQL generated by PB (PowerBuilder).

Inserting just two columns still fails.

Gurus, please help!
10 replies
tsingdao 2002-06-10
create table jy(
JYH VARCHAR2(8) NOT NULL,
JYLX CHAR(1),
JYSJ DATE,
KHBM VARCHAR2(7),
HWBM VARCHAR2(3),
HWHD NUMBER(5),
HWGG NUMBER(5),
JS NUMBER(5),
TSCLW CHAR(1),
JZSJ DATE,
JZR VARCHAR2(10),
JE NUMBER(6,2),
JGBM VARCHAR2(3),
HWMC VARCHAR2(50),
JGMC VARCHAR2(50),
KHMC VARCHAR2(50),
GJS NUMBER(5,2),
DJ NUMBER(5,2),
XSYBH CHAR(3),
XSYXM VARCHAR2(30)
)
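
To compare these declared widths against what is actually being sent, the byte limits can be read straight from the data dictionary. A minimal sketch, assuming the table was created as JY in the current schema:

SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH
  FROM USER_TAB_COLUMNS
 WHERE TABLE_NAME = 'JY'
 ORDER BY COLUMN_ID;

For CHAR and VARCHAR2 columns, DATA_LENGTH is the limit in bytes, which is what the "inserted value too large for column" check is applied against.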
tsingdao 2002-06-10
LENGTHB('悟空')
---------------
4

INSERT INTO "JYJL"
( "JYH",
"JYLX",
"JYSJ",
"KHBM",
"HWBM",
"HWHD",
"HWGG",
"JS",
"TSCLW",
"JZSJ",
"JZR",
"JE",
"JGBM",
"HWMC",
"JGMC",
"KHMC",
"GJS",
"DJ",
"XSYBH",
"XSYXM" )
VALUES ( '0',
'2',
'08-JUN-02',
':ls_khbm',
' :ls',
200,
300,
20,
'0',
'08-JUN-02',
':ls_jzr',
55.22,
':ls_jg',
':ls_h',
':ls',
':lhmc',
2.0,
55,
':ls',
':ls_xsyxm ')
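
If those quoted ':ls_...' placeholders reach the server as literal strings rather than as bound PowerBuilder host variables, several of them are already longer than the columns declared in the CREATE TABLE above (assuming JYJL has the same layout as jy). LENGTHB makes the comparison explicit; a rough check:

SELECT LENGTHB(' :ls')     AS hwbm_bytes,  -- 4 bytes, but HWBM is VARCHAR2(3)
       LENGTHB(':ls_khbm') AS khbm_bytes,  -- 8 bytes, but KHBM is VARCHAR2(7)
       LENGTHB(':ls_jg')   AS jgbm_bytes   -- 6 bytes, but JGBM is VARCHAR2(3)
  FROM DUAL;

Any one of those three literals is enough to raise the error; if they are really bind variables, the same byte-length check should be run on the variable values instead.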
tsingdao 2002-06-10
Character set: American_america.zhs16cgb231280

What was suggested above doesn't seem to work.

Even inserting a single column, a single character, still fails.
I'm using VARCHAR2.
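
Under ZHS16CGB231280 each Chinese character occupies 2 bytes, so even a single character overflows a 1-byte column. A hypothetical demonstration (t_demo is an illustrative table, not one from this thread):

CREATE TABLE t_demo (c VARCHAR2(1));
-- fails with "inserted value too large for column":
-- '悟' needs 2 bytes in this character set, but the column allows only 1
INSERT INTO t_demo VALUES ('悟');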
3yugui 2002-06-09
Haha, Y14.
jlandzpa 2002-06-09
Isn't the error message clear enough already?
ATCG 2002-06-09
One more thing: at the root this is still a character set issue.

-- ATCG
ATCG 2002-06-09
Mr. 执着:
Let me tell you, the full message must be 'ORA-01401: inserted value too large for column'.

tsingdao (楚留情):
Hello! This is a basic-concept issue. As Mr. 弱水 said,
'the system error message is already perfectly clear'.
Beyond that, use the following statement to check the byte length of the column values that contain Chinese characters.
Suppose the column is NAME VARCHAR2(8)
and the value is '悟空':
SELECT LENGTHB(NAME) FROM TABLE;
If the returned length is 6,
then I have to tell you that your Chinese characters are three bytes each. In that case, inserting '孙悟空'
of course cannot work, because '孙悟空' is 9 bytes. Surprising, isn't it???

What operating system are you using?
What is your Oracle Server character set? Is it UTF8?
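
The server-side character set can be checked directly from the data dictionary; a small sketch:

SELECT PARAMETER, VALUE
  FROM NLS_DATABASE_PARAMETERS
 WHERE PARAMETER IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

The value reported above (American_america.zhs16cgb231280) looks like a client-side NLS_LANG setting; it is the database's NLS_CHARACTERSET that determines how many bytes each Chinese character occupies in a column.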

tomcat1022 2002-06-09
Please post the statement that produces the error.
backlove 2002-06-08
9494 (exactly right).
KingSunSha 2002-06-08
Check it: some of the values you are inserting must be bigger than the column definitions.

The system error message is already perfectly clear.
Contents Overview 1 Lesson 1: Concepts – Locks and Lock Manager 3 Lesson 2: Concepts – Batch and Transaction 31 Lesson 3: Concepts – Locks and Applications 51 Lesson 4: Information Collection and Analysis 63 Lesson 5: Concepts – Formulating and Implementing Resolution 81 Module 4: Troubleshooting Locking and Blocking Overview At the end of this module, you will be able to:  Discuss how lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.  Describe the various transaction types and how transactions differ from batches.  Describe how to troubleshoot blocking and locking issues.  Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.  Formulate hypothesis to resolve locking and blocking issues. Lesson 1: Concepts – Locks and Lock Manager This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Describe locking architecture used by SQL Server.  Identify the various lock modes used by SQL Server.  Discuss lock compatibility and concurrent access.  Identify different types of lock resources.  Discuss dynamic locking and lock escalation.  Differentiate locks, latches, and other SQL Server internal “locking” mechanism such as spinlocks and other synchronization objects. Recommended Reading  Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  SOX000821700049 – SQL 7.0 How to interpret lock resource Ids  SOX000925700237 – TITLE: Lock escalation in SQL 7.0  SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction Locking Concepts Delivery Tip Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file. Transactions in SQL Server provide the ACID properties: Atomicity A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an “all or nothing” operation. Consistency An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent. Isolation Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert, but it will see only complete changes. Durability After a transaction commits, its effects will persist even if there are system failures. Consistency and isolation are the most important in describing SQL Server’s locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation. Definition of Dependency: A set of transactions can run concurrently if their outputs are disjoint from the union of one another’s input and output sets. For example, if T1 writes some object that is in T2’s input or output set, there is a dependency between T1 and T2. Bad Dependencies These include lost updates, dirty reads, non-repeatable reads, and phantoms. 
ANSI SQL Isolation Levels An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, REPEATABLE READ and SERIALIZABLE isolation levels were synonymous. There was no way to prevent non-repeatable reads while not preventing phantoms. By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example: USE pubs GO SET TRANSACTION ISOLATION LEVEL REPEATABLE READ GO DBCC USEROPTIONS GO Multigranular Locking Multigranular Locking In our example, if one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believe they have exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it has locked exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. Intent lock is the answer to this problem. Intent Lock Intent Lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends on placing shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table. Lock Mode The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table: SELECT * FROM master.dbo.spt_values WHERE type = N'L' However, the req_mode column of master.dbo.syslockinfo has lock mode code that is one less than the code values shown here. For example, value of req_mode = 3 represents the Shared lock mode rather than the Schema Modification lock mode. Lock Compatibility These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and we have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly, they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX. 
Another type of table lock is a schema stability lock (Sch-S) and is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks. Locking Resources Delivery Tip Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides. SQL Server can lock these resources: Item Description DB A database. File A database file Index An entire index of a table. Table An entire table, including all data and indexes. Extent A contiguous group of data pages or index pages. Page An 8-KB data page or index page. Key Row lock within an index. Key-range A key-range. Used to lock ranges between records in a table to prevent phantom insertions or deletions into a set of records. Ensures serializable transactions. RID A Row Identifier. Used to individually lock a single row within a table. Application A lock resource defined by an application. The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that resource is already locked. Some of the resources have “sub-resources.” The followings are sub-resources displayed by the sp_lock output: Database Lock Sub-Resources: Full Database Lock (default) [BULK-OP-DB] – Bulk Operation Lock for Database [BULK-OP-LOG] – Bulk Operation Lock for Log Table Lock Sub-Resources: Full Table Lock (default) [UPD-STATS] – Update statistics Lock [COMPILE] – Compile Lock Index Lock sub-Resources: Full Index Lock (default) [INDEX_ID] – Index ID Lock [INDEX_NAME] – Index Name Lock [BULK_ALLOC] – Bulk Allocation Lock [DEFRAG] – Defragmentation Lock For more information, see also… SOX000821700049 SQL 7.0 How to interpret lock resource Ids Lock Resource Block The resource type has the following resource block format: Resource Type (Code) Content DB (2) Data 1: sub-resource; Data 2: 0; Data 3: 0 File (3) Data 1: File ID; Data 2: 0; Data 3: 0 Index (4) Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID Table (5) Data 1: Object ID; Data 2: sub-resource; Data 3: 0. Page (6) Data 1: Page Number; Data 3: 0. Key (7) Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key Extent (8) Data 1: Extent ID; Data 3: 0. RID (9) Data 1: RID; Data 3: 0. Application (10) Data 1: Application resource name The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode value from this column using the information above, let us assume we have the following value: 0x000705001F83D775010002014F0BEC4E With byte swapping within each field, this can be decoded as: Byte 0: Flag – 0x00 Byte 1: Resource Type – 0x07 (Key) Byte 2-3: DBID – 0x0005 Byte 4-7: ObjectID – 0x 75D7831F (1977058079) Byte 8-9: IndexID – 0x0001 Byte 10-16: Hash Key value – 0x 02014F0BEC4E For more information about how to decode this value, see also… Inside SQL Server 2000, pages 803 and 806. Key Range Locking Key Range Locking To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as WHERE salary BETWEEN 30000 AND 50000 SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level. 
The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap, where columns c1 and c2 where indexed, the hash would contain contributions from c1, c2 and the RID. A key range lock applied to a particular key means that all keys between the value locked and the next value would be locked for all data modification. Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following select was executed in a transaction with isolation level SERIALIZABLE: SELECT * FROM members WHERE first_name between ‘Al’ and ‘Carl’ If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause. Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key Range locking lets SQL Server lock only a much more restrictive area of the table. Impact Key-range locking ensures that these scenarios are SERIALIZABLE:  Range scan query  Singleton fetch of nonexistent row  Delete operation  Insert operation However, the following conditions must be satisfied before key-range locking can occur:  The transaction-isolation level must be set to SERIALIZABLE.  The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data. Key Range Lock Mode Again, the req_mode column of master.dbo.syslockinfo has lock mode code that is one less than the code values shown here. Dynamic Locking When modifying individual rows, SQL Server typically would take row locks to maximize concurrency (for example, OLTP, order-entry application). When scanning larger volumes of data, it would be more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, DSS, data warehouse, reporting). Locking Decision The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a more coarse lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed. Lock Escalation When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction acquired 1250 locks, lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), Lock Manager will attempt escalation again. The Lock Manager looks at the lock memory it is using and if it is more than 40 percent of SQL Server’s allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed. 
It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE=60). Lock escalation may be also done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables. Trace Flag 1211 disables lock escalation. Important Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global and disabling lock escalation could lead to out of memory situations, extremely poor performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, Trace Flag 1200 shows all lock activity so it should not be usable on a production system. For more information, see also… SOX000925700237 “TITLE: SQL 7.0 Lock escalation in SQL 7.0” Lock Timeout Application Lock Timeout An application can set lock timeout for a session with the SET option: SET LOCK_TIMEOUT N where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout. Note After a lock timeout any transaction containing the statement, is rolled back or canceled by SQL Server 2000 (bug#352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222 and if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug#352640 is fixed in hotfix build 8.00.266 whereby a lock timeout will only Internal Lock Timeout At time, internal operations within SQL Server will attempt to acquire locks via lock manager. Typically, these lock requests are issued with “no waiting.” For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock: Timeout events, internal lock timeout typically have a duration value of zero. Lock Duration Lock Mode and Transaction Isolation Level For REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matched the search criteria; if not then the update lock is released, otherwise, we get an exclusive lock and make the modification. 
Consider the following query: use northwind go dbcc traceon(3604, 1200, 1211) -- turn on lock tracing -- and disable escalation go set transaction isolation level repeatable read begin tran update dbo.[order details] set discount = convert (real, discount) where discount = 0.0 exec sp_lock Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that exclusive table lock is not held at the end. Warning Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only. Lock Ownership Most of the locking discussion in this lesson relates to locks owned by “transactions.” In addition to transaction, cursor and session can be owners of locks and they both affect how long locks are held. For every row that is fetched, when SCROLL_LOCKS option is used, regardless of the state of a transaction, a cursor lock is held until the next row is fetched or when the cursor is closed. Locks owned by session are outside the scope of a transaction. The duration of these locks are bounded by the connection and the process will continue to hold these locks until the process disconnects. A typical lock owned by session is the database (DB) lock. Locking – Read Committed Scan Under read committed isolation level, when database pages are scanned, shared locks are held when the page is read and processed. The shared locks are released “behind” the scan and allow other transactions to update rows. It is important to note that the shared lock currently acquired will not be released until shared lock for the next page is successfully acquired (this is commonly know as “crabbing”). If the same pages are scanned again, rows may be modified or deleted by other transactions. Locking – Repeatable Read Scan Under repeatable read isolation level, when database pages are scanned, shared locks are held when the page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions to update rows. If the same pages are scanned again, previously scanned rows will not change but new rows may be added by other transactions. Locking – Serializable Read Scan Under serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on scanned key range. SQL Server continues to hold these shared locks until the end of transaction. Because key range locks are held, not only will this prevent other transactions from modifying the rows, no new rows can be inserted. Prefetch and Isolation Level Prefetch and Locking Behavior The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets. The prefetch feature takes a different approach. 
It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations simultaneously. For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetch data pages are then revisited later to avoid waiting for each individual page read to complete in a serial fashion. This data access path requires that a lock be held between the prefetch request and the row lookup to stabilize the row on the page so it is not to be moved by a page split or clustered key update. For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level. With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the entire transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support to release the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because isolation level can be set at the query level via a lock class, but no lower. Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring you may see a single SPID that holds hundreds of Shared KEY or PAG locks even though the connection’s isolation level is READ COMMITTED. Isolation level can be determined from DBCC PSS output. For details about this behavior see “SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction”. Other Locking Mechanism Lock manager does not manage latches and spinlocks. Latches Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activities via DBCC SQLPERF(‘WAITSTATS’) command. Spinlock A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multi-processor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that will check to see if the lock is available and if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a “sleep” (wait) state for a pre-determined amount of time. When it wakes, hopefully, the lock is free and available. 
If not, the loop starts again and it is terminated only when the lock is acquired. The reason for implementing a spinlock is that it is probably less costly to “spin” for a short time rather than yielding the processor. Yielding the processor will force an expensive context switch where:  The old thread’s state must be saved  The new thread’s state must be reloaded  The data stored in the L1 and L2 cache are useless to the processor On a single-processor computer, the loop is not useful because no other thread can be running and thus, no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately. Lesson 2: Concepts – Batch and Transaction This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Review batch processing and error checking.  Review explicit, implicit and autocommit transactions and transaction nesting level.  Discuss how commit and rollback transaction done in stored procedure and trigger affects transaction nesting level.  Discuss various transaction isolation level and their impact on locking.  Discuss the difference between aborting a statement, a transaction, and a batch.  Describe how @@error, @@transcount, and @@rowcount can be used for error checking and handling. Recommended Reading  Charter 12 “Transactions and Triggers”, Inside SQL Server 2000 by Kalen Delaney Batch Definition SQL Profiler Statements and Batches To help further your understanding of what is a batch and what is a statement, you can use SQL Profiler to study the definition of batch and statement.  Try This: Using SQL Profiler to Analyze Batch 1. Log on to a server with Query Analyzer 2. Startup the SQL Profiler against the same server 3. Start a trace using the “StandardSQLProfiler” template 4. Execute the following using Query Analyzer: SELECT @@VERSION SELECT @@SPID The ‘SQL:BatchCompleted’ event is captured by the trace. It shows both the statements as a single batch. 5. Now execute the following using Query Analyzer {call sp_who()} What shows up? The ‘RPC:Completed’ with the sp_who information. RPC is simply another entry point to the SQL Server to call stored procedures with native data types. This allows one to avoid parsing. The ‘RPC:Completed’ event should be considered the same as a batch for the purposes of this discussion. Stop the current trace and start a new trace using the “SQLProfilerTSQL_SPs” template. Issue the same command as outlines in step 5 above. Looking at the output, not only can you see the batch markers but each statement as executed within the batch. Autocommit, Explicit, and Implicit Transaction Autocommit Transaction Mode (Default) Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transaction mode is set on. 
When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode. Explicit Transaction Mode An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement. Implicit Transaction Mode SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure ‘user options’ 2. (Actually, the bit mask 0x2 must be turned on for the user option so you might have to perform an ‘OR’ operation with the existing user option value.) See SQL Server 2000 Books Online on how to turn on implicit transaction under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm). Transaction Nesting Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrements @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed. Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction. If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transaction, it rolls back all the nested transactions, including the outermost transaction. The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one. Each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure if you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more. If @@TRANCOUNT is 0 you are not in a transaction. You can also find the transaction nesting level by checking the sysprocess.open_tran column. See SQL Server 2000 Books Online topic “Nesting Transactions” (acdata.chm::/ac_8_md_06_66nq.htm) for more information. Statement, Transaction, and Batch Abort One batch can have many statements and one transaction can have multiple statements, also. One transaction can span multiple batches and one batch can have multiple transactions. Statement Abort Currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure. 
Let us look closely at the following trigger: CREATE TRIGGER TRG8134 ON TBL8134 AFTER INSERT AS BEGIN SELECT 1/0 SELECT 'Next command in trigger' END To fire the INSERT trigger, the batch could be as simple as ‘INSERT INTO TBL8134 VALUES(1)’. However, the trigger contains two statements that must be executed as part of the batch to satisfy the clients insert request. When the ‘SELECT 1/0’ causes the divide by zero error, a statement abort is issued for the ‘SELECT 1/0’ statement. Batch and Transaction Abort On SQL Server 2000 (and SQL Server 7.0) whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transactional abort. Thus, in the example the statement abort for ‘select 1/0’ promotion results in an entire batch abort. No further statements in the trigger or batch will be executed and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort. However, the rest of the statements within the trigger are executed. This trigger could return ‘Next command in trigger’ as a result set. Once the trigger completes the batch abort promotion takes effect. Conversely, submitting a similar set of statements in a standalone batch can result in different behavior. SELECT 1/0 SELECT 'Next command in batch' Not considering the set option possibilities, a divide by zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided and subsequent SELECT statement can execute. The programmer should add an “if @@ERROR” check immediately after the ‘select 1/0’ to T-SQL execution to control the flow correctly. Aborting and Set Options ARITHABORT If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation when SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error. XACT_ABORT When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example: CREATE TABLE t1 (a int PRIMARY KEY) CREATE TABLE t2 (a int REFERENCES t1(a)) GO INSERT INTO t1 VALUES (1) INSERT INTO t1 VALUES (3) INSERT INTO t1 VALUES (4) INSERT INTO t1 VALUES (6) GO SET XACT_ABORT OFF GO BEGIN TRAN INSERT INTO t2 VALUES (1) INSERT INTO t2 VALUES (2) /* Foreign key error */ INSERT INTO t2 VALUES (3) COMMIT TRAN SELECT 'Continue running batch 1...' GO SET XACT_ABORT ON GO BEGIN TRAN INSERT INTO t2 VALUES (4) INSERT INTO t2 VALUES (5) /* Foreign key error */ INSERT INTO t2 VALUES (6) COMMIT TRAN SELECT 'Continue running batch 2...' GO /* Select shows only keys 1 and 3 added. Key 2 insert failed and was rolled back, but XACT_ABORT was OFF and rest of transaction succeeded. Key 5 insert error with XACT_ABORT ON caused all of the second transaction to roll back. Also note that 'Continue running batch 2...' 
is not Returned to indicate that the batch is aborted. */ SELECT * FROM t2 GO DROP TABLE t2 DROP TABLE t1 GO Compile and Run-time Errors Compile Errors Compile errors are encountered during syntax checks, security checks, and other general operations to prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to immediate abort. The statement is not run and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed. Optimization Errors Optimization errors would include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. Example: “too many tables referenced in the query” error is reported because a “work table” was added to the plan. Runtime Errors Runtime errors are those that are encountered during the execution of the query. Consider the following batch: SELECT * FROM pubs.dbo.titles UPDATE pubs.dbo.authors SET au_lname = au_lname SELECT * FROM foo UPDATE pubs.dbo.authors SET au_lname = au_lname If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution and place a “placeholder” in the query’s execution. The object referenced by the placeholder is resolved until the query is executed. In our example, the execution of the statement “SELECT * FROM foo” will trigger another compile process to resolve the name again. This time, error message 208 is returned. Error: 208, Level 16, State 1, Line 1 Invalid object name 'foo'. Message 208 can be encountered as a runtime or compile error depending on whether the Deferred Name Resolution feature is available. In SQL Server 6.5 this would be considered a compile error and on SQL Server 2000 (and SQL Server7.0) as a runtime error due to Deferred Name Resolution. In the following example, if a trigger referenced authors2, the error is detected as SQL Server attempts to execute the trigger. However, under SQL Server 6.5 the create trigger statement fails because authors2 does not exist at compile time. When errors are encountered in a trigger, generally, the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in pubs database: Create table tblTest(iID int) go create trigger trgInsert on tblTest for INSERT as begin select * from authors select * from authors2 select * from titles end go begin tran select 'Before' insert into tblTest values(1) select 'After' go select @@TRANCOUNT go When run in a batch, the statement and the batch are aborted but the transaction remains active. The follow script illustrates this: begin tran select 'Before' select * from authors2 select 'After' go select @@TRANCOUNT go One other factor in a compile versus runtime error is implicit data type conversions. If you were to run the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0): create table tblData(dtData datetime) go select 1 insert into tblData values(12/13/99) go On SQL Server 6.5, you get an error before execution of the batch begins so no statements are executed and the batch is aborted. 
Error: 206, Level 16, State 2, Line 2 Operand type clash: int is incompatible with datetime On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000 implicit data type conversion treats this as integer division. The integer division of 12/13/99 is 0, so the default date and time value is inserted, no error returned. To correct the problem on either version is to wrap the date string with quotes. See Bug #56118 (sqlbug_70) for more details about this situation. Another example of a runtime error is a 605 message. Error: 605 Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'. A 605 error is always a runtime error. However, depending on the transaction isolation level, (e.g. using the NOLOCK lock hint), established by the SPID the handling of the error can vary. Specifically, a 605 error is considered an ACCESS error. Errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine proper handling based on information or fatal error level. Transaction Error Checking Not all errors cause transactions to automatically rollback. Although it is difficult to determine exactly which errors will rollback transactions and which errors will not, the main idea here is that programmers must perform error checking and handle errors appropriately. Error Handling Raiserror Details Raiserror seems to be a source of confusion but is really rather simple. Raiserror with severity levels of 20 or higher will terminate the connection. Of course, when the connection is terminated a full rollback of any open transaction will immediately be instantiated by the SQL Server (except distributed transaction with DTC involved). Severity levels lower than 20 will simply result in the error message being returned to the client. They do not affect the transaction scope of the connection. Consider the following batch: use pubs begin tran update authors set au_lname = 'smith' raiserror ('This is bad', 19, 1) with log select @@trancount With severity set at 19, the 'select @@trancount' will be executed after the raiserror statement and will return a value of 1. If severity is changed to 20, then the select statement will not run and the connection is broken. Important Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code. Transactions and Triggers (1 of 2) Basic behavior assumes the implicit transactions setting is set to OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, rollback the action, and add an audit table entry. Logically, the insert to the audit table cannot take place before the ROLLBACK action and you would not want to build in the audit table insert into every applications error handler that violated the business rule of the trigger. For more information, see also… SQL Server 2000 Books Online topic “Rollbacks in stored procedure and triggers“ (acdata.chm::/ac_8_md_06_4qcz.htm) IMPLICIT_TRANSACTIONS ON Behavior The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field. Read only versions of the rows contain the code ‘RO’ and read/write versions use ‘RW.’ Whenever someone tries to delete a row with a code ‘RO’ the trigger issues the rollback and logs an audit table entry. 
However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on and only the current trigger would execute and then terminate the batch. However, a second client with implicit transactions on could issue the same delete and the secondary trigger would fire. You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should be allowed but because the transaction scope was restarted because of the implicit transactions setting, they did. Transactions and Triggers (2 of 2) It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set the @@ERROR is using a ‘raiserror’ as the last execution statement in the last trigger to execute. If you modify the example, this following RAISERROR statement will set @@ERROR to 50000: CREATE TRIGGER trgTest on tblTest for INSERT AS BEGIN ROLLBACK INSERT INTO tblAudit VALUES (1) RAISERROR('This is bad', 14,1) END However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger the @@ERROR remains 0. Carrying Forward an Active/Open Transaction It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN or by setting implicit transaction on and doing INSERT, UPDATE, or DELETE. Warning It is never recommended that a trigger call BEGIN TRANSACTION. By doing this you increment the transaction count. Invalid code logic, not calling commit transaction, can lead to a situation where the transaction count remains elevated upon exit of the trigger. Transaction Count The behavior is better explained by understanding how the server works. It does not matter whether you are in a transaction, when a modification takes place the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter. Thus, when we enter a trigger, the transaction count is 1. At the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (insert). In our example, if the connection was already in a transaction and called the second INSERT, since implicit transaction is ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1. However, the value does not return to 0 so the transaction remains open/active. Subsequent triggers are only fired if the transaction count at the end of the trigger remains greater than or equal to 1. The key to continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution. 
If the trigger that performs a rollback has done an explicit begin transaction or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute. Warning Forcing the transaction count after issuing a rollback is dangerous because you can easily loose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a return statement to maintain consistent behavior between a connection with and without implicit transaction settings. This will force the trigger(s) and batch to terminate immediately. One of the methods of dealing with this issue is to run ‘SET IMPLICIT_TRANSACTIONS OFF’ as the first statement of any trigger. Other methods may entails checking @@TRANCOUNT at the end of the trigger and continue to COMMIT the transaction as long as @@TRANCOUNT is greater than 1. Examples The following examples are based on this table: create table tbl50000Insert (iID int NOT NULL) go Note If more than one trigger is used, to guarantee the trigger firing sequence, the sp_settriggerorder command should be used. This command is omitted in these examples to simplify the complexity of the statements. First Example In the first example, the second trigger was never fired and the batch, starting with the insert statement, was aborted. Thus, the print statement was never issued. print('Trigger issues rollback - cancels batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT' end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select 'In Trigger2' select 'Trigger 2 Inserted', * from inserted end go insert into tbl50000Insert values(1) print('---------------------- In same batch') select * from tbl50000Insert go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Second Example The next example shows that since a new transaction is started, the second trigger will be fired and the print statement in the batch will be executed. Note that the insert is rolled back. print('Trigger issues rollback - increases tran count to continue batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran begin tran end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select 'In Trigger2' select 'Trigger 2 Inserted', * from inserted end go insert into tbl50000Insert values(2) print('---------------------- In same batch') select * from tbl50000Insert go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Third Example In the third example, the raiserror statement is used to set the @@ERROR value and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run. 
print('Trigger issues rollback - uses raiserror to set @@ERROR') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran begin tran -- Increase @@trancount to allow -- batch to continue select @@trancount as ‘Trancount’ raiserror('This is from the trigger', 14,1) end go insert into tbl50000Insert values(3) select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' go -- Cleanup drop trigger trg50000Insert go delete from tbl50000Insert Fourth Example For the fourth example, a second trigger is added to illustrate the fact that @@ERROR value set in the first trigger will not be seen in the second trigger nor will it show up in the batch after the second trigger is fired. print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback begin tran -- Increase @@trancount to -- allow batch to continue select @@TRANCOUNT as 'Trancount' raiserror('This is from the trigger', 14,1) end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' end go insert into tbl50000Insert values(4) select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Lesson 3: Concepts – Locks and Applications This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Explain how lock hints are used and their impact.  Discuss the effect on locking when an application uses Microsoft Transaction Server.  Identify the different kinds of deadlocks including distributed deadlock. Recommended Reading  Charter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  Charter 16 “Query Tuning”, Inside SQL Server 2000 by Kalen Delaney Q239753 – Deadlock Situation Not Detected by SQL Server Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as victim Locking Hints UPDLOCK If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it. READPAST READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values of 1, 2, 3, 4, and 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields values 1, 2, 4, 5. Tip READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks. This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being thrown into a table and they should be serviced in approximate insertion order but they do not have to be completely FIFO. If you have 4 worker threads consuming work items from the queue they could each pick up a record using read past locking and then delete the entry from the queue and commit when they're done. If they fail, they could rollback, leaving the entry on the queue for the next worker thread to pick up. 
Caution The READPAST hint is not compatible with HOLDLOCK.  Try This: Using Locking Hints 1. Open a Query Window and connect to the pubs database. 2. Execute the following statements (--Conn 1 is optional to help you keep track of each connection): BEGIN TRANSACTION -- Conn 1 UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032' 3. Open a second connection and execute the following statements: SELECT @@lock_timeout -- Conn 2 GO SELECT * FROM titles SELECT * FROM authors 4. Open a third connection and execute the following statements: SET LOCK_TIMEOUT 0 -- Conn 3 SELECT * FROM titles SELECT * FROM authors 5. Open a fourth connection and execute the following statement: SELECT * FROM titles (READPAST) -- Conn 4 WHERE title_ID < 'C' SELECT * FROM authors How many records were returned? 3 6. Open a fifth connection and execute the following statement: SELECT * FROM titles (NOLOCK) -- Conn 5 WHERE title_ID 0 the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in that 20 seconds, the lock manager no longer checks for deadlocks at each block and detection again only happens every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process. Note Please note the distinction between application lock and other locks’ deadlock detection. For application lock, we do not rollback the transaction of the deadlock victim but simply return a -3 to sp_getapplock, which the application needs to handle itself. Deadlock Resolution How is a deadlock resolved? SQL Server picks one of the connections as a deadlock victim. The victim is chosen based on either which is the least expensive transaction (calculated using the number and size of the log records) to roll back or in which process “SET DEADLOCK_PRIORITY LOW” is specified. The victim’s transaction is rolled back, held locks are released, and SQL Server sends error 1205 to the victim’s client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue. Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction. Symptoms of deadlocking Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlock occurring are:  Clients complain of mysteriously canceled queries when using certain features of an application.  May be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur. Triggers and Deadlock Triggers promote the deadlock priority of the SPID for the life of the trigger execution when the DEADLOCK PRIORITY is not set to low. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim. Warning Bug 235794 is filed against SQL Server 2000 where a blocked SPID that is not a participant of a deadlock may incorrectly be chosen as a deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. 
See KB article Q288752: “Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim” for more information. Distributed Deadlock – Scenario 1 Distributed Deadlocks The term distributed deadlock is ambiguous. There are many types of distributed deadlocks. Scenario 1 Client application opens connection A, begins a transaction, acquires some locks, opens connection B, connection B gets blocked by A but the application is designed to not commit A’s transaction until B completes. Note SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed in scenario #4 in “Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems”. Distributed Deadlock – Scenario 2 Scenario 2 Distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid 62 is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes his work, but spid 61 is blocked by 62 who is blocked by 60. This scenario is described in article “Q239753 - Deadlock Situation Not Detected by SQL Server.” Note SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock. The diagram in the slide illustrates this situation. Resources locked by a spid are below that spid (in a box). Arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires. A circle represents a transaction; spids in the same transaction are shown in the same circle. Distributed Deadlock – Scenario 3 Scenario 3 Distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by Spid 60. Note No version of SQL Server is currently designed to detect this distributed deadlock. Lesson 4: Information Collection and Analysis This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Identify specific information needed for troubleshooting issues.  Locate and collect information needed for troubleshooting issues.  Analyze output of DBCC Inputbuffer, DBCC PSS, and DBCC Page commands.  Review information collected from master.dbo.sysprocesses table.  Review information collected from master.dbo.syslockinfo table.  Review output of sp_who, sp_who2, sp_lock.  Analyze Profiler log for query usage pattern.  Review output of trace flags to help troubleshoot deadlocks. Recommended Reading Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0 Q271509 - INF: How to Monitor SQL Server 2000 Blocking Q251004 - How to Monitor SQL Server 7.0 Blocking Q224453 - Understanding and Resolving SQL Server 7.0 Blocking Problem Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler Locking and Blocking  Try This: Examine Blocked Processes 1. Open a Query Window and connect to the pubs database. 
Execute the following statements: BEGIN TRAN -- connection 1 UPDATE titles SET price = price + 1 2. Open another connection and execute the following statement: SELECT * FROM titles -- connection 2 3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3) 4. In the same connection, execute the following: SELECT spid, cmd, waittype FROM master..sysprocesses WHERE waittype <> 0 -- connection 3 5. Do not close any of the connections! What was the wait type of the blocked process?  Try This: Look at locks held Assumes all your connections are still open from the previous exercise. • Execute sp_lock -- Connection 3 What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction. Collecting Information See Module 2 for more about how to gather this information using various tools. Recognizing Blocking Problems How to Recognize Blocking Problems  Users complain about poor performance at a certain time of day, or after a certain number of users connect.  SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column.  More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.  Possibl
Contents Overview 1 Lesson 1: Index Concepts 3 Lesson 2: Concepts – Statistics 29 Lesson 3: Concepts – Query Optimization 37 Lesson 4: Information Collection and Analysis 61 Lesson 5: Formulating and Implementing Resolution 75 Module 6: Troubleshooting Query Performance Overview At the end of this module, you will be able to:  Describe the different types of indexes and how indexes can be used to improve performance.  Describe what statistics are used for and how they can help in optimizing query performance.  Describe how queries are optimized.  Analyze the information collected from various tools.  Formulate resolution to query performance problems. Lesson 1: Index Concepts Indexes are the most useful tool for improving query performance. Without a useful index, Microsoft® SQL Server™ must search every row on every page in table to find the rows to return. With a multitable query, SQL Server must sometimes search a table multiple times so each page is scanned much more than once. Having useful indexes speeds up finding individual rows in a table, as well as finding the matching rows needed to join two tables. What You Will Learn After completing this lesson, you will be able to:  Understand the structure of SQL Server indexes.  Describe how SQL Server uses indexes to find rows.  Describe how fillfactor can impact the performance of data retrieval and insertion.  Describe the different types of fragmentation that can occur within an index. Recommended Reading  Chapter 8: “Indexes”, Inside SQL Server 2000 by Kalen Delaney  Chapter 11: “Batches, Stored Procedures and Functions”, Inside SQL Server 2000 by Kalen Delaney Finding Rows without Indexes With No Indexes, A Table Must Be Scanned SQL Server keeps track of which pages belong to a table or index by using IAM pages. If there is no clustered index, there is a sysindexes row for the table with an indid value of 0, and that row will keep track of the address of the first IAM for the table. The IAM is a giant bitmap, and every 1 bit indicates that the corresponding extent belongs to the table. The IAM allows SQL Server to do efficient prefetching of the table’s extents, but every row still must be examined. General Index Structure All SQL Server Indexes Are Organized As B-Trees Indexes in SQL Server store their information using standard B-trees. A B-tree provides fast access to data by searching on a key value of the index. B-trees cluster records with similar keys. The B stands for balanced, and balancing the tree is a core feature of a B-tree’s usefulness. The trees are managed, and branches are grafted as necessary, so that navigating down the tree to find a value and locate a specific record takes only a few page accesses. Because the trees are balanced, finding any record requires about the same amount of resources, and retrieval speed is consistent because the index has the same depth throughout. Clustered and Nonclustered Indexes Both Index Types Have Many Common Features An index consists of a tree with a root from which the navigation begins, possible intermediate index levels, and bottom-level leaf pages. You use the index to find the correct leaf page. The number of levels in an index will vary depending on the number of rows in the table and the size of the key column or columns for the index. If you create an index using a large key, fewer entries will fit on a page, so more pages (and possibly more levels) will be needed for the index. 
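To see how deep a particular index actually is, you can query the index depth directly. The following is a small sketch, assuming the pubs sample database; the index name aunmind (the nonclustered name index on authors) is the conventional pubs name and may differ on your server.
USE pubs
GO
-- IndexDepth returns the number of levels in the index, including the leaf level
SELECT INDEXPROPERTY(OBJECT_ID('authors'), 'aunmind', 'IndexDepth') AS IndexDepth
GO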
On a qualified select, update, or delete, the correct leaf page will be the lowest page of the tree in which one or more rows with the specified key or keys reside. A qualified operation is one that affects only specific rows that satisfy the conditions of a WHERE clause, as opposed to accessing the whole table. An index can have multiple node levels An index page above the leaf is called a node page. Each index row in node pages contains an index key (or set of keys for a composite index) and a pointer to a page at the next level for which the first key value is the same as the key value in the current index row. Leaf Level contains all key values In any index, whether clustered or nonclustered, the leaf level contains every key value, in key sequence. In SQL Server 2000, the sequence can be either ascending or descending. The sysindexes table contains all sizing, location and distribution information Any information about size of indexes or tables is stored in sysindexes. The only source of any storage location information is the sysindexes table, which keeps track of the address of the root page for every index, and the first IAM page for the index or table. There is also a column for the first page of the table, but this is not guaranteed to be reliable. SQL Server can find all pages belonging to an index or table by examining the IAM pages. Sysindexes contains a pointer to the first IAM page, and each IAM page contains a pointer to the next one. The Difference between Clustered and Nonclustered Indexes The main difference between the two types of indexes is how much information is stored at the leaf. The leaf levels of both types of indexes contain all the key values in order, but they also contain other information. Clustered Indexes The Leaf Level of a Clustered Index Is the Data The leaf level of a clustered index contains the data pages, not just the index keys. Another way to say this is that the data itself is part of the clustered index. A clustered index keeps the data in a table ordered around the key. The data pages in the table are kept in a doubly linked list called the page chain. The order of pages in the page chain, and the order of rows on the data pages, is the order of the index key or keys. Deciding which key to cluster on is an important performance consideration. When the index is traversed to the leaf level, the data itself has been retrieved, not simply pointed to. Uniqueness Is Maintained In Key Values In SQL Server 2000, all clustered indexes are unique. If you build a clustered index without specifying the unique keyword, SQL Server forces uniqueness by adding a uniqueifier to the rows when necessary. This uniqueifier is a 4-byte value added as an additional sort key to only the rows that have duplicates of their primary sort key. You can see this extra value if you use DBCC PAGE to look at the actual index rows; see the section on index internals. Finding Rows in a Clustered Index The Leaf Level of a Clustered Index Contains the Data A clustered index is like a telephone directory in which all of the rows for customers with the same last name are clustered together in the same part of the book. Just as the organization of a telephone directory makes it easy for a person to search, SQL Server quickly searches a table with a clustered index. Because a clustered index determines the sequence in which rows are stored in a table, there can only be one clustered index for a table at a time.
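As a companion to the DBCC PAGE reference above, here is a minimal sketch of how such an inspection is typically done. The database name and page number are placeholders; you would first look up a real page address (for example, from the root or first columns of sysindexes) before running DBCC PAGE.
-- Send DBCC PAGE output to the client session instead of the errorlog
DBCC TRACEON (3604)
GO
-- Arguments: database name, file id, page number, print option (3 = rows plus per-row detail)
-- 'credit', file 1 and page 100 below are placeholder values for illustration only
DBCC PAGE ('credit', 1, 100, 3)
GO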
Performance Considerations Keeping your clustered key value small increases the number of index rows that can be placed on an index page and decreases the number of levels that must be traversed. This minimizes I/O. As we'll see, the clustered key is duplicated in every nonclustered index row, so keeping your clustered key small will allow you to have more index rows fit per page in all your indexes. Note The query corresponding to the slide is: SELECT lastname, firstname FROM member WHERE lastname = 'Ota' Nonclustered Indexes The Leaf Level of a Nonclustered Index Contains a Bookmark A nonclustered index is like the index of a textbook. The data is stored in one place and the index is stored in another. Pointers indicate the storage location of the indexed items in the underlying table. In a nonclustered index, the leaf level contains each index key, plus a bookmark that tells SQL Server where to find the data row corresponding to the key in the index. A bookmark can take one of two forms:  If the table has a clustered index, the bookmark is the clustered index key for the corresponding data row. This clustered key can consist of multiple columns if the clustered index is composite, or if it is defined to be non-unique.  If the table is a heap (in other words, it has no clustered index), the bookmark is a RID, which is an actual row locator in the form File#:Page#:Slot#. Finding Rows with a NC Index on a Heap Nonclustered Indexes Are Very Efficient When Searching For A Single Row After the nonclustered key at the leaf level of the index is found, only one more page access is needed to find the data row. Searching for a single row using a nonclustered index is almost as efficient as searching for a single row in a clustered index. However, if we are searching for multiple rows, such as duplicate values, or keys in a range, anything more than a small number of rows will make the nonclustered index search very inefficient. Note The query corresponding to the slide is: SELECT lastname, firstname FROM member WHERE lastname BETWEEN 'Master' AND 'Rudd' Finding Rows with a NC Index on a Clustered Table A Clustered Key Is Used as the Bookmark for All Nonclustered Indexes If the table has a clustered index, all columns of the clustered key will be duplicated in the nonclustered index leaf rows, unless there is overlap between the clustered and nonclustered key. For example, if the clustered index is on (lastname, firstname) and a nonclustered index is on firstname, the firstname value will not be duplicated in the nonclustered index leaf rows. Note The query corresponding to the slide is: SELECT lastname, firstname, phone FROM member WHERE firstname = 'Mike' Covering Indexes A Covering Index Provides the Fastest Data Access A covering index contains ALL the fields accessed in the query. Normally, only the columns in the WHERE clause are helpful in determining useful indexes, but for a covering index, all columns must be included. If all columns needed for the query are in the index, SQL Server never needs to access the data pages. If even one column in the query is not part of the index, the data rows must be accessed. The leaf level of an index is the only level that contains every key value, or set of key values. For a clustered index, the leaf level is the data itself, so in reality, a clustered index ALWAYS covers any query. Nevertheless, for most of our optimization discussions, we only consider nonclustered indexes.
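As a concrete illustration of covering, the following sketch (assuming the credit sample database used elsewhere in this module) builds a nonclustered index that covers the earlier slide query; the index name is hypothetical.
USE credit
GO
-- (lastname, firstname) contains every column the query touches, so the index covers it
CREATE INDEX nc_lastname_firstname ON member (lastname, firstname)
GO
-- SQL Server can satisfy this query entirely from the index leaf pages,
-- without ever reading the data pages of the member table
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'
GO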
Scanning the leaf level of a nonclustered index is almost always faster than scanning a clustered index, so covering indexes are particular valuable when we need ALL the key values of a particular nonclustered index. Example: Select an aggregate value of a column with a clustered index. Suppose we have a nonclustered index on price, this query is covered: SELECT avg(price) from titles Since the clustered key is included in every nonclustered index row, the clustered key can be included in the covering. Suppose you have a nonclustered index on price and a clustered index on title_id; then this query is covered: SELECT title_id, price FROM titles WHERE price between 10 and 20 Performance Considerations In general, you do want to keep your indexes narrow. However, if you have a critical query that just is not giving you satisfactory performance no matter what you do, you should consider creating an index to cover it, or adding one or two extra columns to an existing index, so that the query will be covered. The leaf level of a nonclustered index is like a ‘mini’ clustered index, so you can have most of the benefits of clustering, even if there already is another clustered index on the table. The tradeoff to adding more, wider indexes for covering queries are the added disk space, and more overhead for updating those columns that are now part of the index. Bug In general, SQL Server will detect when a query is covered, and detect the possible covering indexes. However, in some cases, you must force SQL Server to use a covering index by including a WHERE clause, even if the WHERE clause will return ALL the rows in the table. This is SHILOH bug #352079 Steps to reproduce 1. Make copy of orders table from Northwind: USE Northwind CREATE TABLE [NewOrders] ( [OrderID] [int] NOT NULL , [CustomerID] [nchar] (5) NULL , [EmployeeID] [int] NULL , [OrderDate] [datetime] NULL , [RequiredDate] [datetime] NULL , [ShippedDate] [datetime] NULL , [ShipVia] [int] NULL , [Freight] [money] NULL , [ShipName] [nvarchar] (40) NULL, [ShipAddress] [nvarchar] (60) , [ShipCity] [nvarchar] (15) NULL, [ShipRegion] [nvarchar] (15) NULL, [ShipPostalCode] [nvarchar] (10) NULL, [ShipCountry] [nvarchar] (15) NULL ) INSERT into NewOrders SELECT * FROM Orders 2. Build nc index on OrderDate: create index dateindex on neworders(orderdate) 3. Test Query by looking at query plan: select orderdate from NewOrders The index is being scanned, as expected. 4. Build an index on orderId: create index orderid_index on neworders(orderID) 5. Test Query by looking at query plan: select orderdate from NewOrders Now the TABLE is being scanned, instead of the original index! Index Intersection Multiple Indexes Can Be Used On A Single Table In versions prior to SQL Server 7, only one index could be used for any table to process any single query. The only exception was a query involving an OR. In current SQL Server versions, multiple nonclustered indexes can each be accessed, retrieving a set of keys with bookmarks, and then the result sets can be joined on the common bookmarks. The optimizer weighs the cost of performing the unindexed join on the intermediate result sets, with the cost of only using one index, and then scanning the entire result set from that single index. Fillfactor and Performance Creating an Index with a Low Fillfactor Delays Page Splits when Inserting DBCC SHOWCONTIG will show you a low value for “Avg. Page Density” when a low fillfactor has been specified. 
This is good for inserts and updates, because it will delay the need to split pages to make room for new rows. It can be bad for scans, because fewer rows will be on each page, and more pages must be read to access the same amount of data. However, this cost will be minimal if the scan density value is good. Index Reorganization DBCC SHOWCONTIG Provides Lots of Information Here’s some sample output from running a basic DBCC SHOWCONTIG on the order details table in the Northwind database: DBCC SHOWCONTIG scanning 'Order Details' table... Table: 'Order Details' (325576198); index ID: 1, database ID:6 TABLE level scan performed. - Pages Scanned................................: 9 - Extents Scanned..............................: 6 - Extent Switches..............................: 5 - Avg. Pages per Extent........................: 1.5 - Scan Density [Best Count:Actual Count].......: 33.33% [2:6] - Logical Scan Fragmentation ..................: 0.00% - Extent Scan Fragmentation ...................: 16.67% - Avg. Bytes Free per Page.....................: 673.2 - Avg. Page Density (full).....................: 91.68% By default, DBCC SHOWCONTIG scans the page chain at the leaf level of the specified index and keeps track of the following values:  Average number of bytes free on each page (Avg. Bytes Free per Page)  Number of pages accessed (Pages scanned)  Number of extents accessed (Extents scanned)  Number of times a page had a lower page number than the previous page in the scan (This value for Out of order pages is not displayed, but is used for additional computations.)  Number of times a page in the scan was on a different extent than the previous page in the scan (Extent switches) SQL Server also keeps track of all the extents that have been accessed, and then it determines how many gaps are in the used extents. An extent is identified by the page number of its first page. So, if extents 8, 16, 24, 32, and 40 make up an index, there are no gaps. If the extents are 8, 16, 24, and 40, there is one gap. The value in DBCC SHOWCONTIG’s output called Extent Scan Fragmentation is computed by dividing the number of gaps by the number of extents, so in this example the Extent Scan Fragmentation is ¼, or 25 percent. A table using extents 8, 24, 40, and 56 has three gaps, and its Extent Scan Fragmentation is ¾, or 75 percent. The maximum number of gaps is the number of extents - 1, so Extent Scan Fragmentation can never be 100 percent. The value in DBCC SHOWCONTIG’s output called Logical Scan Fragmentation is computed by dividing the number of Out of order pages by the number of pages in the table. This value is meaningless in a heap. You can use either the Extent Scan Fragmentation value or the Logical Scan Fragmentation value to determine the general level of fragmentation in a table. The lower the value, the less fragmentation there is. Alternatively, you can use the value called Scan Density, which is computed by dividing the optimum number of extent switches by the actual number of extent switches. A high value means that there is little fragmentation. Scan Density is not valid if the table spans multiple files; therefore, it is less useful than the other values. SQL Server 2000 allows online defragmentation You can choose from several methods for removing fragmentation from an index. You could rebuild the index and have SQL Server allocate all new contiguous pages for you. 
To rebuild the index, you can use a simple DROP INDEX and CREATE INDEX combination, but in many cases using these commands is less than optimal. In particular, if the index is supporting a constraint, you cannot use the DROP INDEX command. Alternatively, you can use DBCC DBREINDEX, which can rebuild all the indexes on a table in one operation, or you can use the drop_existing clause along with CREATE INDEX. The drawback of these methods is that the table is unavailable while SQL Server is rebuilding the index. When you are rebuilding only nonclustered indexes, SQL Server takes a shared lock on the table, which means that users cannot make modifications, but other processes can SELECT from the table. Of course, those SELECT queries cannot take advantage of the index you are rebuilding, so they might not perform as well as they would otherwise. If you are rebuilding a clustered index, SQL Server takes an exclusive lock and does not allow access to the table, so your data is temporarily unavailable. SQL Server 2000 lets you defragment an index without completely rebuilding it. DBCC INDEXDEFRAG reorders the leaf-level pages into physical order as well as logical order, but using only the pages that are already allocated to the leaf level. This command does an in-place ordering, which is similar to a sorting technique called bubble sort (you might be familiar with this technique if you've studied and compared various sorting algorithms). In-place ordering can reduce logical fragmentation to 2 percent or less, making an ordered scan through the leaf level much faster. DBCC INDEXDEFRAG also compacts the pages of an index, based on the original fillfactor. The pages will not always end up with the original fillfactor, but SQL Server uses that value as a goal. The defragmentation process attempts to leave at least enough space for one average-size row on each page. In addition, if SQL Server cannot obtain a lock on a page during the compaction phase of DBCC INDEXDEFRAG, it skips the page and does not return to it. Any empty pages created as a result of compaction are removed. The algorithm SQL Server 2000 uses for DBCC INDEXDEFRAG finds the next physical page in a file belonging to the index's leaf level and the next logical page in the leaf level to swap it with. To find the next physical page, the algorithm scans the IAM pages belonging to that index. In a database spanning multiple files, in which a table or index has pages on more than one file, SQL Server handles pages on different files separately. SQL Server finds the next logical page by scanning the index's leaf level. After each page move, SQL Server drops all locks and saves the last key on the last page it moved. The next iteration of the algorithm uses the last key to find the next logical page. This process lets other users update the table and index while DBCC INDEXDEFRAG is running. Let us look at an example in which an index's leaf level consists of the following pages in the following logical order: 47 22 83 32 12 90 64 The first key is on page 47, and the last key is on page 64. SQL Server would have to scan the pages in this order to retrieve the data in sorted order. As its first step, DBCC INDEXDEFRAG would find the first physical page, 12, and the first logical page, 47. It would then swap the pages, using a temporary buffer as a holding area. After the first swap, the leaf level would look like this: 12 22 83 32 47 90 64 The next physical page is 22, which is also the next logical page, so no work would be necessary. 
DBCC INDEXDEFRAG would then swap the next physical page, 32, with the next logical page, 83: 12 22 32 83 47 90 64 After the next swap of 47 with 83, the leaf level would look like this: 12 22 32 47 83 90 64 Then, the defragmentation process would swap 64 with 83: 12 22 32 47 64 90 83 and 83 with 90: 12 22 32 47 64 83 90 At the end of the DBCC INDEXDEFRAG operation, the pages in the table or index are not contiguous, but their logical order matches their physical order. Now, if the pages were accessed from disk in sorted order, the head would need to move in only one direction. Keep in mind that DBCC INDEXDEFRAG uses only pages that are already part of the index's leaf level; it allocates no new pages. In addition, defragmenting a large table can take quite a while, and you will get a report every 5 minutes about the estimated percentage completed. However, except for the locks on the pages being switched, this command needs no additional locks. All the table's other pages and indexes are fully available for your applications to use during the defragmentation process. If you must completely rebuild an index because you want a new fillfactor, or if simple defragmentation is not enough because you want to remove all fragmentation from your indexes, another SQL Server 2000 improvement makes index rebuilding less of an imposition on the rest of the system. SQL Server 2000 lets you create an index in parallel—that is, using multiple processors—which drastically reduces the time necessary to perform the rebuild. The algorithm SQL Server 2000 uses, allows near-linear scaling with the number of processors you use for the rebuild, so four processors will take only one-fourth the time that one processor requires to rebuild an index. System availability increases because the length of time that a table is unavailable decreases. Note that only the SQL Server 2000 Enterprise Edition supports parallel index creation. Indexes on Views and Computed Columns Building an Index Gives the Data Physical Existence Normally, views are only logical and the rows comprising the view’s data are not generated until the view is accessed. The values for computed columns are typically not stored anywhere in the database; only the definition for the computation is stored and the computation is redone every time a computed column is accessed. The first index on a view must be a clustered index, so that the leaf level can hold all the actual rows that make up the view. Once that clustered index has been build, and the view’s data is now physical, additional (nonclustered) indexes can be built. An index on a computed column can be nonclustered, because all we need to store is the index key values. Common Prerequisites for Indexed Views and Indexes on Computed Columns In order for SQL Server to create use these special indexes, you must have the seven SET options correctly specified: ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_PADDING, ANSI_WARNING must be all ON NUMERIC_ROUNDABORT must be OFF Only deterministic expressions can be used in the definition of Indexed Views or indexes on Computed Columns. See the BOL for the list of deterministic functions and expressions. Property functions are available to check if a column or view meets the requirements and is indexable. 
SELECT OBJECTPROPERTY (Object_id, 'IsIndexable') SELECT COLUMNPROPERTY (Object_id, column_name , 'IsIndexable' ) Schema Binding Guarantees That Object Definition Won't Change A view can only be indexed if it has been built with schema binding. The SQL Server Optimizer Determines If the Indexed View Can Be Used The query must request a subset of the data contained in the view. The ability of the optimizer to use the indexed view even if the view is not directly referenced is available only in SQL Server 2000 Enterprise Edition. In Standard edition, you can create indexed views, and you can select directly from them, but the optimizer will not choose to use them if they are not directly referenced. Examples of Indexed Views: The best candidates for improvement by indexed views are queries performing aggregations and joins. We will explain how useful indexed views may be created for these two major groups of queries. The considerations are valid also for queries and indexed views using both joins and aggregations. -- Example: USE Northwind -- Identify 5 products with overall biggest discount total. -- This may be expressed for example by two different queries: -- Q1. select TOP 5 ProductID, SUM(UnitPrice*Quantity)- SUM(UnitPrice*Quantity*(1.00-Discount)) Rebate from [order details] group by ProductID order by Rebate desc --Q2. select TOP 5 ProductID, SUM(UnitPrice*Quantity*Discount) Rebate from [order details] group by ProductID order by Rebate desc --The following indexed view will be used to execute Q1. create view Vdiscount1 with schemabinding as select SUM(UnitPrice*Quantity) SumPrice, SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice, COUNT_BIG(*) Count, ProductID from dbo.[order details] group By ProductID create unique clustered index VDiscountInd on Vdiscount1 (ProductID) However, it will not be used by Q2 because the indexed view does not contain the SUM(UnitPrice*Quantity*Discount) aggregate. We can construct another indexed view create view Vdiscount2 with schemabinding as select SUM(UnitPrice*Quantity) SumPrice, SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice, SUM(UnitPrice*Quantity*Discount) SumDiscountPrice2, COUNT_BIG(*) Count, ProductID from dbo.[order details] group By ProductID create unique clustered index VDiscountInd on Vdiscount2 (ProductID) This view may be used by both Q1 and Q2. Observe that the indexed view Vdiscount2 will have the same number of rows and only one more column compared to Vdiscount1, and it may be used by more queries. In general, try to design indexed views that may be used by more queries. The following query asks for the orders with the largest total discount -- Q3. select TOP 3 OrderID, SUM(UnitPrice*Quantity*Discount) OrderRebate from dbo.[order details] group By OrderID Q3 can use neither of the Vdiscount views because the column OrderID is not included in the view definition. To address this variation of the discount analysis query we may create a different indexed view, similar to the query itself (a sketch of one such view appears below). An attempt to generalize the previous indexed view Vdiscount2 so that all three queries Q1, Q2, and Q3 can take advantage of a single indexed view would require a view with both OrderID and ProductID as grouping columns. Because the OrderID, ProductID combination is unique in the original order details table, the resulting view would have as many rows as the original table and we would see no savings in using such a view compared to using the original table. Consider the size of the resulting indexed view.
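For reference, the OrderID-level indexed view mentioned above might look like the following. This is a sketch only; the view and index names are illustrative and are not part of the Northwind sample.
-- Sketch: an indexed view shaped like Q3, grouping by OrderID
create view Vdiscount3 with schemabinding as
select SUM(UnitPrice*Quantity*Discount) OrderRebate, COUNT_BIG(*) Count, OrderID
from dbo.[order details]
group by OrderID
go
create unique clustered index VDiscount3Ind on Vdiscount3 (OrderID)
go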
In the case of pure aggregation, the indexed view may provide no significant performance gains if its size is close to the size of the original table. Complex aggregates (STDEV, VARIANCE, AVG) cannot participate in the index view definition. However, SQL Server may use an indexed view to execute a query containing AVG aggregate. Query containing STDEV or VARIANCE cannot use indexed view to pre-compute these values. The next example shows a query producing the average price for a particular product -- Q4. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID group by ProductName, od.ProductID This is an example of indexed view that will be considered by the SQL Server to answer the Q4 create view v3 with schemabinding as select od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) Price, COUNT_BIG(*) Count, SUM(od.Quantity) Units from dbo.[order details] od group by od.ProductID go create UNIQUE CLUSTERED index iv3 on v3 (ProductID) go Observe that the view definition does not contain the table Products. The indexed view does not need to contain all tables used in the query that uses the indexed view. In addition, the following query (same as above Q4 only with one additional search condition) will use the same indexed view. Observe that the added predicate references only columns from tables not present in the v3 view definition. -- Q5. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and p.ProductName like '%tofu%' group by ProductName, od.ProductID The following query cannot use the indexed view because the added search condition od.UnitPrice>10 contains a column from the table in the view definition and the column is neither grouping column nor the predicate appears in the view definition. -- Q6. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and od.UnitPrice>10 group by ProductName, od.ProductID To contrast the Q6 case, the following query will use the indexed view v3 since the added predicate is on the grouping column of the view v3. -- Q7. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and od.ProductID in (1,2,13,41) group by ProductName, od.ProductID -- The previous query Q6 will use the following indexed view V4: create view V4 with schemabinding as select ProductName, od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units, COUNT_BIG(*) Count from dbo.[order details] od, dbo.Products p where od.ProductID=p.ProductID and od.UnitPrice>10 group by ProductName, od.ProductID create unique clustered index VDiscountInd on V4 (ProductName, ProductID) The same index on the view V4 will be used also for a query where a join to the table Orders is added, for example -- Q8. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10 group by ProductName, od.ProductID We will show several modifications of the query Q8 and explain why such modifications cannot use the above view V4. -- Q8a. 
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>25 group by ProductName, od.ProductID Q8a cannot use the indexed view because of the where clause mismatch. Observe that table Orders does not participate in the indexed view V4 definition. In spite of that, adding a predicate on this table will disallow using the indexed view because the added predicate may eliminate additional rows participating in the aggregates, as is shown in Q8b. -- Q8b. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10 and o.OrderDate>'01/01/1998' group by ProductName, od.ProductID Locking and Indexes In General, You Should Let SQL Server Control the Locking within Indexes The stored procedure sp_indexoption lets you manually control the unit of locking within an index. It also lets you disallow page locks or row locks within an index. Since these options are available only for indexes, there is no way to control the locking within the data pages of a heap. (But remember that if a table has a clustered index, the data pages are part of the index and are affected by the sp_indexoption setting.) The index options are set for each table or index individually. Two options, AllowRowLocks and AllowPageLocks, are both set to TRUE initially for every table and index. If both of these options are set to FALSE for a table, only full table locks are allowed. As described in Module 4, SQL Server determines at runtime whether to initially lock rows, pages, or the entire table. The locking of rows (or keys) is heavily favored. The type of locking chosen is based on the number of rows and pages to be scanned, the number of rows on a page, the isolation level in effect, the update activity going on, the number of users on the system needing memory for their own purposes, and so on. SAP databases frequently use sp_indexoption to reduce deadlocks. Setting vs. Querying In SQL Server 2000, the procedure sp_indexoption should only be used for setting an index option. To query an option, use the INDEXPROPERTY function. Lesson 2: Concepts – Statistics Statistics are the most important tool that the SQL Server query optimizer has to determine the ideal execution plan for a query. Statistics that are out of date or nonexistent seriously jeopardize query performance. SQL Server 2000 computes and stores statistics in a completely different format than all earlier versions of SQL Server. One of the improvements is an increased ability to determine which values are out of the normal range in terms of the number of occurrences. The new statistics maintenance routines are particularly good at determining when a key value has a very unusual skew of data. What You Will Learn After completing this lesson, you will be able to:  Define terms related to statistics collected by SQL Server.  Describe how statistics are maintained by SQL Server.  Discuss the autostats feature of SQL Server.  Describe how statistics are used in query optimization. Recommended Reading  Statistics Used by the Query Optimizer in Microsoft SQL Server 2000 http://msdn.microsoft.com/library/techart/statquery.htm Definitions Cardinality The cardinality means how many unique values exist in the data.
Density For each index and set of column statistics, SQL Server keeps track of details about the uniqueness (or density) of the data values encountered, which provides a measure of how selective the index is. A unique index, of course, has the lowest density —by definition, each index entry can point to only one row. A unique index has a density value of 1/number of rows in the table. Density values range from 0 through 1. Highly selective indexes have density values of 0.10 or lower. For example, a unique index on a table with 8345 rows has a density of 0.00012 (1/8345). If a nonunique nonclustered index has a density of 0.2165 on the same table, each index key can be expected to point to about 1807 rows (0.2165 × 8345). This is probably not selective enough to be more efficient than just scanning the table, so this index is probably not useful. Because driving the query from a nonclustered index means that the pages must be retrieved in index order, an estimated 1807 data page accesses (or logical reads) are needed if there is no clustered index on the table and the leaf level of the index contains the actual RID of the desired data row. The only time a data page doesn’t need to be reaccessed is when the occasional coincidence occurs in which two adjacent index entries happen to point to the same data page. In general, you can think of density as the average number of duplicates. We can also talk about the term ‘join density’, which applies to the average number of duplicates in the foreign key column. This would answer the question: in this one-to-many relationship, how many is ‘many’? Selectivity In general selectivity applies to a particular data value referenced in a WHERE clause. High selectivity means that only a small percentage of the rows satisfy the WHERE clause filter, and a low selectivity means that many rows will satisfy the filter. For example, in an employees table, the column employee_id is probably very selective, and the column gender is probably not very selective at all. Statistics Statistics are a histogram consisting of an even sampling of values for a column or for an index key (or the first column of the key for a composite index) based on the current data. The histogram is stored in the statblob field of the sysindexes table, which is of type image. (Remember that image data is actually stored in structures separate from the data row itself. The data row merely contains a pointer to the image data. For simplicity’s sake, we’ll talk about the index statistics as being stored in the image field called statblob.) To fully estimate the usefulness of an index, the optimizer also needs to know the number of pages in the table or index; this information is stored in the dpages column of sysindexes. During the second phase of query optimization, index selection, the query optimizer determines whether an index exists for a columns in your WHERE clause, assesses the index’s usefulness by determining the selectivity of the clause (that is, how many rows will be returned), and estimates the cost of finding the qualifying rows. Statistics for a single column index consist of one histogram and one density value. The multicolumn statistics for one set of columns in a composite index consist of one histogram for the first column in the index and density values for each prefix combination of columns (including the first column alone). The fact that density information is kept for all columns helps the optimizer decide how useful the index is for joins. 
Suppose, for example, that an index is composed of three key fields. The density on the first column might be 0.50, which is not too useful. However, as you look at more key columns in the index, the number of rows pointed to is fewer than (or in the worst case, the same as) the first column, so the density value goes down. If you are looking at both the first and second columns, the density might be 0.25, which is somewhat better. Moreover, if you examine three columns, the density might be 0.03, which is highly selective. It does not make sense to refer to the density of only the second column. The lead column density is always needed. Statistics Maintenance Statistics Information Tracks the Distribution of Key Values SQL Server statistics are basically a histogram that contains up to 200 values of a given key column. In addition to the histogram, the statblob field contains the following information:  The time of the last statistics collection  The number of rows used to produce the histogram and density information  The average key length  Densities for other combinations of columns In the statblob column, up to 200 sample values are stored; the range of key values between each sample value is called a step. The sample value is the endpoint of the range. Three values are stored along with each step: a value called EQ_ROWS, which is the number of rows that have a value equal to that sample value; a value called RANGE_ROWS, which specifies how many other values are inside the range (between two adjacent sample values); and the number of distinct values, or RANGE_DENSITY of the range. DBCC SHOW_STATISTICS The DBCC SHOW_STATISTICS output shows us the first two of these three values, but not the range density. The RANGE_DENSITY is instead used to compute two additional values:  DISTINCT_RANGE_ROWS—the number of distinct rows inside this range (not counting the RANGE_HI_KEY value itself). This is computed as 1/RANGE_DENSITY.  AVG_RANGE_ROWS—the average number of rows per distinct value, computed as RANGE_DENSITY * RANGE_ROWS. In addition to statistics on indexes, SQL Server can also keep track of statistics on columns with no indexes. Knowing the density, or the likelihood of a particular value occurring, can help the optimizer determine an optimum processing strategy, even if SQL Server can't use an index to actually locate the values. Statistics on Columns Column statistics can be useful for two main purposes:  When the SQL Server optimizer is determining the optimal join order, it frequently is best to have the smaller input processed first. By 'input' we mean the table after all filters in the WHERE clause have been applied. Even if there is no useful index on a column in the WHERE clause, statistics could tell us that only a few rows will qualify, so the resulting input will be very small.  The SQL Server query optimizer can use column statistics on non-initial columns in a composite nonclustered index to determine if scanning the leaf level to obtain the bookmarks will be an efficient processing strategy. For example, in the member table in the credit database, the first name column is almost unique. Suppose we have a nonclustered index on (lastname, firstname), and we issue this query: select * from member where firstname = 'MPRO' In this case, statistics on the firstname column would indicate very few rows satisfying this condition, so the optimizer will choose to scan the nonclustered index, since it is smaller than the clustered index (the table).
The small number of bookmarks will then be followed to retrieve the actual data. Manually Updating Statistics You can also manually force statistics to be updated in one of two ways. You can run the UPDATE STATISTICS command on a table or on one specific index or column statistics, or you can also execute the procedure sp_updatestats, which runs UPDATE STATISTICS against all user-defined tables in the current database. You can create statistics on unindexed columns using the CREATE STATISTICS command or by executing sp_createstats, which creates single-column statistics for all eligible columns for all user tables in the current database. This includes all columns except computed columns and columns of the ntext, text, or image datatypes, and columns that already have statistics or are the first column of an index. Autostats By Default SQL Server Will Update Statistics on Any Index or Column as Needed Every database is created with the database options auto create statistics and auto update statistics set to true, but you can turn either one off. You can also turn off automatic updating of statistics for a specific table in one of two ways:  UPDATE STATISTICS In addition to updating the statistics, the option WITH NORECOMPUTE indicates that the statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option enables automatic updates.  sp_autostats This procedure sets or unsets a flag for a table to indicate that statistics should or should not be updated automatically. You can also use this procedure with only the table name to find out whether the table is set to automatically have its index statistics updated. ' However, setting the database option auto update statistics to FALSE overrides any individual table settings. In other words, no automatic updating of statistics takes place. This is not a recommended practice unless thorough testing has shown you that you do not need the automatic updates or that the performance overhead is more than you can afford. Trace Flags Trace flag 205 – reports recompile due to autostats. Trace flag 8721 – writes information to the errorlog when AutoStats has been run. For more information, see the following Knowledge Base article: Q195565 “INF: How SQL Server 7.0 Autostats Work.” Statistics and Performance The Performance Penalty of NOT Having Up-To-Date Statistics Far Outweighs the Benefit of Avoiding Automatic Updating Autostats should be turned off only after thorough testing shows it to be necessary. Because autostats only forces a recompile after a certain number or percentage of rows has been changed, you do not have to make any adjustments for a read-only database. Lesson 3: Concepts – Query Optimization What You Will Learn After completing this lesson, you will be able to:  Describe the phases of query optimization.  Discuss how SQL Server estimates the selectivity of indexes and column and how this estimate is used in query optimization. Recommended Reading  Chapter 15: “The Query Processor”, Inside SQL Server 2000 by Kalen Delaney  Chapter 16: “Query Tuning”, Inside SQL Server 2000 by Kalen Delaney  Whitepaper about SQL Server Query Processor Architecture by Hal Berenson and Kalen Delaney http://msdn.microsoft.com/library/backgrnd/html/sqlquerproc.htm Phases of Query Optimization Query Optimization Involves several phases Trivial Plan Optimization Optimization itself goes through several steps. The first step is something called Trivial Plan Optimization. 
The whole idea of trivial plan optimization is that cost based optimization is a bit expensive to run. The optimizer can try a great many possible variations trying to find the cheapest plan. If SQL Server knows that there is only one really viable plan for a query, it could avoid a lot of work. A prime example is a query that consists of an INSERT with a VALUES clause. There is only one possible plan. Another example is a SELECT where all the columns are in a unique covering index, and that index is the only one that is useable. There is no other index that has that set of columns in it. These two examples are cases where SQL Server should just generate the plan and not try to find something better. The trivial plan optimizer finds the really obvious plans, which are typically very inexpensive. In fact, all the plans that get through the autoparameterization template result in plans that the trivial plan optimizer can find. Between those two mechanisms, the plans that are simple tend to be weeded out earlier in the process and do not pay a lot of the compilation cost. This is a good thing, because the number of potential plans in 7.0 went up astronomically as SQL Server added hash joins, merge joins and index intersections, to its list of processing techniques. Simplification and Statistics Loading If a plan is not found by the trivial plan optimizer, SQL Server can perform some simplifications, usually thought of as syntactic transformations of the query itself, looking for commutative properties and operations that can be rearranged. SQL Server can do constant folding, and other operations that do not require looking at the cost or analyzing what indexes are, but that can result in a more efficient query. SQL Server then loads up the metadata including the statistics information on the indexes, and then the optimizer goes through a series of phases of cost based optimization. Cost Based Optimization Phases The cost based optimizer is designed as a set of transformation rules that try various permutations of indexes and join strategies. Because of the number of potential plans in SQL Server 7.0 and SQL Server 2000, if the optimizer just ran through all the combinations and produced a plan, the optimization process would take a very long time to run. Therefore, optimization is broken up into phases. Each phase is a set of rules. After each phase is run, the cost of any resulting plan is examined, and if SQL Server determines that the plan is cheap enough, that plan is kept and executed. If the plan is not cheap enough, the optimizer runs the next phase, which is another set of rules. In the vast majority of cases, a good plan will be found in the preliminary phases. Typically, if the plan that a query would have had in SQL Server 6.5 is also the optimal plan in SQL Server 7.0 and SQL Server 2000, the plan will tend to be found either by the trivial plan optimizer or by the first phase of the cost based optimizer. The rules were intentionally organized to try to make that be true. The plan will probably consist of using a single index and using nested loops. However, every once in a while, because of lack of statistical information, or some other nuance, the optimizer will have to proceed with the later phases of optimization. Sometimes this is because there is a real possibility that the optimizer could find a better plan. When a plan is found, it becomes the optimizer’s output, and then SQL Server goes through all the caching mechanisms that we have already discussed in Module 5. 
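To inspect the plan that comes out of this process, you can ask SQL Server to display the estimated execution plan without running the query. A minimal sketch, assuming the pubs sample database:
USE pubs
GO
SET SHOWPLAN_TEXT ON
GO
-- The batch below is compiled and its plan is returned, but it is not executed
SELECT au_lname, au_fname FROM authors WHERE au_lname = 'Ringer'
GO
SET SHOWPLAN_TEXT OFF
GO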
Full Optimization At some point, the optimizer determines that it has gone through enough preliminary phases, and it reverts to a phase called full optimization. If the optimizer goes through all the preliminary phases, and still has not found a cheap plan, it examines the cost for the plan that it has so far. If the cost is above the threshold, the optimizer goes into a phase called full optimization. This threshold is configurable, as the configuration option 'cost threshold for parallelism'. The full optimization phase assumes that this plan should be run in parallel. If the machine is very busy, the plan will end up running in serial, but the optimizer's goal is to produce a good parallel plan. If the cost is below the threshold (or on a single-processor machine), the full optimization phase just uses a brute force method to find a serial plan. Selectivity Estimation Selectivity Is One of The Most Important Pieces of Information One of the most important things the optimizer needs to know is the number of rows from any table that will meet all the conditions in the query. If there are no restrictions on a table, and all the rows will be needed, the optimizer can determine the number of rows from the sysindexes table. This number is not absolutely guaranteed to be accurate, but it is the number the optimizer uses. If there is a filter on the table in a WHERE clause, the optimizer needs statistics information. Indexes automatically maintain statistics, and the optimizer will use these values to determine the usefulness of the index. If there is no index on the column involved in the filter, then column statistics can be used or generated. Optimizing Search Arguments In General, the Filters in the WHERE Clause Determine Which Indexes Will Be Useful If an indexed column is referenced in a Search Argument (SARG), the optimizer will analyze the cost of using that index. A SARG has the form: column operator value or value operator column, where the operator must be one of =, >, >=, <, <=. The value can be a constant, an operation, or a variable. Some functions also will be treated as SARGs. These queries have SARGs, and a nonclustered index on firstname will be used in most cases: select * from member where firstname < 'AKKG' select * from member where firstname = substring('HAAKGALSFJA', 2,5) select * from member where firstname = 'AA' + 'KG' declare @name char(4) set @name = 'AKKG' select * from member where firstname < @name Not all functions can be used in SARGs. select * from charge where charge_amt < 2*2 select * from charge where charge_amt < sqrt(16) Compare these queries to ones using = instead of <. With =, the optimizer can use the density information to come up with a good row estimate, even if it's not going to actually perform the function's calculations. A filter with a variable is usually a SARG The issue is, can the optimizer come up with useful costing information?
A filter with a variable is not a SARG if the variable is of a different datatype, and the column must be converted to the variable's datatype. For more information, see the following Knowledge Base article: Q198625 Use credit go CREATE TABLE [member2] ( [member_no] [smallint] NOT NULL , [lastname] [shortstring] NOT NULL , [firstname] [shortstring] NOT NULL , [middleinitial] [letter] NULL , [street] [shortstring] NOT NULL , [city] [shortstring] NOT NULL , [state_prov] [statecode] NOT NULL , [country] [countrycode] NOT NULL , [mail_code] [mailcode] NOT NULL ) GO insert into member2 select member_no, lastname, firstname, middleinitial, street, city, state_prov, country, mail_code from member alter table member2 add constraint pk_member2 primary key clustered (lastname, member_no, firstname, country) declare @id int set @id = 47 update member2 set city = city + ' City', state_prov = state_prov + ' State' where lastname = 'Barr' and member_no = @id and firstname = 'URQYJBFVRRPWKVW' and country = 'USA' These queries don't have SARGs, and a table scan will be done: select * from member where substring(lastname, 1,2) = 'BA' Some non-SARGs can be converted: select * from member where lastname like 'ba%' In some cases, you can rewrite your query to turn a non-SARG into a SARG; for example, you can rewrite the substring query above as the LIKE query that follows it. Join Order and Types of Joins Join Order and Strategy Is Determined By the Optimizer The execution plan output will display the join order from top to bottom; i.e., the table listed on top is the first one accessed in a join. You can override the optimizer's join order decision in two ways:  OPTION (FORCE ORDER) applies to one query  SET FORCEPLAN ON applies to entire session, until set OFF If either of these options is used, the join order is determined by the order the tables are listed in the query's FROM clause, and no optimization of the join order is done. Forcing the JOIN order may force a particular join strategy. For example, in most outer join operations, the outer table is processed first, and a nested loops join is done. However, if you force the inner table to be accessed first, a merge join will need to be done. Compare the query plan for this query with and without the FORCE ORDER hint: select * from titles right join publishers on titles.pub_id = publishers.pub_id -- OPTION (FORCE ORDER) Nested Loop Join A nested iteration is when the query optimizer constructs a set of nested loops, and the result set grows as it progresses through the rows. The query optimizer performs the following steps. 1. Finds a row from the first table. 2. Uses that row to scan the next table. 3. Uses the result of the previous table to scan the next table. Evaluating Join Combinations The query optimizer automatically evaluates at least four or more possible join combinations, even if those combinations are not specified in the join predicate. You do not have to add redundant clauses. The query optimizer balances the cost and uses statistics to determine the number of join combinations that it evaluates. Evaluating every possible join combination is inefficient and costly. Evaluating Cost of Query Performance When the query optimizer performs a nested join, you should be aware that certain costs are incurred. Nested loop joins are far superior to both merge joins and hash joins when executing small transactions, such as those affecting only a small set of rows.
The query optimizer:
- Uses nested loop joins if the outer input is quite small and the inner input is indexed and quite large.
- Uses the smaller input as the outer table.
- Requires that a useful index exist on the join predicate for the inner table.
- Always uses a nested loop join strategy if the join operation uses an operator other than an equality operator.

Merge Joins
The columns of the join conditions are used as inputs to process a merge join. SQL Server performs the following steps when using a merge join strategy:
1. Gets the first input values from each input set.
2. Compares input values.
3. Performs a merge algorithm.
   - If the input values are equal, the rows are returned.
   - If the input values are not equal, the lower value is discarded, and the next input value from that input is used for the next comparison.
4. Repeats the process until all of the rows from one of the input sets have been processed.
5. Evaluates any remaining search conditions in the query and returns only rows that qualify.

Note: Only one pass per input is done. The merge join operation ends after all of the input values of one input have been evaluated. The remaining values from the other input are not processed.

Requires That Joined Columns Are Sorted
If you execute a query with join operations, and the joined columns are in sorted order, the query optimizer processes the query by using a merge join strategy. A merge join is very efficient because the columns are already sorted, and it requires fewer page I/O.

Evaluates Sorted Values
For the query optimizer to use the merge join, the inputs must be sorted. The query optimizer evaluates sorted values in the following order:
1. Uses an existing index tree (most typical). The query optimizer can use the index tree from a clustered index or a covered nonclustered index.
2. Leverages sort operations that the GROUP BY, ORDER BY, and CUBE clauses use. The sorting operation only has to be performed once.
3. Performs its own sort operation in which a SORT operator is displayed when graphically viewing the execution plan. The query optimizer does this very rarely.

Performance Considerations
Consider the following facts about the query optimizer's use of the merge join:
- SQL Server performs a merge join for all types of join operations (except cross join or full join operations), including UNION operations.
- A merge join operation may be a one-to-one, one-to-many, or many-to-many operation. If the merge join is a many-to-many operation, SQL Server uses a temporary table to store the rows. If duplicate values from each input exist, one of the inputs rewinds to the start of the duplicates as each duplicate value from the other input is processed.
- Query performance for a merge join is very fast, but the cost can be high if the query optimizer must perform its own sort operation. If the data volume is large and the desired data can be obtained presorted from existing Balanced-Tree (B-Tree) indexes, merge join is often the fastest join algorithm.
- A merge join is typically used if the two join inputs have a large amount of data and are sorted on their join columns (for example, if the join inputs were obtained by scanning sorted indexes).
- Merge join operations can only be performed with an equality operator in the join predicate.

Hashing is a strategy for dividing data into equal sets of a manageable size based on a given property or characteristic. The grouped data can then be used to determine whether a particular data item matches an existing value.
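Both of these strategies can be observed directly by forcing them with the standard T-SQL join hints and comparing the resulting plans; a small sketch against the credit sample tables (whether either hint is actually a good choice here depends on the indexes present):

Use credit
go
-- Force a merge join; if the inputs are not already sorted on member_no,
-- the plan will show explicit SORT operators feeding the merge
select m.member_no, c.charge_amt
from member m inner merge join charge c on m.member_no = c.member_no
go
-- Force a hash join on the same inputs for comparison
select m.member_no, c.charge_amt
from member m inner hash join charge c on m.member_no = c.member_no
go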
Note: Duplicate data or ranges of data are not useful for hash joins because the data is not organized together or in order.

When a Hash Join Is Used
The query optimizer uses a hash join option when it estimates that it is more efficient than processing queries by using a nested loop or merge join. It typically uses a hash join when an index does not exist or when existing indexes are not useful.

Assigns a Build and Probe Input
The query optimizer assigns a build and probe input. If the query optimizer incorrectly assigns the build and probe input (this may occur because of imprecise density estimates), it reverses them dynamically. The ability to change input roles dynamically is called role reversal.
Build input consists of the column values from a table with the lowest number of rows. Build input creates a hash table in memory to store these values. The hash bucket is a storage place in the hash table in which each row of the build input is inserted. Rows from one of the join tables are placed into the hash bucket where the hash key value of the row matches the hash key value of the bucket. Hash buckets are stored as a linked list and only contain the columns that are needed for the query. A hash table contains hash buckets. The hash table is created from the build input.
Probe input consists of the column values from the table with the most rows. Probe input is what the build input checks to find a match in the hash buckets.

Note: The query optimizer uses column or index statistics to help determine which input is the smaller of the two.

Processing a Hash Join
The following list is a simplified description of how the query optimizer processes a hash join. It is not intended to be comprehensive because the algorithm is very complex. SQL Server:
1. Reads the probe input. Each probe input is processed one row at a time.
2. Performs the hash algorithm against each probe input and generates a hash key value.
3. Finds the hash bucket that matches the hash key value.
4. Accesses the hash bucket and looks for the matching row.
5. Returns the row if a match is found.

Performance Considerations
Consider the following facts about the hash joins that the query optimizer uses:
- Similar to merge joins, a hash join is very efficient, because it uses hash buckets, which are like a dynamic index but with less overhead for combining rows.
- Hash joins can be performed for all types of join operations (except cross join operations), including UNION and DIFFERENCE operations.
- A hash operator can remove duplicates and group data, such as SUM(salary) GROUP BY department (see the sketch after this list). The query optimizer uses only one input for both the build and probe roles.
- If join inputs are large and are of similar size, the performance of a hash join operation is similar to a merge join with prior sorting. However, if the size of the join inputs is significantly different, the performance of a hash join is often much faster.
- Hash joins can process large, unsorted, non-indexed inputs efficiently. Hash joins are useful in complex queries because the intermediate results:
  - Are not indexed (unless explicitly saved to disk and then indexed).
  - Are often not sorted for the next operation in the execution plan.
- The query optimizer can identify incorrect estimates and make corrections dynamically to process the query more efficiently.
- A hash join reduces the need for database denormalization.
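As a small illustration of the hash operator doing grouping work as well as joining, the following sketch (credit sample tables again; the exact operators chosen depend on the available indexes and statistics) will typically show a Hash Match aggregate above the join:

Use credit
go
-- Grouping an unsorted input: the plan typically shows a Hash Match/Aggregate
-- operator computing the SUM, in addition to whatever join strategy is chosen
select m.state_prov, sum(c.charge_amt) as total_charged
from member m join charge c on m.member_no = c.member_no
group by m.state_prov
go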
Denormalization is typically used to achieve better performance by reducing join operations, despite the redundancy and the risk of problems such as inconsistent updates. Hash joins give you the option to vertically partition your data as part of your physical database design. Vertical partitioning represents groups of columns from a single table in separate files or indexes.

Subquery Performance
Joins Are Not Inherently Better Than Subqueries
Here is an example showing three different ways to update a table, using a second table for lookup purposes. The first uses a JOIN with the update, the second uses a regular subquery introduced with IN, and the third uses a correlated subquery. All three yield nearly identical performance.

Note: Performance comparisons cannot be made based on I/Os alone. With hashing and merging techniques, the number of reads may be the same for two queries, yet one may take a lot longer and use more memory resources. Also, always be sure to monitor statistics time.

Suppose you want to add a 5 percent discount to order items in the Order Details table for which the supplier is Exotic Liquids, whose supplierid is 1.

-- JOIN solution
BEGIN TRAN
UPDATE OD
SET discount = discount + 0.05
FROM [Order Details] AS OD
JOIN Products AS P ON OD.productid = P.productid
WHERE supplierid = 1
ROLLBACK TRAN

-- Regular subquery solution
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE productid IN (SELECT productid FROM Products WHERE supplierid = 1)
ROLLBACK TRAN

-- Correlated subquery solution
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE EXISTS (SELECT supplierid FROM Products
              WHERE [Order Details].productid = Products.productid
              AND supplierid = 1)
ROLLBACK TRAN

Internally, Your Join May Be Rewritten
SQL Server's query processor has many different ways of resolving your JOIN expressions. Subqueries may be converted to a JOIN with an implied DISTINCT, which may result in a logical operator of SEMI JOIN. Compare the plans of the first two queries:

USE credit
select member_no from member
where member_no in (select member_no from charge)

select distinct m.member_no
from member m join charge c on m.member_no = c.member_no

The second query uses a HASH MATCH as the final step to remove the duplicates. The first query only had to do a semi join. For these queries, although the I/O values are the same, the first query (with the subquery) runs much faster (almost twice as fast). Another similar looking join is
******************************************* ************ WPTOOLS 6 History ************ ********** http://www.wpcubed.com ********* **** Copyright (C) 2012 J. Ziersch and **** **** WPCubed GmbH, Munich, Germany ******** ******************************************* * - = bug fix * + = new feature * * = changed, expanded feature ******************************************* * IMPORTANT: Please edit file WPINC.INC to * activate WPREPORTER, WPSPELL, wPDF, TBX * and GraphicEx / PNGImage support! * Delphi 2009 and later as inbuilt PNGImage ******************************************* * THE MANUAL (PDF) AND REFERENCE (HLP,CHM) * ARE PROVIDED AS SEPERATE DOWNLOADS! ******************************************* ~~~~~~~~~~ Important Notes ~~~~~~~~~~~~ - With Delphi XE2 or XE3 it is possible to compile WPTools into a 64 bit applications. You need WPTools 6 PRO or WPTools 6 Premium for this. - Localization and Inch/CM selection see http://www.wpcubed.com/manuals/wp5man/index.html?localization.htm Also see demo\tasks\localization. There are the required XML files. - if flag wpDontAddExternalFontLeading is active in property FormatOptionsEx the text will be formatted more like WPTools4/MS-Word. You can alternatively set global variable WPDoNotAddExternalFontLeading := TRUE and select the Printer WYSIWYG mode: TWPRichText1.HeaderFooter.UpdateReformatMode(true) - The FormatOption wpDisableAutosizeTables can be required for good display of tables. wpNoMinimumCellPadding for narrow layout, too. - If you plan to use DBWPRichText please check http://www.wpcubed.com/forum/viewtopic.php?p=3456 - The reader/writer receive their options through "FormatStrings" please see list at http://www.wpcubed.com/manuals/formatstrings.htm - WPTools is configured using the file WPINC.INC, here WPREPORTER is activated and the optional WPShared, WPSPell and wPDF can be selected to be compiled into main WPTools package. - also see FAQ: http://www.wpcubed.com/forum/viewforum.php?f=15 - and support forum: http://www.wpcubed.com/forum/ - WPTools 6 PRO and PREMIUM will be compiled as Version 5 if $define "WP6" is not set ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We recommend to use TMS components to create modern user interfaces, toolbars and ribbons. 19.2.2013 - WPTools 6.29.1 - fix in rtf writing code to solve problem with merged cells - fix possible rangecheck error - fix problem with TextObject.LoadFromFile and Delphi XE3 * RTF reader now handles UNC file links which use "\\" in the path * the cursor was not painted if DoubleBuffered was set to true for the parent of the editor + WPTools Premium: Saves and loads \column * improved theming of TWPToolbar and TWPToolPanel + new event: OnPaintDesktopBackground. It can be used to draw the parent of the editor, for example if it is a TMS panel or pager control. Example: procedure TForm1.WPRichText1PaintDesktopBackground(Sender: TObject; Canvas: TCanvas; R: TRect); begin // This would paint the TWPRichText, too - but TWPRichText is locked for repaint during this event AdvOfficePager1.PaintTo(Canvas, -WPRichText1.Left, -WPRichText1.Top); end; - HighlightTextColor can now also be used if 2Pass Painting is used 21.12.2012 - WPTools 6.29 - images in RTF label were not painted when label was moved + added support for XE3 to WPTools STD edition * stream RTFvariables were not loaded from WPT format. They are loaded now. 
9.11.2012 - WPTools 6.28 - Update to RTF reader to load landscape flag for sections better - when page mirror was used, after a page break the text indentation was sometimes wrong - hyphenation code was broken - workaround for word files which have space characters in table definitions 16.10.2012 - WPTools 6.27'' * some additions to the PRO edition for XE3 26.9.2012 - WPTools 6.27' * The PRO Version now supports Delphi XE3 3.8.2012 - WPTools 6.27 - fix for wrong display of tables with header and footer rows. Sometimes both were painted without any data. + to load old Hiedit templates as RTF code use the formatstring -HiEditFields. This will create merge fields for ALL fields. - NL sign was not shown right after CTRL+ENTER was pressed (requires ShowNL) - fix for rangecheck exception with paintpages array - fix for footer and page mirror - doubleclick word selection now stops at NL - Workaround for Windows Spooler problems - some images would get lost - sections use footer and header of previous section, not general - ASetBorder did change all border types 12.3.2012 - WPTools 6.25.4 * allows changing of column width in redonly editors. Can be switchoed off in EditOptions or set compiler define TOTALREADONLY + wpDisableSelectAll in EditOptionsEx2 * changed reformat/repaint after Undo/Redo - pro and premium: Due to a problem with precompiler cursoir selection did not work correctly 8.3.2012 - WPTools 6.25.3' - borders for paragraphs with multiple lines were not drawn correctly - change in DBWPRich.pas to use LoadFromString instead of Text - fix possible range check error - change in WPTbar.pas to use different default for BevelOuter - change in WPIOHTML to use default charattr of paragraph is a paragraph is empty 9.2.2012 - WPTools 6.25.2 * new 2-pass painting triggers CustomPaint event only on second loop (when the text is painted) * changed protection of empty paragraph to WPTools 5 way + inside of the OnPrepareImageForSaving event it is now possible to set Writer.CurrentImageName to the name of the file which should be saved. This is only useful if ObjRef = nil and so no ObjRef.Filename can be set. + TParagraph.GetSubText now has optional parameter to disable the object reference char codes #1, #2 and #3 * HTML writer ignores #13 codes when writing the text. * additional savety code in HTML writer * HTML reader sets image width and height to contents, (if possible) * HTML reader: changes UTF8 handling for UTF8Sequence=1 * HTML reader does not stop on \0 anymore * HTML writer writes img tag also for empty images IF a name is provided in event PrepareImageforSaving * HTML reader does not add space at end anymore - the UNICODE reader uses attribute of current paragraph. This is important for consistent behaviour between ANSI and UNICODE version - when writing HTML background color is not set to white when shading is 0. 0 is treated as default value. - image align center and right now works in HTML - fix an endless loop when image was too large - improvement of table border drawing - improvement for right align in table cells + numbering will be used when wpFormatAsWebpage was set in AsWebpage 10.1.2012 - WPTools 6.25 + HTML reader reads cell heighs - RTF writer writes background color easier to understand by Word * improved XML reader/writer (unit WPIOXml1) * improved word wise cursor movement when fields are used + new "paint attributes" mode. Use WPRichText1.BrushStart to select this mode. 
- dashes were not painted using the current font color - some stability improvements 7.11.2011 - WPTools 6.22 + procedure TParagraph.CellSelectionAdd; + procedure CellSelectionRemove; + EditOptionsEx2: wpCellMultiSelect - allows multiselection in tables when CTRL is pressed + improved XML import/export (unit WPIOXML1.PAS) - some smaller bugs fixed 3.11.2011 - WPTools 6.21.2 - fix problem with TWPToolButton - improved HTML writer to write parameters in "" - improved display of arabic text 24.10.2011 - WPTools 6.21.1 - fix problem when painting insertpoints after tab stops. They were painted two times. - fix in XML and HTML writer. Close tags when extracting text from fields 19.10.2011 - WPTools 6.21 + CodeLocate can now also accessed in non-visual TWPRTFDataCollection and not just TWPCustomRtfEdit + CodeSelect can now also accessed in non-visual TWPRTFDataCollection and not just TWPCustomRtfEdit + TWPRTFDataCollection.CodeLocatePair( FormatName : String; var spar, epar : TParagraph; var spos, epos, : Integer ) :Integer; - solves problem with integrated Bin64 decoder + DeleteField now has optional "Contents" parameter to delete a certain field with contents + the text writer now understand the option -softlinebreaks to create a \n at the end of every line. In fact all soft line breaks will be handled like the #10 code. + CodeLoadSaveEmbeddedText - load or save text in fields, bookmarks, hyperlinks + the regular save and load methods (LoadFromFile, SaveToFile) can now access text wrapped by paired objects. specify the fieldname in the format string, i.e. "f:name=RTF" to save or load the contents of the field "name". + There is an overloaded LoadFromString which expects a WideString as parameter (Delphi 6+) - The Setup named the Delphi 2009 files "Delphi 2005" due to a typo. (Delphi 2005 units are not included anymore) - fix probable range check error in WPRTEDEFS 13.10.2011 - WPTools 6.20 + completely new setup procedure. The PRO and Premium releases don't include object files which makes them much smaller. * compiled new WPTools 6 Reference (CHM file) * Delphi XE2: several small changes to improve theming support * Delphi XE2: several small changes to provide compatibilty to 64bit compiler (requires WPTools PRO) + new demo developed with Delphi XE2, showcases actions, splitscreen, simulated MDI and property dialogs (Demos\XE2\WPWord) + wpDeleteAtEOFRemovesSpaceAfter in EditOptionsEx2 - TextObjectsGetList did not work * WPRuler uses Delphi XE2 VCL Theme plus * added defaults to properties TWPRichText - change in RTF reader to let section inherit the default layout, not the current page layout - fix of problem with table borders when also PageMirror was used. * change to DeleteMarkedChar. It now has additional parameter DeleteEmptyParAndTables : Boolean * change in unit WPWordConv to handle RTF as DOC files if they do not start with "{\rtf" * updated border dialog TWPParagraphBorderDlgEx * updated border drawing code - now supports dotted lines with wider lines. * modified method DeleteColumn * modified WPT reading code to repair table width which were negative + improved image rendering code for transparent (PNG) images. They will be drawn transparently also when scaled and also in high resolution rendering mode. + new code to draw dotted lines which also supports wider lines + new function WPReplaceTokens (unit WPUtils.PAS). 
It is a ReplaceTokens function to be used on a TWPRTFDataCollection, not TWPRichText - WPPremium - fix problem when there were too columns * MergeText now restores before Merge Cursor position and selection (except for cell selection) * resizing a table column does not move the cursor to the nearby cell anymore * different frame line when resizing columns and rows + InsertColumn now also works if wpAllowSplitOfCombinedCellsOnly was used in EditOptionsEx + new event OnPaintTextFrameRect let you paint background and borders for text frames, i.e. the text body or, with WPTools Premium, custom frames. + WPPREMIUM: In OnMeasureTextPage it is possible to set columns for certain pages. Using PageInfo.colflags=1 it is possible to activate a line between the columns It is also possible to add custom frames using PageInfo.rtfpage.AddFrameRect. + new ViewOptionEx: wpHideParBordersBeforAndAfterPageBreaks + improved paint routine now avoids clipping of characters which were overlapping their bounding box, such as italic letters or "f". The improvement is especially visible when selecting text or using character background colors + WPPREMIUM: it is now possible to print a line between columns using wpShowColumnCenterLine in ViewOptionsEx + With WPTools Premium it is now possible to print a line between certain colums - use par.ASet(WPAT_COLFLAGS, WPCOLUM_LINE); + paragraph styles can now contain border definion for paragraphs + TWPTextObjList now has a IndexOfSource function and a Source[] string array to access the objects * revised code to draw double borders - always draws twou lines on screen even when zoomed * improved saving of numbering attributes with styles * style dialog can now apply number level even if style does not have numbering yet. * revised wpNoEditOutsideTable - was not checked for keyboard input * fix problem with - - - - - at end of line - fix problem with spell-as-you go after hyperlinks - fix problem with page numbers in sections when tables were spanning pages - fix problem with right aligned negative numbers in merge fields * automatic text attribute was not inherited to tables inserted in fields * images with mode "under text " can now be also clicked at positiones where there is no text. - WPLngCtr now defines DONT_OVERRIDE_COM, that fixes the IDE problem with DevExpress Toolbar controls 18.7.2011 - WPTools 6.16.2 * ObjectMode wpobjPositionAtCenter can be used to aligh character based images to the center line. (Other RTF reader will not understand this feature) * some changes to prepare 64bit compatibility (requires WPTools 6 PRO) 7.7.2011 - WPTools 6.16.1 * change to avoid flickering when doing auto scroll * message strings are now loaded from resource strings - modified DBCS support for RTF reader * outerborder action now works for sinle cells and paragraphs 15.6.2011 - WPTools 6.16 + Many improvements to new Border Dialog. It is now possible to apply borders and colors to odd and even rows only. * RTF reader defines symbol IGNORE_AUTOWIDTH_TBL. This disable the Word2010 auto width columns which are not read correctly. Reason: Word 2010 seem to always add an empty column to the end. + using WPRichText1.Caret.Blink:=true it is possible to activate the blinking caret (0.5sec interval). 
* updated actions to apply inner and outer borders to selected cells * update to ExSymbolDialog, Tab and Table dialog - improved WPTOOLS reader to read and apply outline properties to current outline setting (unless wpLoadDoesNotOverride is used) - wpsec_ResetPageNumber flag is now saved in WPTOOLS format * tripple click in margin selects paragraph + double click in margin selects current cell + tripple click in margin selects current row - change for PaintEngine and TWPRichTextLabel to not shrink tables which are small enough to fit the page This solves a problem with dissapearing text in WPRichTextLabel - fix selection problem when several images were linked to same paragraph - when moving images the Z order will not be reset - HTML Writer: A style with name "DIV" will be added to the style sheet to save the default font - HTML Writer: BaseFont tag will now be written with font size (requires -writebasefont option) - improved display of character background color for fields and other special code - Saving a style sheet did not work with Unicode Compiler 8.5.2011 - WPTools 6.15 * updated border painting * updated Inner/Outer Border Action * new object sizing routine lets the user make the size larger than the page - update in WPTools reader to overwrite outline styles when loading tzhe same group 9.3.2011 - WPTools 6.14.6 * change in format routine to fix problem when a nested table cell caused a page break. 14.12.2010 - WPTools 6.14 * fix for SetAsString code (Unicode Delphi) * several fixes and updates in editor * WPTools Premium: $define DONT_AUTOENTER_TEXTBOXES in WPINC.INC to switch of the behavior, that when editing a textbox the user can click on any other and edit that. 22.9.2010 - WPTools 6.13.3 * WPCtrMemo.PAS now defines TEXT_IS_UNICODE fro Delphi 2009 and later. Now the property Text and SelText reads and writes unicode strigs * change in RTF reader to read ANSI characters in the range 128..255 * Tables and rows can now be hidden. (TParagraph.Hidden) * The Lines property now supports unicode strings (Delphi 2009 and later) + HTML reader and writer now use the entitly ­ as soft hyphen 27.8.2010 - WPTools 6.13.2a + new ViewOptionEx wpUnderlineWebLinks. If active links like http://www.wptools.de will be drawn using the attributes for hyperlinks The HyperlinkCursor will be selected and the hyperlink event will be triggered * other fixes in RTF engine - Memo._OverrideCharset was not set to -1 27.7.2010 - WPTools 6.13.1 + WPRichText1.Memo.ColorGridLines can be used to change the color of the grid lines (ViewOptions) 23.7.2010 - WPTools 6.13 * several improvements of editor * improvement to RTF writer to when writing table cells * improved right aligned text * fixed problem with line heights of lines which are empty except for new line character 22.6.2010 - WPTools 6.12.1b - fix for problem when pressing Accent + Backspace - fix for WPReporter to assign default attribute to destination - fix for WPReporter to move images when converting templates 18.6.2010 - WPTools 6.12.1a * the new border dialog now reads the current border attributes from table cells, tables or selections - Premium: fix problem when loading columns which started on first line - fix in wpfUseKerning mode (FormatOptionsEx2) - it did not work as expected with some texts (We recommend to use wpfUseKerning - it produces better print quality on screen) * Pasting of HTML now works better - fix for right tabs when also borders were used 13.6.2010 - WPTools 6.12 + all new, powerful yet intuitive Border dialog. 
The border dialog can modify a range of selected cells, columns, rows, tables and also only modify certain properties while leaving the others unchanged. You need to activate the compiler symbol NEWBORDER to use it by default + new method: procedure SetBorderProperties(Inner, Outer: TWPTextStyle; ApplyMode : TWPParagraphBordApply; Mode : TWPUpdateBorderProperties = [wpSetBorderFlags, wpSetBorderType, wpSetBorderWidth, wpSetBorderColor, wpSetParColor, wpSetParShading]); This method is mainly used by the new border dialog. + new flags in ViewOptionsEx: wpShowCurrentCellAsSelected, // Displays current cell to be selected. Disables current selection wpShowCurrentRowAsSelected, // Displays current table row to be selected. Disables current selection wpShowCurrentTableAsSelected // Displays current table to be selected. Disables current selection + new property YOffsetNormal to defined an upper border for normal and wordwrap view. * feature Header.MarginMirror changed to work like MS Word new flag: wpMarginMirrorBookPrint in FormatOptionsEx2 to enable the previous logic * workaround for MouseWheel UP beeing triggered too often (10 mms check) + ClipboardOption wpcoDontPasteHTML to disable HTML pasting completely (avoid problems with firefox) + it is now possible to load base64 embedded JPEGs from HTML * it is now possible to change width of tables which exceed right margin - fix bug in HTML writer for lists in table cells - fix in RTF writer to write character colors also for text which is using a character style - fix: numbering was not always updated - fix: better use fonts in certain RTF files written by MS Word + Update to WPLanguageControl to make Localization easier to use. Only< this code is not required: procedure TForm1.WPLanguageControl1Loaded(Sender: TObject); begin WPLangInterface := TWPLocalizationInterface.Create(WPLanguageControl1); WPLocalizeAllStrings; end; 6.5.2010 - WPTools 6.11.2 * improvement to border rendering * improvement to XML unit WPIOXML1 (Premium) 5.5.2010 - WPTools 6.11.1 + ConvertTableToText now supports option to also handle soft line breaks - fix problems with underlines at end of line - fix problem when loading hyperlinks in RTF - fix problem when saving attributes to XML (WPTools Premium) - fix problem with text rendering - fix problem with tables which habe header row and pages with different header margin 19.4.2010 - WPTools 6.11 + EditOptionEx: wpRowMultiSelect + new event: OnInternPaintPar + new event: RTFDataCollection AfterApplyUndoObject 14.4.2010 - WPTools 6.10.6 * fields are now passed as unicode strings to PDF exporter * Delphi 2010/2009 import has been improved to load unicode values which are stroed in fields. 5.4.2010 - WPTools 6.10.5 + flag: wpDontExtendSelectionToRightMargin. 
Do not extend selection to end of line + wpInvertActiveRow in ViewOptionsEx - some fixes for Delphi 2009 and Delphi 2010 - fix for section support - WPTools premium: Fix for images in text boxes - workwaround to load RTF which use emfblib for pngblib 28.2.2010 - WPTools 6.10 - improve word left/right movement to skip hidden text - improve http load of images - improve support for numbering - improve saving of character style attributes 11.2.2010 - WPTools 6.09.1 + LoadFromString now has a "WithClear" parameter - fix in RTF reader to better load files which do not define codepage - fix in paint routine to solve a rare lockup - fix in format routine to improve section support 6.2.2010 - WPTools 6.09.1 - fix of problem in save routine when footnotes were used (WPTools premium) - fix in HTML writer - Image optiions now have a Rotation property which allows 90, 180 and 270 setting. - HTML reader and writer now support different colors for left,right,top, bottom lines 1.2.2010 - WPTools 6.09 - HTML writer will write 8 hard spaces for TAB stops at the start of a paragraph - HTML writer will write page information only if -PageInfo was used in format string - fix problem with left aligned text and image wrap around (wrong alignment) - fix problem with sometimes duplicated images in PDF export - fix problem with black rectangle in first line under Windows 7, 64 bit - Improvment to RTF reader to ignore section properties which are not followed by \sect 14.12.2009 - WPTools 6.08 - graphics are resized to fill text area - fixed problems in numbering - fixed problem with one word paragraphs in justified paragraphs - other improvements in editor 27.10.2009 - WPTools 6.07 - fixed one leaking TList per TWPRichText * improved layout of most important dialogs * improved extended insert symbol dialog - fix in RTF reader to load sections and header+footer written by Word 2003 - don't add unwanted cell padding when loading table cells - fix in WPTools reader to read custom number styles - if paragraph styles use number styles the indent defined in the style has priority over numberstyle - LabelDef now also works for one row and one column - better handling of mousewheel event - fix for tabs in tables + GIF animation (requires GifImage) library (not threaded) to use it You need to set ViewOption wpUseOwnDoubleBuffer and call the method RefreshAniImages using a timer object. - fix problem when sections were used with LabeDef.Active = true * change in HTML writer to close tags before paragraph end 4.10.2009 - WPTools 6.06 - fix problem with Delphi 2010 support (language control) - fix problem with PDF export to reduce PDF size - improve support for IME * improve AsWebPage format mode. Now WordWrap propery is supported. - fix searching text upwards with "Whole Word" selected + RTF writer now can use format option "-writehighlight" to writh \highlight instead of \cb 14.9.2009 - WPTools 6.05.9 + added Delphi 2010 Support - problem when pasting from "The Bat" - fix problem in Delphi 2009 (Assign Method) 3.8.2009 - WPTools 6.05.8' * when using "Delete All" in the tabstop dialog, all tabs will be cleared added to manual: Tabstop Category - fixed problem when deleting text in a paragraph. The alignment was cleared unexpectedly. 
- fix problem with installer, WPMangeHeaderFooter.DFM was not included - fix for IPara in mail merge field objects - improved handling of hoover effect for hyperlinks - improved text rendering for wPDF output (CID Mode) - add correct WPManHeadFoot.dfm 23.7.2009 - WPTools 6.05.7 + WRITE_PROP_WPTOOLSOBJ $define in WPIOWriteRTF. Avoid problems when saving RTF and opting in Word In case of special objects, such as SPAN codes, \*\wpfldinst is beeing written what is ignored by WOrd + Dialog HeaderFooter can optionally create and manage header&footer for the current section + new KeepN Handling. This is by default activated in FormatOptionsEx2 + new wpfHideParagraphWithHiddenText in FormatOptionsEx2. Now paragraphs will be hidden if empty or only contain hidden text. + new format option -zap1 will remove the every first byte to convert a two byte stream into singly byte -zap2 will remove every second byte. Usie this option when loading data from unicode data sets - bugfix for table loading in RTF 15.7.2009 - WPTools 6.05 + TParagraph.Trim method to remove white spaces at start and end + Vertical Scrolling by pressing the middle mouse button now works. + improved auto thumbnail mode * enhancement to HTML reader / writer to handle embedded SPAN objects + new method: ApplySPANStyles(and_remove : Boolean=false; ignore_charattr : Boolean = false); can be used to apply SPAN styles to the text which it embeds + The function InputSpanObjects( Attributes : TWPAbstractCharAttrInterface ) : TWPTextObj; can be used to wrap the selected text into SPAN objects + method LoadCSSheet can be used to load paragraph styles in CSS format from a string - fix problem with Wordrwap and centered text + new even OnTextObjectMovePosition (move event) - OnTextObjectMove is still used for resize (unchanged) 28.6.2009 - WPTools 6.04 + WPTools Premium: Column Balancing * many improvements in RTF reader. Word documents are now understood better * Improvement in check for protected text (ppMergedText) + new ViewOptionsEx property - auto hyperlinks were not working + TWPComboBox has an event OnUpdateItems which will be triggered after the items had been automatically assigned. 24.6.2009 - WPTools 6.03.6 * thinner page borders in thumbnail mode. ViewOptionsEx: wpAutoThumbnailMode will show pagenumbers only when in thubmbnail mode (= wpShowPageNRinGap in ViewOptions) + property ColorDesktop and DeskGradientHorizontal to render the background with a gradient fill * fix for protected text handling (CR after a field) * fix for text alignment near a movable image - EditOption AutoDetectHyperlinks was not working anymore * WPReporter: SuperMerge.Stack.PageBreakAFterGroup := true was ot working when footers were used 1.6.2009 - WPTools 6.03.5 - fix problem with display of character attributes when attributes were inherited from paragraph styles - fix problems with selection deletion in single column, single row tables - improvement of RTF writer when writing sections 11.5.2009 - WPTools 6.03.3 - improved report band dialog, new ShowOptions property - fix in RTF reader to load header/footer - change in HTML writer to save SPAN instead of FONT tag - several fixes in editor * WPTools Premium: better column support. Fixed column height now splits correctly on 2 pages. 28.4.2009 - WPTools 6.03.2 - fix problem with justified text in PDF 21.4.2009 - WPTools 6.03.1 - fix problem with images when used in Delphi 2009 - better support for header/footer in RTF files created by word. 
(Ignore bogus header/footer) - soome stability fixes 25.3.2009 - WPTools 6.03 + improved text rendering - optimation for character distances on screen to provide better display + improvement on ShowMergeFieldNames to improve cursor movement and drag and drop + automatic disable dragging of fields inside of fields + improved merge field selection. TextObject.SelectedObject now returns the mergefield if it was completely selected + change in HTML saving code to save src in after width adn height (for outlook) * various bugfixes 17.1.2009 - WPTools 6.02 - WPPREMIUM: Text after Columns initialized with WPAT_COLUMNS_Y is now allowed + TWPToolBar FontName drop down now lists fonts used by document first - fix for tables which use a fixed row height and are splitted on different pages + improvements necessary for Delphi 2009 - the Locaization demo now works + EditOptionEx wpDontPreserveObjectsAgainstDeletion - fix problem in ImageObject LoadFromStream when GraphicEx is used - fix problem with Delphi 2009 when loading WPReporter templates - fix problem with HTML reader with paragraph style of first paragraph + GetFontUnicodeRanges dynamically linked to avoid Win98 problem 26.10.2008 - WPTools 6.01 * updated HTTP Demo, now with "Source View" + DELETE/BACKSPC at start of line removes right/center alignment + loads background images for paragraphs, tables and styles * improvement to text protection (empty lines) - improvements to HTML and CSS reader - improved HTML format routine - improved MIME loading - now supports binary data despite Synapse does not) + MIME reader capturesHTML body for SourceVIew * DataProvider now uses MergeText('',true) instead of MergeText + boolean wphttp_Disable to disconnect HTTP temporarily * several changes to improve compatibility with Delphi 2009 17.10.2008 - WPTools 6.00.1 - several changes to fix problems which occured with use of Delphi 2009 * update to WPIO_MIME to also load binary encoded GIFS and JPEGS from EML files 16.10.2008 - WPTools 6.00 * new installer + VCL demo * fixed problem with TWPComboBox and default attributes * updated "Tasks\Mini" demo project - fix in default actions 3.10.2008 - WPTools 6 Beta c - remove not required unit reference form WPRTEReport * update to manual 2.10.2008 - WPTools 6 Beta b - fix in Installation script 30.9.2008 - WPTools 6 Beta a 1) Application-Server-Mode WPTools 6 introduces a feature which is called "Application-Server-Mode". This is only available when you have the PREMIUM version. This mode is activated when true is assigned to the global boolean variable WPAppServerMode. When this mode is activated the editor does not use the double buffered output anymore. While this can cause some flickering the network traffic is reduced when the application runs on an application server, such as Citrix. Please note, effective with WPTools 6, software which was written for application servers (such as clinic software) may only be distributed when a TEAM or SITE license of WPTools was acquired.END 2) Integrated Label Printing When activated the integrated label printing shows multiple labels on one virtual sheet of paper. The cursor can move from label to label freely. The user can so edit the labels, add new or delete unwanted labels before the complete sheet is printed. This is a very unique and versatile feature. 3) Additions to the PDF export with wPDF V3 Create embedded data objects Create edit / memo fields Create a check box field Also the creation of PDF tags was enhanced. 
So now hints to paragraph styles will be also exported. 4) Additional Control over Clipboard Actions a) properties to select the format b) added security 5) Added Section API + WPRichText1.ActiveSection + WPRichText1.InputSection + TWPPagePropDlg has new method ExecuteEx. Use it to change current page size or Section WPPagePropDlg1.ExecuteEx(WPRichText1.ActiveSection); 6) Load over HTTP connections (requires Synapse) 7) Load and save MIME encoded HTML with embedded images (requires Synapse) 8) Integrated XML syntax highlighting (non destructive) 9) WPReporter: Token to Template conversion with syntax highlighting 10) Additions to paragraph API 11) Special mode to format HTML documents
目录1 译者的话 14 第一周概貌 16 从这里开始 16 第一天SQL 简介 17 SQL 简史 17 数据库简史 17 设计数据库的结构21 SQL 总览23 流行的SQL 开发工具 24 SQL 在编程中的应用 27 第二天查询— — SELECT 语句的使用 30 目标 30 背景 30 一般的语法规则 30 你的第一个查询 33 总结 37 问与答 38 校练场 38 练习 39 第三天表达式条件语句与运算 40 第四天函数对数据的进一步处理 60 目标 60 汇总函数 60 COUNT61 SUM 61 AVG 63 MAX 63 MIN 64 VARIANCE65 STDDEV66 日期/时间函数66 ADD_MONTHS 67 LAST_DAY68 MONTHS_BETWEEN 69 NEW_TIME70 NEXT_DAY71 SYSDATE72 数学函数 72 ABS 73 CEIL 和FLOOR 73 COS COSH SIN SINH TAN TANH 73 EXP 75 LN and LOG75 MOD 76 POWER77 SIGN 77 SQRT 78 字符函数 79 CHR 79 CONCAT 79 INITCAP80 LOWER 和UPPER 81 LPAD 与RPAD 82 LTRIM 与RTRIM 83 REPLACE 84 SUBSTR85 TRANSLATE88 INSTR 88 LENGTH89 转换函数 89 TO_CHAR90 TO_NUMBER91 其它函数 91 GREATEST 与LEAST 91 USER 92 总结 92 问与答 93 校练场 93 练习 94 第五天SQL 中的子句95 目标 95 WHERE 子句96 STARTING WITH子句 98 ORDER BY 子句99 GROUP BY 子句104 HAVING 子句109 子句的综合应用112 总结117 问与答117 校练场117 练习118 第六天表的联合119 介绍119 在一个SELECT 语句中使用多个表119 正确地找到列123 等值联合124 不等值联合129 外部联合与内部联合130 表的自我联合132 总结134 问与答134 校练场134 练习135 第七天子查询内嵌的SQL 子句136 目标136 建立一个子查询136 在子查询中使用汇总函数140 子查询的嵌套141 相关子查询144 EXISTS ANY ALL 的使用147 总结151 问与答151 校练场152 练习153 第一周回顾154 预览154 第二周概貌155 这一周都讲些什么155 第八天操作数据156 目标156 数据操作语句156 插入语句157 INSERT VALUES 语句157 INSERT SELECT 语句161 UPDATE语句163 DELETE 语句166 从外部数据源中导入和导出数据169 Microsoft Access 170 Microsoft and Sybase SQL Server 171 Personal Oracle7171 总结172 问与答172 校练场173 练习173 第九天创建和操作表174 目标174 CREATE DATABASE 语句174 建立数据库时的选项175 设计数据库176 建立数据字典176 建立关键字段177 CREATE TABLE 语句178 表名179 FIRST NAME179 空值属性180 唯一属性181 表的存储与尺寸的调整183 用一个已经存在的表来建表184 ALTER TABLE 语句185 DROP TABLE 语句186 DROP DATABASE语句187 总结188 问与答188 校练场189 练习190 第10 天 创建视图和索引191 目标191 使用视图192 列的重命名196 SQL 对视图的处理过程197 在SELECT 语句使用约束201 在视图中修改数据201 在视图中修改数据的几个问题203 通用应用程序的视图204 删除视图语句207 使用索引207 什么是索引207 使用索引的技巧212 对更多的字段进行索引212 在创建索引时使用UNIQUE 关键字214 索引与归并216 群集簇的使用217 总结218 问与答219 校练场219 练习220 第11 天事务处理控制221 目标221 事务控制221 银行应用程序222 开始事务处理223 结束事务处理225 取消事务处理228 在事务中使用保存点231 总结234 问与答234 校练场235 练习235 第12 天数据库安全236 前提数据库管理员236 流行的数据库产品与安全237 如何让一个数据库变得安全237 Personal Oracle7 与安全238 创建用户238 创建角色240 用户权限242 为安全的目的而使用视图247 总结251 问与答252 校练场252 练习253 第13 天 高级SQL254 目标254 临时表254 Title 257 游标259 创建游标260 打开游标260 使用游标来进行翻阅261 测试游标的状态262 关闭游标263 游标的适用范围264 创建和使用存贮过程265 在存贮过程中使用参数267 删除一个存贮过程269 存贮过程的嵌套270 设计和使用触发机制272 触发机制与事务处理273 使用触发机制时的限制275 触发机制的嵌套275 在选择语句中使用更新和删除275 在执行前测试选择语句276 嵌入型SQL277 静态SQL 与动态SQL277 使用SQL 来编程279 总结280 问与答280 校练场280 练习281 第14 天动态使用SQL 282 目标282 快速入门282 ODBC 282 Personal Oracle 7283 InterBase SQL ISQL 283 Visual C++ 284 Delphi284 设置284 创建数据库285 使用MS QUERY 来完成链接290 将VISUAL C++与SQL 结合使用292 将DELPHI 与SQL 结合使用296 总结302 问与答303 校练场303 练习303 第二周回顾304 第三周概貌305 应用你对SQL 的知识305 第15 天对SQL 语句优化以提高其性能306 目标306 让你的SQL 语句更易读307 全表扫描308 加入一个新的索引309 在查询中各个元素的布局309 过程311 避免使用OR311 OLAP 与OLTP 的比较313 OLTP 的调试313 OLAP 的调试314 批量载入与事务处理进程314 删除索引以优化数据的载入316 经常使用COMMIT 来让DBA 走开316 在动态环境中重新生成表和索引317 数据库的调整319 性能的障碍322 内置的调整工具323 总结323 问与答324 校练场324 练习324 第16 天用视图从数据字典中获得信息326 目标326 数据字典简介326 用户的数据字典327 数据字典中的内容327 Oracle 的数据字典328 Sybase 的数据字典328 ORACLE 数据字典的内部结构328 用户视图328 系统数据库管理员视图336 数据库对象339 数据库的生长343 动态执行视图347 总结349 问与答349 校练场350 练习350 第17 天使用SQL 来生成SQL 语句351 目标351 使用SQL 来生成SQL 语句的目的351 几个SQL*PLUS 命令352 SET ECHO ON/OFF353 SET FEEDBACK ON/OFF353 SET HEADING ON/OFF 353 SPOOL FILENAME/OFF353 START FILENAME354 ED FILENAME354 计算所有的表中的行数354 为多个用户赋予系统权限359 将你的表的权限赋予其它的用户361 在载入数据时解除对数的约束363 一次创建多个同义字364 为你的表创建视图368 在一个计划中清除其所有的表的内容369 使用SQL 来生成SHELL 脚本371 再建表和索引372 总结373 问与答373 校练场373 练习374 第18 天PL/SQL 简介376 目标376 入门376 在PL/SQL 中的数据类型377 字符串类型377 数值数据类型378 二进制数据类型378 日期数据类型378 逻辑数据类型378 ROWID379 PL/SQL 
块的结构379 注释380 DECLARE 部分380 变量声明380 常量定义381 指针定义381 %TYPE 属性382 %ROWTYPE 属性382 %ROWCOUNT 属性383 Procdure 部分383 BEGIN … … END383 指针控制命令384 条件语句386 LOOPS 循环387 EXCEPTION 部分390 激活EXCEPTION 异常390 异常的处理391 将输入返回给用户392 在PL/SQL 中的事务控制393 让所有的事在一起工作394 示例表及数据394 一个简单的PL/SQL 语句块395 又一个程序398 存储过程包和触发机制403 总结406 问与答407 校练场407 练习407 第19 天TRANSACT-SQL 简介408 目标408 TRANSACT-SQL 概貌408 对ANSI SQL 的扩展408 谁需要使用TRANSACT-SQL409 TRANSACT-SQL 的基本组件409 数据类型409 使用TRANSACT-SQL 来访问数据库411 BASEBALL 数据库411 定义局部变量414 定义全局变量414 使用变量415 PRINT 命令417 流控制417 BEGIN … … END 语句418 IF … … ELSE 语句418 EXIST 条件421 WHILE 循环422 使用WHILE 循环在表中翻阅424 TRANSACT-SQL 中的通配符426 使用COMPUTE 来生成摘要报告426 日期转换427 SQL SERVER 的诊断工具— — SET 命令427 总结428 问与答428 校练场429 练习429 第20 天SQL*PLUS 430 目标430 简介430 SQL*PLUS 缓存430 DESCRIBE 命令435 SHOW 命令436 文件命令438 SAVE GET EDIT 命令438 运行一个文件439 查询的假脱机输出440 SET 命令442 LOGINSQL 文件445 CLEAR 命令446 将你的输出格式化446 TTITLE 与BTITLE446 格式化列COLUMN HEADING FORMAT 447 报表与分类汇总449 BREAK ON449 COMPUTE450 在SQL*PLUS 中使用变量453 DEFINE 454 ACCEPT 455 NEW_VALUE457 DUAL 表458 DECODE 函数459 日期转换462 运行一系列的SQL 文件465 在你的SQL 脚本中加入注释466 高级报表467 总结469 问与答469 校练场469 练习470 第21 天常见的SQL 错误及解决方法471 目标471 介绍471 常见的错误471 Table or View Does Not Exist471 Invalid Username or Password 472 FROM Keyword Not Specified473 Group Function Is Not Allowed Here 474 Invalid Column Name475 Missing Keyword 475 Missing Left Parenthesis 476 Missing Right Parenthesis 477 Missing Comma478 Column Ambiguously Defined 478 Not Enough Arguments for Function480 Not Enough Values481 Integrity Constraint Violated--Parent Key Not Found 482 Oracle Not Available 483 Inserted Value Too Large for Column 483 TNS:listener Could Not Resolve SID Given in Connect Descriptor 484 Insufficient Privileges During Grants484 Escape Character in Your Statement--Invalid Character 485 Cannot Create Operating System File 485 Common Logical Mistakes485 Using Reserved Words in Your SQL statement 486 The Use of DISTINCT When Selecting Multiple Columns487 Dropping an Unqualified Table 487 The Use of Public Synonyms in a Multischema Database488 The Dreaded Cartesian Product 488 Failure to Enforce File System Structure Conventions 489 Allowing Large Tables to Take Default Storage Parameters489 Placing Objects in the System Tablespace490 Failure to Compress Large Backup Files 491 Failure to Budget System Resources 491 Preventing Problems with Your Data491 Searching for Duplicate Records in Your Database491 总结491 校练场492 练习492 第三周回顾494 附件A 在SQL 中的常见术语495 ALTER DATABASE495 ALTER USER495 BEGIN TRANSACTION 495 CLOSE CURSOR495 COMMIT TRANSACTION496 CREATE DATABASE496 CREATE INDEX496 CREATE PROCEDURE496 CREATE TABLE497 CREATE TRIGGER497 CREATE USER497 CREATE VIEW497 DEALLOCATE CURSOR498 DROP DATABASE498 DROP INDEX498 DROP PROCEDURE498 DROP TABLE498 DROP TRIGGER 499 DROP VIEW499 EXECUTE499 FETCH499 FROM499 GRANT500 GROUP BY500 HAVING500 INTERSECT500 ORDER BY500 ROLLBACK TRANSACTION 500 REVOKE500 SELECT501 SET TRANSACTION501 UNION501 WHERE501 *501 附件B 在第14 天中的C++源代码清单502 附件 C 第14 天中的Delphi 源代码清单521 附件D 参考内容524 书524 Developing Sybase Applications 524 Sybase Developer's Guide 524 Microsoft SQL Server 65 Unleashed, 2E 524 Teach Yourself Delphi in 21 Days 524 Delphi Developer's Guide 524 Delphi Programming Unleashed 525 Essential Oracle 72 525 Developing Personal Oracle7 for Windows 95 Applications 525 Teach Yourself C++ Programming in 21 Days 525 Teach Yourself Tansact-SQL in 21 Days 525 Teach Yourself PL/SQL in 21 Days 525 杂志526 DBMS526 Oracle Magazine526 SQL 的互联网资源526 附件E ACSLL 码表527 附件F 问题与练习答案533 第一天SQL 简介533 问题答案533 练习答案533 
第二天查询— — SELECT 语句的使用533 问题答案533 练习答案534 第三天表达式条件语句与运算535 问题答案535 练习答案535 第四天函数对获得数据的进一步处理536 问题答案536 练习答案537 第五天SQL 中的子句538 问题答案538 练习答案538 第六天表的联接540 问题答案540 练习答案541 第7 天子查询内嵌的SELECT 语句542 问题答案542 练习答案544 第八天操作数据544 问题答案544 练习答案546 第九天创建和操作表546 问题答案546 练习答案548 第10 天 创建视图和索引549 问题答案549 练习答案550 第11 天事务处理控制550 问题答案550 练习答案551 第12 天数据库安全552 问题答案552 练习答案552 第13 天 高级SQL553 问题答案553 练习答案553 第14 天动态使用SQL 554 问题答案554 练习答案554 第15 天对SQL 语句优化以提高其性能555 问题答案555 练习答案555 第16 天用视图从数据字典中获得信息557 问题答案557 练习答案557 第17 天使用SQL 来生成SQL 语句558 问题答案558 练习答案560 第18 天PL/SQL 简介561 问题答案561 练习答案561 第19 天TRANSACT-SQL 简介562 问题答案562 练习答案562 第20 天SQL*PLUS 563 问题答案563 练习答案563 第21 天常见的SQL 错误及解决方法564 问题答案564 练习答案565
