Syntax error, insert "}" to complete Block

mrwc123 2012-05-16 01:56:23

Here is the code that produces the error. It says a '}' is missing. Is the problem in the controller code?
12 replies
manongliangzai 2014-11-08
OP, did you ever solve this?
liujie616 2012-05-17
It's fine.
五哥 2012-05-17
Try giving adminAdd a different name?
_jerrytiger 2012-05-17
The code is fine. Try just running it.
mrwc123 2012-05-16
[Quote=Reply #4:]

Sometimes it wants a space there. Try typing a space after it.
[/Quote]
[Quote=Reply #7:]

The <input> tag doesn't need a closing /. Adding one doesn't hurt either.

Try closing the file and reopening it. That should fix it.
[/Quote]
Neither of these works. And reply #6 says to leave it alone; is it really fine not to change anything?
rainsilence 2012-05-16
The <input> tag doesn't need a closing /. Adding one doesn't hurt either.

Try closing the file and reopening it. That should fix it.
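Both spellings of the tag are indeed valid, which is why the slash is harmless: in plain HTML, input is a void element and takes no closing slash, while the self-closing form is the XHTML spelling, and browsers accept either. A minimal sketch of the two forms; the field names echo the servlet code posted elsewhere in the thread, but the form itself is hypothetical, since the OP's JSP page is not shown:

<form action="admin" method="post">
  <input type="text" name="userName">        <!-- plain HTML: no slash -->
  <input type="password" name="userPw" />    <!-- XHTML style: trailing slash is also accepted -->
</form>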
liujie616 2012-05-16
Just ignore it. It's harmless.
wflyxiaonian 2012-05-16
It's an IDE problem; the code itself is fine.
l_jb0516 2012-05-16
Sometimes it wants a space there. Try typing a space after it.
mrwc123 2012-05-16
[Quote=Reply #2:]

It's in the adminAdd() function.
[/Quote]
public void adminAdd(HttpServletRequest req, HttpServletResponse res)
{
    // Read the form fields posted from the add-admin page.
    String userName = req.getParameter("userName");
    String userPw = req.getParameter("userPw");

    // Insert the new admin record.
    String sql = "insert into t_admin values(?,?)";
    Object[] params = {userName, userPw};
    DB mydb = new DB();
    mydb.doPstm(sql, params);
    mydb.closed();

    // "操作成功" = "operation succeeded"
    req.setAttribute("message", "操作成功");
    req.setAttribute("path", "admin?type=adminMana");

    String targetURL = "/common/success.jsp";
    dispatch(targetURL, req, res);
}
Take a look.
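The braces in the method above are balanced, which fits what several repliers say. This Eclipse message usually means the parser lost brace balance somewhere earlier in the file and flags the next block it cannot close, so the real culprit is often a missing '}' in the method above adminAdd() rather than in adminAdd() itself. A minimal sketch of the pattern, where the class name AdminAction and the neighboring method adminMana() are assumptions for illustration, not taken from the thread:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch, not the OP's actual file.
public class AdminAction {

    public void adminMana(HttpServletRequest req, HttpServletResponse res) {
        if (req.getParameter("type") != null) {
            // ... build the admin list ...
        }
    } // If this closing brace were missing, Eclipse would report
      // 'Syntax error, insert "}" to complete Block' at adminAdd() below,
      // even though adminAdd() itself is perfectly balanced.

    public void adminAdd(HttpServletRequest req, HttpServletResponse res) {
        // ... body as posted above ...
    }
}

The DB helper is not shown in the thread either. Purely as a guess at what doPstm() and closed() might do, with everything below except those two method names being an assumption, it is presumably a thin JDBC wrapper along these lines:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Guessed reconstruction of the poster's DB utility class.
public class DB {
    private Connection conn;

    public DB() {
        try {
            // Hypothetical connection URL and credentials.
            conn = DriverManager.getConnection("jdbc:mysql://localhost/test", "root", "");
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    // Runs an INSERT/UPDATE with positional parameters.
    public void doPstm(String sql, Object[] params) {
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < params.length; i++) {
                ps.setObject(i + 1, params[i]);
            }
            ps.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public void closed() {
        try {
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

Note that if doPstm() swallows a SQLException like this, the insert can fail silently while the page still reports success; that is a separate issue from the syntax error being asked about.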
dqsweet 2012-05-16
It's in the adminAdd() function.
hugo000002020 2012-05-16
Eclipse's compiler often throws up phantom errors like this. Try cutting the entire block of code out, saving, and then pasting it back in; that forces Eclipse to re-parse the file.
Contents Overview 1 Lesson 1: Concepts – Locks and Lock Manager 3 Lesson 2: Concepts – Batch and Transaction 31 Lesson 3: Concepts – Locks and Applications 51 Lesson 4: Information Collection and Analysis 63 Lesson 5: Concepts – Formulating and Implementing Resolution 81 Module 4: Troubleshooting Locking and Blocking Overview At the end of this module, you will be able to:  Discuss how lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.  Describe the various transaction types and how transactions differ from batches.  Describe how to troubleshoot blocking and locking issues.  Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.  Formulate hypothesis to resolve locking and blocking issues. Lesson 1: Concepts – Locks and Lock Manager This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Describe locking architecture used by SQL Server.  Identify the various lock modes used by SQL Server.  Discuss lock compatibility and concurrent access.  Identify different types of lock resources.  Discuss dynamic locking and lock escalation.  Differentiate locks, latches, and other SQL Server internal “locking” mechanism such as spinlocks and other synchronization objects. Recommended Reading  Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  SOX000821700049 – SQL 7.0 How to interpret lock resource Ids  SOX000925700237 – TITLE: Lock escalation in SQL 7.0  SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction Locking Concepts Delivery Tip Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file. Transactions in SQL Server provide the ACID properties: Atomicity A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an “all or nothing” operation. Consistency An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent. Isolation Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert, but it will see only complete changes. Durability After a transaction commits, its effects will persist even if there are system failures. Consistency and isolation are the most important in describing SQL Server’s locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation. Definition of Dependency: A set of transactions can run concurrently if their outputs are disjoint from the union of one another’s input and output sets. For example, if T1 writes some object that is in T2’s input or output set, there is a dependency between T1 and T2. Bad Dependencies These include lost updates, dirty reads, non-repeatable reads, and phantoms. 
ANSI SQL Isolation Levels An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, REPEATABLE READ and SERIALIZABLE isolation levels were synonymous. There was no way to prevent non-repeatable reads while not preventing phantoms. By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example: USE pubs GO SET TRANSACTION ISOLATION LEVEL REPEATABLE READ GO DBCC USEROPTIONS GO Multigranular Locking Multigranular Locking In our example, if one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believe they have exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it has locked exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. Intent lock is the answer to this problem. Intent Lock Intent Lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends on placing shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table. Lock Mode The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table: SELECT * FROM master.dbo.spt_values WHERE type = N'L' However, the req_mode column of master.dbo.syslockinfo has lock mode code that is one less than the code values shown here. For example, value of req_mode = 3 represents the Shared lock mode rather than the Schema Modification lock mode. Lock Compatibility These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and we have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly, they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX. 
Another type of table lock is a schema stability lock (Sch-S) and is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks. Locking Resources Delivery Tip Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides. SQL Server can lock these resources: Item Description DB A database. File A database file Index An entire index of a table. Table An entire table, including all data and indexes. Extent A contiguous group of data pages or index pages. Page An 8-KB data page or index page. Key Row lock within an index. Key-range A key-range. Used to lock ranges between records in a table to prevent phantom insertions or deletions into a set of records. Ensures serializable transactions. RID A Row Identifier. Used to individually lock a single row within a table. Application A lock resource defined by an application. The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that resource is already locked. Some of the resources have “sub-resources.” The followings are sub-resources displayed by the sp_lock output: Database Lock Sub-Resources: Full Database Lock (default) [BULK-OP-DB] – Bulk Operation Lock for Database [BULK-OP-LOG] – Bulk Operation Lock for Log Table Lock Sub-Resources: Full Table Lock (default) [UPD-STATS] – Update statistics Lock [COMPILE] – Compile Lock Index Lock sub-Resources: Full Index Lock (default) [INDEX_ID] – Index ID Lock [INDEX_NAME] – Index Name Lock [BULK_ALLOC] – Bulk Allocation Lock [DEFRAG] – Defragmentation Lock For more information, see also… SOX000821700049 SQL 7.0 How to interpret lock resource Ids Lock Resource Block The resource type has the following resource block format: Resource Type (Code) Content DB (2) Data 1: sub-resource; Data 2: 0; Data 3: 0 File (3) Data 1: File ID; Data 2: 0; Data 3: 0 Index (4) Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID Table (5) Data 1: Object ID; Data 2: sub-resource; Data 3: 0. Page (6) Data 1: Page Number; Data 3: 0. Key (7) Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key Extent (8) Data 1: Extent ID; Data 3: 0. RID (9) Data 1: RID; Data 3: 0. Application (10) Data 1: Application resource name The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode value from this column using the information above, let us assume we have the following value: 0x000705001F83D775010002014F0BEC4E With byte swapping within each field, this can be decoded as: Byte 0: Flag – 0x00 Byte 1: Resource Type – 0x07 (Key) Byte 2-3: DBID – 0x0005 Byte 4-7: ObjectID – 0x 75D7831F (1977058079) Byte 8-9: IndexID – 0x0001 Byte 10-16: Hash Key value – 0x 02014F0BEC4E For more information about how to decode this value, see also… Inside SQL Server 2000, pages 803 and 806. Key Range Locking Key Range Locking To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as WHERE salary BETWEEN 30000 AND 50000 SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level. 
The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap, where columns c1 and c2 where indexed, the hash would contain contributions from c1, c2 and the RID. A key range lock applied to a particular key means that all keys between the value locked and the next value would be locked for all data modification. Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following select was executed in a transaction with isolation level SERIALIZABLE: SELECT * FROM members WHERE first_name between ‘Al’ and ‘Carl’ If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause. Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key Range locking lets SQL Server lock only a much more restrictive area of the table. Impact Key-range locking ensures that these scenarios are SERIALIZABLE:  Range scan query  Singleton fetch of nonexistent row  Delete operation  Insert operation However, the following conditions must be satisfied before key-range locking can occur:  The transaction-isolation level must be set to SERIALIZABLE.  The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data. Key Range Lock Mode Again, the req_mode column of master.dbo.syslockinfo has lock mode code that is one less than the code values shown here. Dynamic Locking When modifying individual rows, SQL Server typically would take row locks to maximize concurrency (for example, OLTP, order-entry application). When scanning larger volumes of data, it would be more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, DSS, data warehouse, reporting). Locking Decision The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a more coarse lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed. Lock Escalation When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction acquired 1250 locks, lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), Lock Manager will attempt escalation again. The Lock Manager looks at the lock memory it is using and if it is more than 40 percent of SQL Server’s allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed. 
It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE=60). Lock escalation may be also done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables. Trace Flag 1211 disables lock escalation. Important Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global and disabling lock escalation could lead to out of memory situations, extremely poor performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, Trace Flag 1200 shows all lock activity so it should not be usable on a production system. For more information, see also… SOX000925700237 “TITLE: SQL 7.0 Lock escalation in SQL 7.0” Lock Timeout Application Lock Timeout An application can set lock timeout for a session with the SET option: SET LOCK_TIMEOUT N where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout. Note After a lock timeout any transaction containing the statement, is rolled back or canceled by SQL Server 2000 (bug#352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222 and if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug#352640 is fixed in hotfix build 8.00.266 whereby a lock timeout will only Internal Lock Timeout At time, internal operations within SQL Server will attempt to acquire locks via lock manager. Typically, these lock requests are issued with “no waiting.” For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock: Timeout events, internal lock timeout typically have a duration value of zero. Lock Duration Lock Mode and Transaction Isolation Level For REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matched the search criteria; if not then the update lock is released, otherwise, we get an exclusive lock and make the modification. 
Consider the following query: use northwind go dbcc traceon(3604, 1200, 1211) -- turn on lock tracing -- and disable escalation go set transaction isolation level repeatable read begin tran update dbo.[order details] set discount = convert (real, discount) where discount = 0.0 exec sp_lock Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that exclusive table lock is not held at the end. Warning Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only. Lock Ownership Most of the locking discussion in this lesson relates to locks owned by “transactions.” In addition to transaction, cursor and session can be owners of locks and they both affect how long locks are held. For every row that is fetched, when SCROLL_LOCKS option is used, regardless of the state of a transaction, a cursor lock is held until the next row is fetched or when the cursor is closed. Locks owned by session are outside the scope of a transaction. The duration of these locks are bounded by the connection and the process will continue to hold these locks until the process disconnects. A typical lock owned by session is the database (DB) lock. Locking – Read Committed Scan Under read committed isolation level, when database pages are scanned, shared locks are held when the page is read and processed. The shared locks are released “behind” the scan and allow other transactions to update rows. It is important to note that the shared lock currently acquired will not be released until shared lock for the next page is successfully acquired (this is commonly know as “crabbing”). If the same pages are scanned again, rows may be modified or deleted by other transactions. Locking – Repeatable Read Scan Under repeatable read isolation level, when database pages are scanned, shared locks are held when the page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions to update rows. If the same pages are scanned again, previously scanned rows will not change but new rows may be added by other transactions. Locking – Serializable Read Scan Under serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on scanned key range. SQL Server continues to hold these shared locks until the end of transaction. Because key range locks are held, not only will this prevent other transactions from modifying the rows, no new rows can be inserted. Prefetch and Isolation Level Prefetch and Locking Behavior The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets. The prefetch feature takes a different approach. 
It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations simultaneously. For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetch data pages are then revisited later to avoid waiting for each individual page read to complete in a serial fashion. This data access path requires that a lock be held between the prefetch request and the row lookup to stabilize the row on the page so it is not to be moved by a page split or clustered key update. For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level. With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the entire transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support to release the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because isolation level can be set at the query level via a lock class, but no lower. Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring you may see a single SPID that holds hundreds of Shared KEY or PAG locks even though the connection’s isolation level is READ COMMITTED. Isolation level can be determined from DBCC PSS output. For details about this behavior see “SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction”. Other Locking Mechanism Lock manager does not manage latches and spinlocks. Latches Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activities via DBCC SQLPERF(‘WAITSTATS’) command. Spinlock A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multi-processor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that will check to see if the lock is available and if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a “sleep” (wait) state for a pre-determined amount of time. When it wakes, hopefully, the lock is free and available. 
If not, the loop starts again and it is terminated only when the lock is acquired. The reason for implementing a spinlock is that it is probably less costly to “spin” for a short time rather than yielding the processor. Yielding the processor will force an expensive context switch where:  The old thread’s state must be saved  The new thread’s state must be reloaded  The data stored in the L1 and L2 cache are useless to the processor On a single-processor computer, the loop is not useful because no other thread can be running and thus, no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately. Lesson 2: Concepts – Batch and Transaction This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Review batch processing and error checking.  Review explicit, implicit and autocommit transactions and transaction nesting level.  Discuss how commit and rollback transaction done in stored procedure and trigger affects transaction nesting level.  Discuss various transaction isolation level and their impact on locking.  Discuss the difference between aborting a statement, a transaction, and a batch.  Describe how @@error, @@transcount, and @@rowcount can be used for error checking and handling. Recommended Reading  Charter 12 “Transactions and Triggers”, Inside SQL Server 2000 by Kalen Delaney Batch Definition SQL Profiler Statements and Batches To help further your understanding of what is a batch and what is a statement, you can use SQL Profiler to study the definition of batch and statement.  Try This: Using SQL Profiler to Analyze Batch 1. Log on to a server with Query Analyzer 2. Startup the SQL Profiler against the same server 3. Start a trace using the “StandardSQLProfiler” template 4. Execute the following using Query Analyzer: SELECT @@VERSION SELECT @@SPID The ‘SQL:BatchCompleted’ event is captured by the trace. It shows both the statements as a single batch. 5. Now execute the following using Query Analyzer {call sp_who()} What shows up? The ‘RPC:Completed’ with the sp_who information. RPC is simply another entry point to the SQL Server to call stored procedures with native data types. This allows one to avoid parsing. The ‘RPC:Completed’ event should be considered the same as a batch for the purposes of this discussion. Stop the current trace and start a new trace using the “SQLProfilerTSQL_SPs” template. Issue the same command as outlines in step 5 above. Looking at the output, not only can you see the batch markers but each statement as executed within the batch. Autocommit, Explicit, and Implicit Transaction Autocommit Transaction Mode (Default) Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transaction mode is set on. 
When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode. Explicit Transaction Mode An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement. Implicit Transaction Mode SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure ‘user options’ 2. (Actually, the bit mask 0x2 must be turned on for the user option so you might have to perform an ‘OR’ operation with the existing user option value.) See SQL Server 2000 Books Online on how to turn on implicit transaction under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm). Transaction Nesting Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrements @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed. Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction. If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transaction, it rolls back all the nested transactions, including the outermost transaction. The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one. Each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure if you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more. If @@TRANCOUNT is 0 you are not in a transaction. You can also find the transaction nesting level by checking the sysprocess.open_tran column. See SQL Server 2000 Books Online topic “Nesting Transactions” (acdata.chm::/ac_8_md_06_66nq.htm) for more information. Statement, Transaction, and Batch Abort One batch can have many statements and one transaction can have multiple statements, also. One transaction can span multiple batches and one batch can have multiple transactions. Statement Abort Currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure. 
Let us look closely at the following trigger: CREATE TRIGGER TRG8134 ON TBL8134 AFTER INSERT AS BEGIN SELECT 1/0 SELECT 'Next command in trigger' END To fire the INSERT trigger, the batch could be as simple as ‘INSERT INTO TBL8134 VALUES(1)’. However, the trigger contains two statements that must be executed as part of the batch to satisfy the clients insert request. When the ‘SELECT 1/0’ causes the divide by zero error, a statement abort is issued for the ‘SELECT 1/0’ statement. Batch and Transaction Abort On SQL Server 2000 (and SQL Server 7.0) whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transactional abort. Thus, in the example the statement abort for ‘select 1/0’ promotion results in an entire batch abort. No further statements in the trigger or batch will be executed and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort. However, the rest of the statements within the trigger are executed. This trigger could return ‘Next command in trigger’ as a result set. Once the trigger completes the batch abort promotion takes effect. Conversely, submitting a similar set of statements in a standalone batch can result in different behavior. SELECT 1/0 SELECT 'Next command in batch' Not considering the set option possibilities, a divide by zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided and subsequent SELECT statement can execute. The programmer should add an “if @@ERROR” check immediately after the ‘select 1/0’ to T-SQL execution to control the flow correctly. Aborting and Set Options ARITHABORT If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation when SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error. XACT_ABORT When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example: CREATE TABLE t1 (a int PRIMARY KEY) CREATE TABLE t2 (a int REFERENCES t1(a)) GO INSERT INTO t1 VALUES (1) INSERT INTO t1 VALUES (3) INSERT INTO t1 VALUES (4) INSERT INTO t1 VALUES (6) GO SET XACT_ABORT OFF GO BEGIN TRAN INSERT INTO t2 VALUES (1) INSERT INTO t2 VALUES (2) /* Foreign key error */ INSERT INTO t2 VALUES (3) COMMIT TRAN SELECT 'Continue running batch 1...' GO SET XACT_ABORT ON GO BEGIN TRAN INSERT INTO t2 VALUES (4) INSERT INTO t2 VALUES (5) /* Foreign key error */ INSERT INTO t2 VALUES (6) COMMIT TRAN SELECT 'Continue running batch 2...' GO /* Select shows only keys 1 and 3 added. Key 2 insert failed and was rolled back, but XACT_ABORT was OFF and rest of transaction succeeded. Key 5 insert error with XACT_ABORT ON caused all of the second transaction to roll back. Also note that 'Continue running batch 2...' 
is not Returned to indicate that the batch is aborted. */ SELECT * FROM t2 GO DROP TABLE t2 DROP TABLE t1 GO Compile and Run-time Errors Compile Errors Compile errors are encountered during syntax checks, security checks, and other general operations to prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to immediate abort. The statement is not run and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed. Optimization Errors Optimization errors would include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. Example: “too many tables referenced in the query” error is reported because a “work table” was added to the plan. Runtime Errors Runtime errors are those that are encountered during the execution of the query. Consider the following batch: SELECT * FROM pubs.dbo.titles UPDATE pubs.dbo.authors SET au_lname = au_lname SELECT * FROM foo UPDATE pubs.dbo.authors SET au_lname = au_lname If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution and place a “placeholder” in the query’s execution. The object referenced by the placeholder is resolved until the query is executed. In our example, the execution of the statement “SELECT * FROM foo” will trigger another compile process to resolve the name again. This time, error message 208 is returned. Error: 208, Level 16, State 1, Line 1 Invalid object name 'foo'. Message 208 can be encountered as a runtime or compile error depending on whether the Deferred Name Resolution feature is available. In SQL Server 6.5 this would be considered a compile error and on SQL Server 2000 (and SQL Server7.0) as a runtime error due to Deferred Name Resolution. In the following example, if a trigger referenced authors2, the error is detected as SQL Server attempts to execute the trigger. However, under SQL Server 6.5 the create trigger statement fails because authors2 does not exist at compile time. When errors are encountered in a trigger, generally, the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in pubs database: Create table tblTest(iID int) go create trigger trgInsert on tblTest for INSERT as begin select * from authors select * from authors2 select * from titles end go begin tran select 'Before' insert into tblTest values(1) select 'After' go select @@TRANCOUNT go When run in a batch, the statement and the batch are aborted but the transaction remains active. The follow script illustrates this: begin tran select 'Before' select * from authors2 select 'After' go select @@TRANCOUNT go One other factor in a compile versus runtime error is implicit data type conversions. If you were to run the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0): create table tblData(dtData datetime) go select 1 insert into tblData values(12/13/99) go On SQL Server 6.5, you get an error before execution of the batch begins so no statements are executed and the batch is aborted. 
Error: 206, Level 16, State 2, Line 2 Operand type clash: int is incompatible with datetime On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000 implicit data type conversion treats this as integer division. The integer division of 12/13/99 is 0, so the default date and time value is inserted, no error returned. To correct the problem on either version is to wrap the date string with quotes. See Bug #56118 (sqlbug_70) for more details about this situation. Another example of a runtime error is a 605 message. Error: 605 Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'. A 605 error is always a runtime error. However, depending on the transaction isolation level, (e.g. using the NOLOCK lock hint), established by the SPID the handling of the error can vary. Specifically, a 605 error is considered an ACCESS error. Errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine proper handling based on information or fatal error level. Transaction Error Checking Not all errors cause transactions to automatically rollback. Although it is difficult to determine exactly which errors will rollback transactions and which errors will not, the main idea here is that programmers must perform error checking and handle errors appropriately. Error Handling Raiserror Details Raiserror seems to be a source of confusion but is really rather simple. Raiserror with severity levels of 20 or higher will terminate the connection. Of course, when the connection is terminated a full rollback of any open transaction will immediately be instantiated by the SQL Server (except distributed transaction with DTC involved). Severity levels lower than 20 will simply result in the error message being returned to the client. They do not affect the transaction scope of the connection. Consider the following batch: use pubs begin tran update authors set au_lname = 'smith' raiserror ('This is bad', 19, 1) with log select @@trancount With severity set at 19, the 'select @@trancount' will be executed after the raiserror statement and will return a value of 1. If severity is changed to 20, then the select statement will not run and the connection is broken. Important Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code. Transactions and Triggers (1 of 2) Basic behavior assumes the implicit transactions setting is set to OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, rollback the action, and add an audit table entry. Logically, the insert to the audit table cannot take place before the ROLLBACK action and you would not want to build in the audit table insert into every applications error handler that violated the business rule of the trigger. For more information, see also… SQL Server 2000 Books Online topic “Rollbacks in stored procedure and triggers“ (acdata.chm::/ac_8_md_06_4qcz.htm) IMPLICIT_TRANSACTIONS ON Behavior The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field. Read only versions of the rows contain the code ‘RO’ and read/write versions use ‘RW.’ Whenever someone tries to delete a row with a code ‘RO’ the trigger issues the rollback and logs an audit table entry. 
However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on and only the current trigger would execute and then terminate the batch. However, a second client with implicit transactions on could issue the same delete and the secondary trigger would fire. You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should be allowed but because the transaction scope was restarted because of the implicit transactions setting, they did. Transactions and Triggers (2 of 2) It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set the @@ERROR is using a ‘raiserror’ as the last execution statement in the last trigger to execute. If you modify the example, this following RAISERROR statement will set @@ERROR to 50000: CREATE TRIGGER trgTest on tblTest for INSERT AS BEGIN ROLLBACK INSERT INTO tblAudit VALUES (1) RAISERROR('This is bad', 14,1) END However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger the @@ERROR remains 0. Carrying Forward an Active/Open Transaction It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN or by setting implicit transaction on and doing INSERT, UPDATE, or DELETE. Warning It is never recommended that a trigger call BEGIN TRANSACTION. By doing this you increment the transaction count. Invalid code logic, not calling commit transaction, can lead to a situation where the transaction count remains elevated upon exit of the trigger. Transaction Count The behavior is better explained by understanding how the server works. It does not matter whether you are in a transaction, when a modification takes place the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter. Thus, when we enter a trigger, the transaction count is 1. At the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (insert). In our example, if the connection was already in a transaction and called the second INSERT, since implicit transaction is ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1. However, the value does not return to 0 so the transaction remains open/active. Subsequent triggers are only fired if the transaction count at the end of the trigger remains greater than or equal to 1. The key to continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution. 
If the trigger that performs a rollback has done an explicit begin transaction or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute. Warning Forcing the transaction count after issuing a rollback is dangerous because you can easily loose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a return statement to maintain consistent behavior between a connection with and without implicit transaction settings. This will force the trigger(s) and batch to terminate immediately. One of the methods of dealing with this issue is to run ‘SET IMPLICIT_TRANSACTIONS OFF’ as the first statement of any trigger. Other methods may entails checking @@TRANCOUNT at the end of the trigger and continue to COMMIT the transaction as long as @@TRANCOUNT is greater than 1. Examples The following examples are based on this table: create table tbl50000Insert (iID int NOT NULL) go Note If more than one trigger is used, to guarantee the trigger firing sequence, the sp_settriggerorder command should be used. This command is omitted in these examples to simplify the complexity of the statements. First Example In the first example, the second trigger was never fired and the batch, starting with the insert statement, was aborted. Thus, the print statement was never issued. print('Trigger issues rollback - cancels batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT' end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select 'In Trigger2' select 'Trigger 2 Inserted', * from inserted end go insert into tbl50000Insert values(1) print('---------------------- In same batch') select * from tbl50000Insert go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Second Example The next example shows that since a new transaction is started, the second trigger will be fired and the print statement in the batch will be executed. Note that the insert is rolled back. print('Trigger issues rollback - increases tran count to continue batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran begin tran end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select 'In Trigger2' select 'Trigger 2 Inserted', * from inserted end go insert into tbl50000Insert values(2) print('---------------------- In same batch') select * from tbl50000Insert go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Third Example In the third example, the raiserror statement is used to set the @@ERROR value and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run. 
print('Trigger issues rollback - uses raiserror to set @@ERROR') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran begin tran -- Increase @@trancount to allow -- batch to continue select @@trancount as ‘Trancount’ raiserror('This is from the trigger', 14,1) end go insert into tbl50000Insert values(3) select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' go -- Cleanup drop trigger trg50000Insert go delete from tbl50000Insert Fourth Example For the fourth example, a second trigger is added to illustrate the fact that @@ERROR value set in the first trigger will not be seen in the second trigger nor will it show up in the batch after the second trigger is fired. print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback begin tran -- Increase @@trancount to -- allow batch to continue select @@TRANCOUNT as 'Trancount' raiserror('This is from the trigger', 14,1) end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' end go insert into tbl50000Insert values(4) select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Lesson 3: Concepts – Locks and Applications This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Explain how lock hints are used and their impact.  Discuss the effect on locking when an application uses Microsoft Transaction Server.  Identify the different kinds of deadlocks including distributed deadlock. Recommended Reading  Charter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  Charter 16 “Query Tuning”, Inside SQL Server 2000 by Kalen Delaney Q239753 – Deadlock Situation Not Detected by SQL Server Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as victim Locking Hints UPDLOCK If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it. READPAST READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values of 1, 2, 3, 4, and 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields values 1, 2, 4, 5. Tip READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks. This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being thrown into a table and they should be serviced in approximate insertion order but they do not have to be completely FIFO. If you have 4 worker threads consuming work items from the queue they could each pick up a record using read past locking and then delete the entry from the queue and commit when they're done. If they fail, they could rollback, leaving the entry on the queue for the next worker thread to pick up. 
Caution The READPAST hint is not compatible with HOLDLOCK.  Try This: Using Locking Hints 1. Open a Query Window and connect to the pubs database. 2. Execute the following statements (--Conn 1 is optional to help you keep track of each connection): BEGIN TRANSACTION -- Conn 1 UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032' 3. Open a second connection and execute the following statements: SELECT @@lock_timeout -- Conn 2 GO SELECT * FROM titles SELECT * FROM authors 4. Open a third connection and execute the following statements: SET LOCK_TIMEOUT 0 -- Conn 3 SELECT * FROM titles SELECT * FROM authors 5. Open a fourth connection and execute the following statement: SELECT * FROM titles (READPAST) -- Conn 4 WHERE title_ID < 'C' SELECT * FROM authors How many records were returned? 3 6. Open a fifth connection and execute the following statement: SELECT * FROM titles (NOLOCK) -- Conn 5 WHERE title_ID 0 the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in that 20 seconds, the lock manager no longer checks for deadlocks at each block and detection again only happens every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process. Note Please note the distinction between application lock and other locks’ deadlock detection. For application lock, we do not rollback the transaction of the deadlock victim but simply return a -3 to sp_getapplock, which the application needs to handle itself. Deadlock Resolution How is a deadlock resolved? SQL Server picks one of the connections as a deadlock victim. The victim is chosen based on either which is the least expensive transaction (calculated using the number and size of the log records) to roll back or in which process “SET DEADLOCK_PRIORITY LOW” is specified. The victim’s transaction is rolled back, held locks are released, and SQL Server sends error 1205 to the victim’s client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue. Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction. Symptoms of deadlocking Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlock occurring are:  Clients complain of mysteriously canceled queries when using certain features of an application.  May be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur. Triggers and Deadlock Triggers promote the deadlock priority of the SPID for the life of the trigger execution when the DEADLOCK PRIORITY is not set to low. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim. Warning Bug 235794 is filed against SQL Server 2000 where a blocked SPID that is not a participant of a deadlock may incorrectly be chosen as a deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. 
See KB article Q288752: “Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim” for more information. Distributed Deadlock – Scenario 1 Distributed Deadlocks The term distributed deadlock is ambiguous. There are many types of distributed deadlocks. Scenario 1 Client application opens connection A, begins a transaction, acquires some locks, opens connection B, connection B gets blocked by A but the application is designed to not commit A’s transaction until B completes. Note SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed in scenario #4 in “Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems”. Distributed Deadlock – Scenario 2 Scenario 2 Distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid 62 is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes his work, but spid 61 is blocked by 62 who is blocked by 60. This scenario is described in article “Q239753 - Deadlock Situation Not Detected by SQL Server.” Note SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock. The diagram in the slide illustrates this situation. Resources locked by a spid are below that spid (in a box). Arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires. A circle represents a transaction; spids in the same transaction are shown in the same circle. Distributed Deadlock – Scenario 3 Scenario 3 Distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by Spid 60. Note No version of SQL Server is currently designed to detect this distributed deadlock. Lesson 4: Information Collection and Analysis This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Identify specific information needed for troubleshooting issues.  Locate and collect information needed for troubleshooting issues.  Analyze output of DBCC Inputbuffer, DBCC PSS, and DBCC Page commands.  Review information collected from master.dbo.sysprocesses table.  Review information collected from master.dbo.syslockinfo table.  Review output of sp_who, sp_who2, sp_lock.  Analyze Profiler log for query usage pattern.  Review output of trace flags to help troubleshoot deadlocks. Recommended Reading Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0 Q271509 - INF: How to Monitor SQL Server 2000 Blocking Q251004 - How to Monitor SQL Server 7.0 Blocking Q224453 - Understanding and Resolving SQL Server 7.0 Blocking Problem Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler Locking and Blocking  Try This: Examine Blocked Processes 1. Open a Query Window and connect to the pubs database. 
Lesson 4: Information Collection and Analysis
This lesson outlines how to collect and analyze the information needed to troubleshoot locking and blocking issues.

What You Will Learn
After completing this lesson, you will be able to:
 Identify specific information needed for troubleshooting issues.
 Locate and collect information needed for troubleshooting issues.
 Analyze output of the DBCC INPUTBUFFER, DBCC PSS, and DBCC PAGE commands.
 Review information collected from the master.dbo.sysprocesses table.
 Review information collected from the master.dbo.syslockinfo table.
 Review output of sp_who, sp_who2, and sp_lock.
 Analyze a Profiler log for query usage patterns.
 Review output of trace flags to help troubleshoot deadlocks.

Recommended Reading
 Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
 Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0
 Q271509 - INF: How to Monitor SQL Server 2000 Blocking
 Q251004 - INF: How to Monitor SQL Server 7.0 Blocking
 Q224453 - INF: Understanding and Resolving SQL Server 7.0 Blocking Problems
 Q282749 - BUG: Deadlock Information Reported with SQL Server 2000 Profiler

Locking and Blocking
 Try This: Examine Blocked Processes
1. Open a Query Window, connect to the pubs database, and execute the following statements:
BEGIN TRAN -- connection 1
UPDATE titles SET price = price + 1
2. Open another connection and execute the following statement:
SELECT * FROM titles -- connection 2
3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3)
4. In the same connection, execute the following:
SELECT spid, cmd, waittype
FROM master..sysprocesses
WHERE waittype <> 0 -- connection 3
5. Do not close any of the connections!
What was the wait type of the blocked process?

 Try This: Look at Locks Held
This exercise assumes all your connections are still open from the previous exercise.
• Execute sp_lock -- Connection 3
What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction.

Collecting Information
See Module 2 for more about how to gather this information using various tools.

Recognizing Blocking Problems
How to Recognize Blocking Problems
 Users complain about poor performance at a certain time of day, or after a certain number of users connect.
 SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column (see the query sketch after this list).
 More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
 Possible …
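As a quick starting point for the checks listed above, a small diagnostic query along the following lines (a sketch, not one of the module's scripts) lists each blocked spid together with its blocker and wait information:

-- One row per blocked spid: who blocks it and what it is waiting on.
SELECT s.spid,
       s.blocked AS blocking_spid,
       s.waittype,
       s.waittime,
       s.lastwaittype,
       s.cmd
FROM master..sysprocesses s
WHERE s.blocked <> 0
ORDER BY s.waittime DESC

-- For each blocking_spid returned, inspect its last batch, for example:
-- DBCC INPUTBUFFER(53) -- substitute the blocking spid

Long blocking chains show up as rows whose blocking_spid itself appears in the spid column of another row; large waittime values point to the most severe waits.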