ls: cannot access Input/output error

zhangqiang1981 2011-11-30 09:58:17
The system I originally had installed was CentOS 5.5. A few days ago I downloaded Red Hat Enterprise 6.0 and installed it, and after the install the trouble started. I have a hard drive divided into three partitions; the output of fdisk -l is as follows:
Device Boot Start End Blocks Id System
/dev/sdb2 1 59525 30000568+ 83 Linux
/dev/sdb3 59526 99209 20000736 83 Linux
/dev/sdb4 99210 158816 30041928 83 Linux

/dev/sdb2 can be mounted and works normally; reading, writing, and deleting files on it all work. /dev/sdb3 can also be mounted, but some files show the following:
ls: cannot access /disk2/3/桌面: Input/output error
ls: cannot access /disk2/3/mathematic1: Input/output error
ls: cannot access /disk2/3/CD3: Input/output error
ls: cannot access /disk2/3/opera-11.52-1100.i386.linux.tar.bz2: Input/output error
/dev/sdb4 cannot be mounted at all: mount: you must specify the filesystem type
Afterwards I reinstalled CentOS 5.5, and none of the above problems occur with /dev/sdb2, /dev/sdb3, or /dev/sdb4. I have tried many approaches without success. Please help.
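When mount reports "you must specify the filesystem type", the kernel could not read a recognizable filesystem signature from the partition. A minimal way to check what the kernel actually sees, assuming the device names above:

blkid /dev/sdb4                   # print the filesystem signature, if one is readable
file -s /dev/sdb4                 # same idea, via libmagic
dmesg | grep -i sdb               # look for read errors logged by the kernel
cat /proc/partitions              # disk and partition sizes as the kernel sees them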
6 replies
向良玉 2011-12-03
Just replace the hard drive.
askandstudy 2011-12-03
Try repairing it with e2fsck?
http://www.jb51.net/os/RedHat/1328.html
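For reference, a repair attempt might look like the sketch below. The partition must be unmounted first, and e2fsck on a failing disk can make things worse, so copy off anything still readable beforehand. The backup-superblock location 32768 assumes a 4 KiB block size (for 1 KiB blocks use 8193; mke2fs -n /dev/sdb3 lists the actual locations without touching the disk):

umount /disk2/3               # the filesystem must not be mounted
e2fsck -fv /dev/sdb3          # force a full check, verbose output
e2fsck -b 32768 /dev/sdb3     # fall back to a backup superblock if the primary is unreadable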
zhangqiang1981 2011-12-02
This drive was unusable for a while. Later I dug it out of the bottom of a box and worked on it, carving off a sizeable portion that is no longer used; what remains usable amounts to the current 31 GB.
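Clipping a drive's visible capacity like this is normally done by setting a Host Protected Area (HPA) on the drive itself. Whether a system then sees 31 GB or the full 80 GB depends on whether its disk driver honors or ignores the HPA, which would explain why CentOS 5.5 sees 80 GB while Windows and RHEL 6 see 31 GB. It also fits the symptoms: going by the fdisk output above, /dev/sdb2 ends near 30.7 GB and fits inside the visible 31 GB, /dev/sdb3 spans roughly 30.7 to 51 GB and is only partially reachable, and /dev/sdb4 lies entirely beyond the visible end. A sketch of how to inspect and, if desired, remove the HPA with hdparm; the sector count below is only an illustrative value for an 80 GB drive, and setting it wrongly can make data unreachable:

hdparm -N /dev/sdb                # prints "max sectors = visible/native"; the two differ if an HPA is set
hdparm -N p156301488 /dev/sdb     # permanently restore the native maximum (example count, check yours first)

If the drive is handled by libata, booting with the kernel parameter libata.ignore_hpa=1 should make the kernel ignore the HPA without modifying the drive.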
zhangqiang1981 2011-12-02
#
# /etc/fstab
# Created by anaconda on Fri Nov 25 15:50:56 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=2badf589-d1f8-4540-811d-49692198edc9 / ext3 defaults 1 1
UUID=b19e924d-9f2a-4372-bcc3-192f68738cb6 /boot ext4 defaults 1 2
UUID=54e849a4-6970-4f03-82fc-82dc568aeaef swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0


This drive is 80 GB with an IDE interface. Under Windows it is recognized as only 31 GB; by chance I attached it under CentOS 5.5 and found it recognized as the full 80 GB and usable. Under Red Hat Enterprise 6.0 it is again recognized as only 31 GB, with the symptoms described in the post. Under CentOS 5.5 the layout was likewise / ext3 and /boot ext4.
I have tried testdisk, dumpe2fs, mke2fs, and so on, with no success; the same under CentOS 5.5.
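One way to confirm that the partitions extend past what the kernel can see is to compare the filesystem size recorded in the superblock against the device size the kernel reports; a sketch, assuming the devices above:

dumpe2fs -h /dev/sdb3 | grep -Ei 'block (count|size)'   # filesystem size according to its own superblock
blockdev --getsz /dev/sdb                               # whole-disk size in 512-byte sectors as the kernel sees it

If block count times block size reaches past the end of the visible device, I/O errors on part of the files are exactly what you would expect.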
askandstudy 2011-12-01
When the problem occurred, did you take a look at the /etc/fstab file?
ljc007 2011-12-01
Which methods have you tried?
包含如下操作系统版本 FreeBSD Linux Solaris Windows 分别对应如下目录 MegaCLI for DOS MegaCLI for Linux MegaCLI for Solaris MegaCLI for FreeBSD MegaCLI for Windows ********************************************* LSI Corp. MegaRAID MegaCLI Release ********************************************* Release Date: 01/20/14 ======================== Supported Controllers ================== MegaRAID SAS 9270-8i MegaRAID SAS 9271-4i MegaRAID SAS 9271-8i MegaRAID SAS 9271-8iCC MegaRAID SAS 9286-8e MegaRAID SAS 9286CV-8e MegaRAID SAS 9286CV-8eCC MegaRAID SAS 9265-8i MegaRAID SAS 9285-8e MegaRAID SAS 9240-4i MegaRAID SAS 9240-8i MegaRAID SAS 9260-4i MegaRAID SAS 9260CV-4i MegaRAID SAS 9260-8i MegaRAID SAS 9260CV-8i MegaRAID SAS 9260DE-8i MegaRAID SAS 9261-8i MegaRAID SAS 9280-4i4e MegaRAID SAS 9280-8e MegaRAID SAS 9280DE-8e MegaRAID SAS 9280-24i4e MegaRAID SAS 9280-16i4e MegaRAID SAS 9260-16i MegaRAID SAS 9266-4i MegaRAID SAS 9266-8i MegaRAID SAS 9285CV-8e MegaRAID SAS 8704ELP MegaRAID SAS 8704EM2 MegaRAID SAS 8708ELP MegaRAID SAS 8708EM2 MegaRAID SAS 8880EM2 MegaRAID SAS 8888ELP MegaRAID SAS 8308ELP* MegaRAID SAS 8344ELP* MegaRAID SAS 84016E* MegaRAID SAS 8408E* MegaRAID SAS 8480E* MegaRAID SATA 300-8ELP* *These older controllers should work but have not been tested. Component: ========= SAS MegaRAID MegaCLI Release Date: 01/20/14 Version Numbers: MegaCLI =============== =========== Current Version 8.07.14 Previous Version 8.07.07 Contents: ========= This package contains MegaCLI for the following OSes: DOS FreeBSD Linux Solaris Windows Use the MegaCLI components from the folder that matches your OS. Enhancements and Bug Fixes ========================== SCGCQ00393585 (DFCT) - VD creation from MegaCli fails on Solaris Sparc 10u9 operating system. SCGCQ00413883 (DFCT) - "megacli -version -pd -a0" Segmentation Faults if PDs are missing SCGCQ00445356 (CSET) - Megacli crashes after invoking any command in SGI system with one 9280-8e and 2 quad port qlogic FC cards. SCGCQ
Contents Overview 1 Lesson 1: Concepts – Locks and Lock Manager 3 Lesson 2: Concepts – Batch and Transaction 31 Lesson 3: Concepts – Locks and Applications 51 Lesson 4: Information Collection and Analysis 63 Lesson 5: Concepts – Formulating and Implementing Resolution 81 Module 4: Troubleshooting Locking and Blocking Overview At the end of this module, you will be able to:  Discuss how lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.  Describe the various transaction types and how transactions differ from batches.  Describe how to troubleshoot blocking and locking issues.  Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.  Formulate hypothesis to resolve locking and blocking issues. Lesson 1: Concepts – Locks and Lock Manager This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Describe locking architecture used by SQL Server.  Identify the various lock modes used by SQL Server.  Discuss lock compatibility and concurrent access.  Identify different types of lock resources.  Discuss dynamic locking and lock escalation.  Differentiate locks, latches, and other SQL Server internal “locking” mechanism such as spinlocks and other synchronization objects. Recommended Reading  Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  SOX000821700049 – SQL 7.0 How to interpret lock resource Ids  SOX000925700237 – TITLE: Lock escalation in SQL 7.0  SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction Locking Concepts Delivery Tip Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file. Transactions in SQL Server provide the ACID properties: Atomicity A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an “all or nothing” operation. Consistency An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent. Isolation Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert, but it will see only complete changes. Durability After a transaction commits, its effects will persist even if there are system failures. Consistency and isolation are the most important in describing SQL Server’s locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation. Definition of Dependency: A set of transactions can run concurrently if their outputs are disjoint from the union of one another’s input and output sets. For example, if T1 writes some object that is in T2’s input or output set, there is a dependency between T1 and T2. Bad Dependencies These include lost updates, dirty reads, non-repeatable reads, and phantoms. 
ANSI SQL Isolation Levels An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, REPEATABLE READ and SERIALIZABLE isolation levels were synonymous. There was no way to prevent non-repeatable reads while not preventing phantoms. By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example: USE pubs GO SET TRANSACTION ISOLATION LEVEL REPEATABLE READ GO DBCC USEROPTIONS GO Multigranular Locking Multigranular Locking In our example, if one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believe they have exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it has locked exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. Intent lock is the answer to this problem. Intent Lock Intent Lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends on placing shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table. Lock Mode The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table: SELECT * FROM master.dbo.spt_values WHERE type = N'L' However, the req_mode column of master.dbo.syslockinfo has lock mode code that is one less than the code values shown here. For example, value of req_mode = 3 represents the Shared lock mode rather than the Schema Modification lock mode. Lock Compatibility These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and we have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly, they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX. 
Another type of table lock is a schema stability lock (Sch-S) and is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks. Locking Resources Delivery Tip Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides. SQL Server can lock these resources: Item Description DB A database. File A database file Index An entire index of a table. Table An entire table, including all data and indexes. Extent A contiguous group of data pages or index pages. Page An 8-KB data page or index page. Key Row lock within an index. Key-range A key-range. Used to lock ranges between records in a table to prevent phantom insertions or deletions into a set of records. Ensures serializable transactions. RID A Row Identifier. Used to individually lock a single row within a table. Application A lock resource defined by an application. The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that resource is already locked. Some of the resources have “sub-resources.” The followings are sub-resources displayed by the sp_lock output: Database Lock Sub-Resources: Full Database Lock (default) [BULK-OP-DB] – Bulk Operation Lock for Database [BULK-OP-LOG] – Bulk Operation Lock for Log Table Lock Sub-Resources: Full Table Lock (default) [UPD-STATS] – Update statistics Lock [COMPILE] – Compile Lock Index Lock sub-Resources: Full Index Lock (default) [INDEX_ID] – Index ID Lock [INDEX_NAME] – Index Name Lock [BULK_ALLOC] – Bulk Allocation Lock [DEFRAG] – Defragmentation Lock For more information, see also… SOX000821700049 SQL 7.0 How to interpret lock resource Ids Lock Resource Block The resource type has the following resource block format: Resource Type (Code) Content DB (2) Data 1: sub-resource; Data 2: 0; Data 3: 0 File (3) Data 1: File ID; Data 2: 0; Data 3: 0 Index (4) Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID Table (5) Data 1: Object ID; Data 2: sub-resource; Data 3: 0. Page (6) Data 1: Page Number; Data 3: 0. Key (7) Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key Extent (8) Data 1: Extent ID; Data 3: 0. RID (9) Data 1: RID; Data 3: 0. Application (10) Data 1: Application resource name The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode value from this column using the information above, let us assume we have the following value: 0x000705001F83D775010002014F0BEC4E With byte swapping within each field, this can be decoded as: Byte 0: Flag – 0x00 Byte 1: Resource Type – 0x07 (Key) Byte 2-3: DBID – 0x0005 Byte 4-7: ObjectID – 0x 75D7831F (1977058079) Byte 8-9: IndexID – 0x0001 Byte 10-16: Hash Key value – 0x 02014F0BEC4E For more information about how to decode this value, see also… Inside SQL Server 2000, pages 803 and 806. Key Range Locking Key Range Locking To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as WHERE salary BETWEEN 30000 AND 50000 SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level. 
The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap, where columns c1 and c2 where indexed, the hash would contain contributions from c1, c2 and the RID. A key range lock applied to a particular key means that all keys between the value locked and the next value would be locked for all data modification. Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following select was executed in a transaction with isolation level SERIALIZABLE: SELECT * FROM members WHERE first_name between ‘Al’ and ‘Carl’ If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause. Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key Range locking lets SQL Server lock only a much more restrictive area of the table. Impact Key-range locking ensures that these scenarios are SERIALIZABLE:  Range scan query  Singleton fetch of nonexistent row  Delete operation  Insert operation However, the following conditions must be satisfied before key-range locking can occur:  The transaction-isolation level must be set to SERIALIZABLE.  The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data. Key Range Lock Mode Again, the req_mode column of master.dbo.syslockinfo has lock mode code that is one less than the code values shown here. Dynamic Locking When modifying individual rows, SQL Server typically would take row locks to maximize concurrency (for example, OLTP, order-entry application). When scanning larger volumes of data, it would be more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, DSS, data warehouse, reporting). Locking Decision The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a more coarse lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed. Lock Escalation When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction acquired 1250 locks, lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), Lock Manager will attempt escalation again. The Lock Manager looks at the lock memory it is using and if it is more than 40 percent of SQL Server’s allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed. 
It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE=60). Lock escalation may be also done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables. Trace Flag 1211 disables lock escalation. Important Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global and disabling lock escalation could lead to out of memory situations, extremely poor performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, Trace Flag 1200 shows all lock activity so it should not be usable on a production system. For more information, see also… SOX000925700237 “TITLE: SQL 7.0 Lock escalation in SQL 7.0” Lock Timeout Application Lock Timeout An application can set lock timeout for a session with the SET option: SET LOCK_TIMEOUT N where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout. Note After a lock timeout any transaction containing the statement, is rolled back or canceled by SQL Server 2000 (bug#352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222 and if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug#352640 is fixed in hotfix build 8.00.266 whereby a lock timeout will only Internal Lock Timeout At time, internal operations within SQL Server will attempt to acquire locks via lock manager. Typically, these lock requests are issued with “no waiting.” For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock: Timeout events, internal lock timeout typically have a duration value of zero. Lock Duration Lock Mode and Transaction Isolation Level For REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matched the search criteria; if not then the update lock is released, otherwise, we get an exclusive lock and make the modification. 
Consider the following query: use northwind go dbcc traceon(3604, 1200, 1211) -- turn on lock tracing -- and disable escalation go set transaction isolation level repeatable read begin tran update dbo.[order details] set discount = convert (real, discount) where discount = 0.0 exec sp_lock Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that exclusive table lock is not held at the end. Warning Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only. Lock Ownership Most of the locking discussion in this lesson relates to locks owned by “transactions.” In addition to transaction, cursor and session can be owners of locks and they both affect how long locks are held. For every row that is fetched, when SCROLL_LOCKS option is used, regardless of the state of a transaction, a cursor lock is held until the next row is fetched or when the cursor is closed. Locks owned by session are outside the scope of a transaction. The duration of these locks are bounded by the connection and the process will continue to hold these locks until the process disconnects. A typical lock owned by session is the database (DB) lock. Locking – Read Committed Scan Under read committed isolation level, when database pages are scanned, shared locks are held when the page is read and processed. The shared locks are released “behind” the scan and allow other transactions to update rows. It is important to note that the shared lock currently acquired will not be released until shared lock for the next page is successfully acquired (this is commonly know as “crabbing”). If the same pages are scanned again, rows may be modified or deleted by other transactions. Locking – Repeatable Read Scan Under repeatable read isolation level, when database pages are scanned, shared locks are held when the page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions to update rows. If the same pages are scanned again, previously scanned rows will not change but new rows may be added by other transactions. Locking – Serializable Read Scan Under serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on scanned key range. SQL Server continues to hold these shared locks until the end of transaction. Because key range locks are held, not only will this prevent other transactions from modifying the rows, no new rows can be inserted. Prefetch and Isolation Level Prefetch and Locking Behavior The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets. The prefetch feature takes a different approach. 
It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations simultaneously. For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetch data pages are then revisited later to avoid waiting for each individual page read to complete in a serial fashion. This data access path requires that a lock be held between the prefetch request and the row lookup to stabilize the row on the page so it is not to be moved by a page split or clustered key update. For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level. With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the entire transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support to release the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because isolation level can be set at the query level via a lock class, but no lower. Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring you may see a single SPID that holds hundreds of Shared KEY or PAG locks even though the connection’s isolation level is READ COMMITTED. Isolation level can be determined from DBCC PSS output. For details about this behavior see “SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction”. Other Locking Mechanism Lock manager does not manage latches and spinlocks. Latches Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activities via DBCC SQLPERF(‘WAITSTATS’) command. Spinlock A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multi-processor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that will check to see if the lock is available and if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a “sleep” (wait) state for a pre-determined amount of time. When it wakes, hopefully, the lock is free and available. 
If not, the loop starts again and it is terminated only when the lock is acquired. The reason for implementing a spinlock is that it is probably less costly to “spin” for a short time rather than yielding the processor. Yielding the processor will force an expensive context switch where:  The old thread’s state must be saved  The new thread’s state must be reloaded  The data stored in the L1 and L2 cache are useless to the processor On a single-processor computer, the loop is not useful because no other thread can be running and thus, no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately. Lesson 2: Concepts – Batch and Transaction This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Review batch processing and error checking.  Review explicit, implicit and autocommit transactions and transaction nesting level.  Discuss how commit and rollback transaction done in stored procedure and trigger affects transaction nesting level.  Discuss various transaction isolation level and their impact on locking.  Discuss the difference between aborting a statement, a transaction, and a batch.  Describe how @@error, @@transcount, and @@rowcount can be used for error checking and handling. Recommended Reading  Charter 12 “Transactions and Triggers”, Inside SQL Server 2000 by Kalen Delaney Batch Definition SQL Profiler Statements and Batches To help further your understanding of what is a batch and what is a statement, you can use SQL Profiler to study the definition of batch and statement.  Try This: Using SQL Profiler to Analyze Batch 1. Log on to a server with Query Analyzer 2. Startup the SQL Profiler against the same server 3. Start a trace using the “StandardSQLProfiler” template 4. Execute the following using Query Analyzer: SELECT @@VERSION SELECT @@SPID The ‘SQL:BatchCompleted’ event is captured by the trace. It shows both the statements as a single batch. 5. Now execute the following using Query Analyzer {call sp_who()} What shows up? The ‘RPC:Completed’ with the sp_who information. RPC is simply another entry point to the SQL Server to call stored procedures with native data types. This allows one to avoid parsing. The ‘RPC:Completed’ event should be considered the same as a batch for the purposes of this discussion. Stop the current trace and start a new trace using the “SQLProfilerTSQL_SPs” template. Issue the same command as outlines in step 5 above. Looking at the output, not only can you see the batch markers but each statement as executed within the batch. Autocommit, Explicit, and Implicit Transaction Autocommit Transaction Mode (Default) Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transaction mode is set on. 
When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode. Explicit Transaction Mode An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement. Implicit Transaction Mode SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure ‘user options’ 2. (Actually, the bit mask 0x2 must be turned on for the user option so you might have to perform an ‘OR’ operation with the existing user option value.) See SQL Server 2000 Books Online on how to turn on implicit transaction under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm). Transaction Nesting Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrements @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed. Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction. If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transaction, it rolls back all the nested transactions, including the outermost transaction. The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one. Each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure if you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more. If @@TRANCOUNT is 0 you are not in a transaction. You can also find the transaction nesting level by checking the sysprocess.open_tran column. See SQL Server 2000 Books Online topic “Nesting Transactions” (acdata.chm::/ac_8_md_06_66nq.htm) for more information. Statement, Transaction, and Batch Abort One batch can have many statements and one transaction can have multiple statements, also. One transaction can span multiple batches and one batch can have multiple transactions. Statement Abort Currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure. 
Let us look closely at the following trigger: CREATE TRIGGER TRG8134 ON TBL8134 AFTER INSERT AS BEGIN SELECT 1/0 SELECT 'Next command in trigger' END To fire the INSERT trigger, the batch could be as simple as ‘INSERT INTO TBL8134 VALUES(1)’. However, the trigger contains two statements that must be executed as part of the batch to satisfy the clients insert request. When the ‘SELECT 1/0’ causes the divide by zero error, a statement abort is issued for the ‘SELECT 1/0’ statement. Batch and Transaction Abort On SQL Server 2000 (and SQL Server 7.0) whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transactional abort. Thus, in the example the statement abort for ‘select 1/0’ promotion results in an entire batch abort. No further statements in the trigger or batch will be executed and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort. However, the rest of the statements within the trigger are executed. This trigger could return ‘Next command in trigger’ as a result set. Once the trigger completes the batch abort promotion takes effect. Conversely, submitting a similar set of statements in a standalone batch can result in different behavior. SELECT 1/0 SELECT 'Next command in batch' Not considering the set option possibilities, a divide by zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided and subsequent SELECT statement can execute. The programmer should add an “if @@ERROR” check immediately after the ‘select 1/0’ to T-SQL execution to control the flow correctly. Aborting and Set Options ARITHABORT If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation when SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error. XACT_ABORT When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example: CREATE TABLE t1 (a int PRIMARY KEY) CREATE TABLE t2 (a int REFERENCES t1(a)) GO INSERT INTO t1 VALUES (1) INSERT INTO t1 VALUES (3) INSERT INTO t1 VALUES (4) INSERT INTO t1 VALUES (6) GO SET XACT_ABORT OFF GO BEGIN TRAN INSERT INTO t2 VALUES (1) INSERT INTO t2 VALUES (2) /* Foreign key error */ INSERT INTO t2 VALUES (3) COMMIT TRAN SELECT 'Continue running batch 1...' GO SET XACT_ABORT ON GO BEGIN TRAN INSERT INTO t2 VALUES (4) INSERT INTO t2 VALUES (5) /* Foreign key error */ INSERT INTO t2 VALUES (6) COMMIT TRAN SELECT 'Continue running batch 2...' GO /* Select shows only keys 1 and 3 added. Key 2 insert failed and was rolled back, but XACT_ABORT was OFF and rest of transaction succeeded. Key 5 insert error with XACT_ABORT ON caused all of the second transaction to roll back. Also note that 'Continue running batch 2...' 
is not Returned to indicate that the batch is aborted. */ SELECT * FROM t2 GO DROP TABLE t2 DROP TABLE t1 GO Compile and Run-time Errors Compile Errors Compile errors are encountered during syntax checks, security checks, and other general operations to prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to immediate abort. The statement is not run and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed. Optimization Errors Optimization errors would include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. Example: “too many tables referenced in the query” error is reported because a “work table” was added to the plan. Runtime Errors Runtime errors are those that are encountered during the execution of the query. Consider the following batch: SELECT * FROM pubs.dbo.titles UPDATE pubs.dbo.authors SET au_lname = au_lname SELECT * FROM foo UPDATE pubs.dbo.authors SET au_lname = au_lname If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution and place a “placeholder” in the query’s execution. The object referenced by the placeholder is resolved until the query is executed. In our example, the execution of the statement “SELECT * FROM foo” will trigger another compile process to resolve the name again. This time, error message 208 is returned. Error: 208, Level 16, State 1, Line 1 Invalid object name 'foo'. Message 208 can be encountered as a runtime or compile error depending on whether the Deferred Name Resolution feature is available. In SQL Server 6.5 this would be considered a compile error and on SQL Server 2000 (and SQL Server7.0) as a runtime error due to Deferred Name Resolution. In the following example, if a trigger referenced authors2, the error is detected as SQL Server attempts to execute the trigger. However, under SQL Server 6.5 the create trigger statement fails because authors2 does not exist at compile time. When errors are encountered in a trigger, generally, the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in pubs database: Create table tblTest(iID int) go create trigger trgInsert on tblTest for INSERT as begin select * from authors select * from authors2 select * from titles end go begin tran select 'Before' insert into tblTest values(1) select 'After' go select @@TRANCOUNT go When run in a batch, the statement and the batch are aborted but the transaction remains active. The follow script illustrates this: begin tran select 'Before' select * from authors2 select 'After' go select @@TRANCOUNT go One other factor in a compile versus runtime error is implicit data type conversions. If you were to run the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0): create table tblData(dtData datetime) go select 1 insert into tblData values(12/13/99) go On SQL Server 6.5, you get an error before execution of the batch begins so no statements are executed and the batch is aborted. 
Error: 206, Level 16, State 2, Line 2 Operand type clash: int is incompatible with datetime On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000 implicit data type conversion treats this as integer division. The integer division of 12/13/99 is 0, so the default date and time value is inserted, no error returned. To correct the problem on either version is to wrap the date string with quotes. See Bug #56118 (sqlbug_70) for more details about this situation. Another example of a runtime error is a 605 message. Error: 605 Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'. A 605 error is always a runtime error. However, depending on the transaction isolation level, (e.g. using the NOLOCK lock hint), established by the SPID the handling of the error can vary. Specifically, a 605 error is considered an ACCESS error. Errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine proper handling based on information or fatal error level. Transaction Error Checking Not all errors cause transactions to automatically rollback. Although it is difficult to determine exactly which errors will rollback transactions and which errors will not, the main idea here is that programmers must perform error checking and handle errors appropriately. Error Handling Raiserror Details Raiserror seems to be a source of confusion but is really rather simple. Raiserror with severity levels of 20 or higher will terminate the connection. Of course, when the connection is terminated a full rollback of any open transaction will immediately be instantiated by the SQL Server (except distributed transaction with DTC involved). Severity levels lower than 20 will simply result in the error message being returned to the client. They do not affect the transaction scope of the connection. Consider the following batch: use pubs begin tran update authors set au_lname = 'smith' raiserror ('This is bad', 19, 1) with log select @@trancount With severity set at 19, the 'select @@trancount' will be executed after the raiserror statement and will return a value of 1. If severity is changed to 20, then the select statement will not run and the connection is broken. Important Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code. Transactions and Triggers (1 of 2) Basic behavior assumes the implicit transactions setting is set to OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, rollback the action, and add an audit table entry. Logically, the insert to the audit table cannot take place before the ROLLBACK action and you would not want to build in the audit table insert into every applications error handler that violated the business rule of the trigger. For more information, see also… SQL Server 2000 Books Online topic “Rollbacks in stored procedure and triggers“ (acdata.chm::/ac_8_md_06_4qcz.htm) IMPLICIT_TRANSACTIONS ON Behavior The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field. Read only versions of the rows contain the code ‘RO’ and read/write versions use ‘RW.’ Whenever someone tries to delete a row with a code ‘RO’ the trigger issues the rollback and logs an audit table entry. 
However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on and only the current trigger would execute and then terminate the batch. However, a second client with implicit transactions on could issue the same delete and the secondary trigger would fire. You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should be allowed but because the transaction scope was restarted because of the implicit transactions setting, they did. Transactions and Triggers (2 of 2) It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set the @@ERROR is using a ‘raiserror’ as the last execution statement in the last trigger to execute. If you modify the example, this following RAISERROR statement will set @@ERROR to 50000: CREATE TRIGGER trgTest on tblTest for INSERT AS BEGIN ROLLBACK INSERT INTO tblAudit VALUES (1) RAISERROR('This is bad', 14,1) END However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger the @@ERROR remains 0. Carrying Forward an Active/Open Transaction It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN or by setting implicit transaction on and doing INSERT, UPDATE, or DELETE. Warning It is never recommended that a trigger call BEGIN TRANSACTION. By doing this you increment the transaction count. Invalid code logic, not calling commit transaction, can lead to a situation where the transaction count remains elevated upon exit of the trigger. Transaction Count The behavior is better explained by understanding how the server works. It does not matter whether you are in a transaction, when a modification takes place the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter. Thus, when we enter a trigger, the transaction count is 1. At the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (insert). In our example, if the connection was already in a transaction and called the second INSERT, since implicit transaction is ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1. However, the value does not return to 0 so the transaction remains open/active. Subsequent triggers are only fired if the transaction count at the end of the trigger remains greater than or equal to 1. The key to continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution. 
If the trigger that performs a rollback has done an explicit begin transaction or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute. Warning Forcing the transaction count after issuing a rollback is dangerous because you can easily loose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a return statement to maintain consistent behavior between a connection with and without implicit transaction settings. This will force the trigger(s) and batch to terminate immediately. One of the methods of dealing with this issue is to run ‘SET IMPLICIT_TRANSACTIONS OFF’ as the first statement of any trigger. Other methods may entails checking @@TRANCOUNT at the end of the trigger and continue to COMMIT the transaction as long as @@TRANCOUNT is greater than 1. Examples The following examples are based on this table: create table tbl50000Insert (iID int NOT NULL) go Note If more than one trigger is used, to guarantee the trigger firing sequence, the sp_settriggerorder command should be used. This command is omitted in these examples to simplify the complexity of the statements. First Example In the first example, the second trigger was never fired and the batch, starting with the insert statement, was aborted. Thus, the print statement was never issued. print('Trigger issues rollback - cancels batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT' end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select 'In Trigger2' select 'Trigger 2 Inserted', * from inserted end go insert into tbl50000Insert values(1) print('---------------------- In same batch') select * from tbl50000Insert go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Second Example The next example shows that since a new transaction is started, the second trigger will be fired and the print statement in the batch will be executed. Note that the insert is rolled back. print('Trigger issues rollback - increases tran count to continue batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran begin tran end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select 'In Trigger2' select 'Trigger 2 Inserted', * from inserted end go insert into tbl50000Insert values(2) print('---------------------- In same batch') select * from tbl50000Insert go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Third Example In the third example, the raiserror statement is used to set the @@ERROR value and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run. 
print('Trigger issues rollback - uses raiserror to set @@ERROR') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback tran begin tran -- Increase @@trancount to allow -- batch to continue select @@trancount as ‘Trancount’ raiserror('This is from the trigger', 14,1) end go insert into tbl50000Insert values(3) select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' go -- Cleanup drop trigger trg50000Insert go delete from tbl50000Insert Fourth Example For the fourth example, a second trigger is added to illustrate the fact that @@ERROR value set in the first trigger will not be seen in the second trigger nor will it show up in the batch after the second trigger is fired. print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch') go create trigger trg50000Insert on tbl50000Insert for INSERT as begin select 'Inserted', * from inserted rollback begin tran -- Increase @@trancount to -- allow batch to continue select @@TRANCOUNT as 'Trancount' raiserror('This is from the trigger', 14,1) end go create trigger trg50000Insert2 on tbl50000Insert for INSERT as begin select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' end go insert into tbl50000Insert values(4) select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount' go -- Cleanup drop trigger trg50000Insert drop trigger trg50000Insert2 go delete from tbl50000Insert Lesson 3: Concepts – Locks and Applications This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Explain how lock hints are used and their impact.  Discuss the effect on locking when an application uses Microsoft Transaction Server.  Identify the different kinds of deadlocks including distributed deadlock. Recommended Reading  Charter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  Charter 16 “Query Tuning”, Inside SQL Server 2000 by Kalen Delaney Q239753 – Deadlock Situation Not Detected by SQL Server Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as victim Locking Hints UPDLOCK If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it. READPAST READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values of 1, 2, 3, 4, and 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields values 1, 2, 4, 5. Tip READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks. This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being thrown into a table and they should be serviced in approximate insertion order but they do not have to be completely FIFO. If you have 4 worker threads consuming work items from the queue they could each pick up a record using read past locking and then delete the entry from the queue and commit when they're done. If they fail, they could rollback, leaving the entry on the queue for the next worker thread to pick up. 
Caution The READPAST hint is not compatible with HOLDLOCK.  Try This: Using Locking Hints 1. Open a Query Window and connect to the pubs database. 2. Execute the following statements (--Conn 1 is optional to help you keep track of each connection): BEGIN TRANSACTION -- Conn 1 UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032' 3. Open a second connection and execute the following statements: SELECT @@lock_timeout -- Conn 2 GO SELECT * FROM titles SELECT * FROM authors 4. Open a third connection and execute the following statements: SET LOCK_TIMEOUT 0 -- Conn 3 SELECT * FROM titles SELECT * FROM authors 5. Open a fourth connection and execute the following statement: SELECT * FROM titles (READPAST) -- Conn 4 WHERE title_ID < 'C' SELECT * FROM authors How many records were returned? 3 6. Open a fifth connection and execute the following statement: SELECT * FROM titles (NOLOCK) -- Conn 5 WHERE title_ID 0 the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in that 20 seconds, the lock manager no longer checks for deadlocks at each block and detection again only happens every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process. Note Please note the distinction between application lock and other locks’ deadlock detection. For application lock, we do not rollback the transaction of the deadlock victim but simply return a -3 to sp_getapplock, which the application needs to handle itself. Deadlock Resolution How is a deadlock resolved? SQL Server picks one of the connections as a deadlock victim. The victim is chosen based on either which is the least expensive transaction (calculated using the number and size of the log records) to roll back or in which process “SET DEADLOCK_PRIORITY LOW” is specified. The victim’s transaction is rolled back, held locks are released, and SQL Server sends error 1205 to the victim’s client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue. Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction. Symptoms of deadlocking Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlock occurring are:  Clients complain of mysteriously canceled queries when using certain features of an application.  May be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur. Triggers and Deadlock Triggers promote the deadlock priority of the SPID for the life of the trigger execution when the DEADLOCK PRIORITY is not set to low. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim. Warning Bug 235794 is filed against SQL Server 2000 where a blocked SPID that is not a participant of a deadlock may incorrectly be chosen as a deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. 
Distributed Deadlock – Scenario 1
Distributed Deadlocks
The term "distributed deadlock" is ambiguous; there are many types of distributed deadlocks.

Scenario 1
A client application opens connection A, begins a transaction, acquires some locks, then opens connection B. Connection B gets blocked by A, but the application is designed not to commit A's transaction until B completes.

Note: SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed as scenario #4 in "Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems".

Distributed Deadlock – Scenario 2
Scenario 2
A distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid, 62, is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes its work, but spid 61 is blocked by 62, who is blocked by 60. This scenario is described in article "Q239753 – Deadlock Situation Not Detected by SQL Server."

Note: SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock.

The diagram in the slide illustrates this situation. Resources locked by a spid are below that spid (in a box). Arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires. A circle represents a transaction; spids in the same transaction are shown in the same circle.

Distributed Deadlock – Scenario 3
Scenario 3
A distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via a linked server. This stored procedure does a loopback linked-server query against a table on Server 1, and this connection is blocked by a lock held by spid 60.

Note: No version of SQL Server is currently designed to detect this distributed deadlock.

Lesson 4: Information Collection and Analysis
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
• Identify specific information needed for troubleshooting issues.
• Locate and collect information needed for troubleshooting issues.
• Analyze output of DBCC Inputbuffer, DBCC PSS, and DBCC Page commands (a quick sketch follows the reading list below).
• Review information collected from the master.dbo.sysprocesses table.
• Review information collected from the master.dbo.syslockinfo table.
• Review output of sp_who, sp_who2, sp_lock.
• Analyze a Profiler log for query usage patterns.
• Review output of trace flags to help troubleshoot deadlocks.

Recommended Reading
• Q244455 – INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
• Q244456 – INF: Description of DBCC PSS Command for SQL Server 7.0
• Q271509 – INF: How to Monitor SQL Server 2000 Blocking
• Q251004 – How to Monitor SQL Server 7.0 Blocking
• Q224453 – Understanding and Resolving SQL Server 7.0 Blocking Problems
• Q282749 – BUG: Deadlock Information Reported with SQL Server 2000 Profiler
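As a quick sketch of the collection commands named above (the spid value 52 is only a placeholder; substitute a spid reported by sp_who2):

EXEC sp_who2           -- the BlkBy column shows which spid blocks each session
DBCC INPUTBUFFER(52)   -- last batch submitted by spid 52
EXEC sp_lock 52        -- locks held or requested by spid 52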
Locking and Blocking
Try This: Examine Blocked Processes
1. Open a Query Window, connect to the pubs database, and execute the following statements:
BEGIN TRAN -- connection 1
UPDATE titles SET price = price + 1
2. Open another connection and execute the following statement:
SELECT * FROM titles -- connection 2
3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3)
4. In the same connection, execute the following:
SELECT spid, cmd, waittype
FROM master..sysprocesses
WHERE waittype <> 0 -- connection 3
5. Do not close any of the connections!
What was the wait type of the blocked process?

Try This: Look at Locks Held
This assumes all your connections are still open from the previous exercise.
• Execute sp_lock -- Connection 3
What locks is the process from the previous example holding?
Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction.

Collecting Information
See Module 2 for more about how to gather this information using various tools.

Recognizing Blocking Problems
How to Recognize Blocking Problems
• Users complain about poor performance at a certain time of day, or after a certain number of users connect.
• SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column; see the sketch below.
• More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
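To illustrate the sysprocesses checks above, here is a sketch (SQL Server 2000 system tables) that lists blocked spids and then finds the heads of the blocking chains:

-- Blocked sessions, longest waits first.
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0
ORDER BY waittime DESC

-- Sessions that block others but are not blocked themselves:
-- the heads of the blocking chains.
SELECT DISTINCT p1.spid, p1.cmd, p1.lastwaittype
FROM master..sysprocesses p1
JOIN master..sysprocesses p2 ON p2.blocked = p1.spid
WHERE p1.blocked = 0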
XMGR, RDISK and UIDE DOS Device Drivers

1. Description

XMGR, RDISK and UIDE are a group of DOS device drivers for a PC system with an 80386+ CPU and using MS-DOS V5.0+ or equivalent.

XMGR is a DOS driver which works as an "XMS manager" and provides up to 4 GB of XMS memory. XMGR has direct support for V3.70+ UMBPCI by Uwe Sieber. After UMBPCI enables upper memory, XMGR loads there and will provide both upper and XMS memory to a DOS system. XMGR uses an "I/O Catcher" with UMBPCI: disk/diskette I/O above 640K is trapped by XMGR and done using a low-memory area, as UMBPCI "Shadow RAM" cannot do DMA. XMGR also runs with JEMM386 or MS-DOS EMM386. With EMM drivers, XMGR using its /B switch first boots in temporary space. When upper memory gets enabled by the EMM driver, XMGR loads there (with no /B), copies all its boot data, and takes over XMS work. For a small XMS-only system, XMGR can also run entirely in low memory.

RDISK is a DOS RAM-disk driver. It creates a "fast" disk drive using 2 Megabytes to 2 GIGABYTES of XMS memory. It loads as a system driver in CONFIG.SYS, or it can load later in AUTOEXEC.BAT or by user command. DOS can copy critical programs and data to the RAMdisk, where they will be read or written at memory speed. If loaded after CONFIG.SYS, RDISK files can be assigned to any free DOS drive letter using its /: switch. RDISK runs with V2.0 or V3.0 XMS managers (60-MB maximum for V2.0 XMS). It uses only 656 to 752 bytes of upper memory, depending on the system, and it can also load in 640K DOS memory. RDISK is a simple and small RAMdisk driver, for use when resizing or other features are not needed.

UIDE is a DOS "Universal IDE" caching driver. It intercepts "Int 13h" BIOS I/O requests and caches data for up to 30 BIOS disks, including A: or B: diskettes and including hard disks of any size. UIDE can handle 48-bit LBA or 24-bit CHS I/O calls by new or old DOS systems. It will handle up to 10 "Legacy" or "Native PCI" IDE controllers. UIDE "calls the BIOS" for diskettes and intercepts I/O for "Int 13h" drivers loaded first; thus UIDE caches ALL drives on a DOS system. "ASPI" and other "non-Int 13h" drivers are unsupported. UIDE also detects and runs up to 8 SATA, IDE, and old "PIO mode" CD/DVD drives. It can cache CD/DVD data and directories for MUCH greater speed, and it will play audio CDs and handle "raw" trackwriter input (audio and "raw" input is uncached). UIDE caches 5 Megabytes to 4 GIGABYTES of data. It can set up to four separate caches of its own ("Common", "User 1", "User 2" and "CD/DVD"), and it also permits caching requests from user drivers, to "bring along" their OWN caches. See the UIDE.TXT file for full details. UIDE uses 4816 bytes of upper DOS memory for 1 to 4 caches of any size. All its data and cache tables use XMS memory. A "stand-alone" UIDE (/B switch, no cache or diskettes) can be used in test or diagnostic work and takes 3664 bytes of upper DOS memory. If its /N2 switch is given, UIDE will omit all CD/DVD logic, saving 1744 bytes. Its "CD/DVD" cache can then become a 3rd user-driver cache, if needed. UIDE's /H switch will load most of the driver into "free HMA", thus using only 928 bytes of memory (832 "stand-alone").

The small UHDD and UDVD2 drivers are also available, for those who want only non-caching drivers or a smaller-size driver set for use on "boot" diskettes, etc. UHDD can cache 26 SATA/IDE disks of any size on up to 10 controllers, A: or B: diskettes included. It now has all four UIDE caches, takes 3280 bytes for caching, and it can set a 1408-byte "stand-alone" driver (no cache) with its /B switch. UHDD can put most of its code in HMA
space with its /H switch, taking only 832 bytes (640 "stand-alone"). UDVD2 handles up to 6 SATA, IDE, or old "PIO mode" CD/DVD drives; it tests up to 10 controllers on loading and takes 2000 bytes, or 144 with its /H switch. Caching by UHDD adds 96 bytes, and UDVD2 "shares" UHDD's I/O buffer in XMS for input unsuited to UltraDMA. If UHDD is not used, UDVD2 will take 128K of XMS as its buffer, or it handles such input in PIO mode if XMS is not available. UHDD + UDVD2 require only 10K of disk file space and provide most UIDE features.

The small RDISKON.COM program can "re-enable" a DOS drive used by RDISK if a "format" command is accidentally issued to it (this disables the drive on some systems). Entering RDISKON L at a DOS command prompt, where L is the desired drive letter (A to Z), will re-enable the drive.

The small CC.COM ("Clear Cache") program can help verify files written by UIDE. Entering CC at the DOS command prompt sends a BIOS "reset" to all disks, making UIDE flush its "Common" cache. Data from the disk, NOT data still in cache, can then be compared to the original output.

2. NO Warranties

XMGR, RDISK and UIDE are offered at no cost, "as is", "use at your own risk", and with NO warranties, not even the implied warranty of FITNESS for any particular purpose nor of MERCHANTABILITY. Driver questions and comments may be addressed to the E-Mail of Johnson Lam <johnsonlam.hk@gmail.com>.

3. Revision Summary

19-Oct-14: UHDD now "overlaps" cache work during UltraDMA disk output and the disk sector "gap" at I/O end, for greater speed. UHDD /M switch deleted; the 256-byte binary-search buffer is now permanent. Other drivers unchanged, re-dated only.
27-Sep-14: UHDD now sets all 4 UIDE caches. New UHDD /M switch sets a 512-byte binary-search buffer for more speed.
26-Jan-14: UIDE error handling (CD/DVD media changes) for "stand-alone" mode is fixed. UHDD offers "Common" & "CD/DVD" caches.
12-Jan-14: UIDE /UD switch deleted (many problems). UIDE now offers "User 1" and "User 2" caches. "Stand-alone" UHDD/UDVD2 re-added, for use as needed.
12-Dec-13: UHDD/UDVD2 deleted (low use). UIDE /N2 dismisses CD/DVD logic. UIDE /C switch added; user caching improved.
21-Nov-13: UHDD old-style "stand-alone" driver re-added.
14-Nov-13: UHDD/UDVD2 "private" caches deleted, unneeded and unused.
25-Sep-13: BAD error fixed in UDVD2 re: locating UHDD. MANY Thanks to Japheth for his tests and exact analysis.
9-Sep-13: Possible (but unlikely) UHDD exit errors corrected. UDVD2/UIDE now use all 32 CD/DVD LBA bits in caching calls.
2-Sep-13: Possible UDVD2 "media change" error fixed. UHDD /N1 size reduced.
26-Aug-13: UHDD now has its "Common" cache and handles "private" user-driver caches. UDVD2 etc. can now set a private cache.
28-Jul-13: UHDD/UIDE binary-search buffer and /F switch deleted.
30-Apr-13: UHDD/UDVD2 can now run without XMS (lower speed), for tests and FreeDOS "scripts". UDVD2 can now do "raw" input.
15-Oct-12: UHDD/UIDE again detect A: and B: diskettes from BIOS data, NOT from "Int 13h" calls that FAIL with an LS-120 drive.
2-Aug-12: UHDD "disk only" caching driver added. UDVD2 caches CD/DVD data if UHDD is also loaded. UIDEJR deleted. New /UD switch in UDVD2/UIDE, for CD/DVD directory caching.
9-Jul-12: UIDE/UIDEJR device-select error, for master + slave CD/DVD units on one IDE channel, is corrected. Many Thanks to Doug Beneway for finding this error.
25-Jun-12: UIDE2 deleted: Not enough added speed; complex to use.
17-Jun-12: UIDE/UIDE2/UIDEJR /A switch init of 2 "Old IDE" channels and CD audio "Q" status data corrected. Many Thanks to Japheth for his research and audio test program.
29-May-12: UIDE and UIDE2 check for diskettes
via Int 13h, avoid DPTE tests if no PCI BIOS, and let the BIOS do I/O for disks with bad DPTE data (all re: VirtualBox BUGS).
24-Feb-12: UIDE/UIDE2 "64K DMA boundary error" fixed; may affect only year-2000 chips or older.
16-Oct-11: UIDE /M switch deleted; search buffer is always 512 bytes. UIDE.SYS back to 7.5K. UIDE /S dropped; UIDE2 improved.
7-Oct-11: All UIDE drivers updated to avoid BIOS "DPTE" ERRORS: Bad DPTE data for USB sticks. Many Thanks to Daniel Nice.
9-Sep-11: UIDE2 re-added. UIDE /S and UIDE2 handle 6 CD/DVD drives.
22-Jul-11: UIDE /E switch added, for DOS emulators (VirtualBox etc.).
20-May-11: UIDE /S "short" UIDE added, for systems with limited HMA.
25-Apr-11: BAD "code mods" init error corrected for UIDE, UIDEJR and RDISK; XMGR not affected.
5-Dec-10: UIDE/UIDEJR /R15 and /R63 switches added, to handle old DOS "games". Thanks, Guillermo Grana Gomez.
28-Nov-10: Minor updates: UIDEJR audio track-number error corrected; XMGR faster in protected mode. Added XMGR and UIDE /Z.
15-Aug-10: UIDE audio track-number error corrected. Thanks, Nagatoshi Uehara.
10-Aug-10: UIDE binary-search buffer added. Using "$" in CD/DVD names fixed in UIDE/UIDEJR. Thanks, Japheth.
4-Jul-10: README file update. XMGR/UIDE can use "Native IDE" mode (same as "Legacy"/"Compatibility") for AHCI mainboards.
28-Jun-10: XMGR updated for AHCI; see the README sec. 7 for details.
10-Jun-10: UIDE now ignores "removable HARD disks"; size reduced.
16-Nov-09: UIDE now caches 4 GIGABYTES of data.
6-Oct-09: UIDE and UIDEJR /H requests HMA use "at the user's risk".
2-Sep-09: README file updated: FreeDOS users who desire full upper memory must omit UMBPCI and load JEMM386/JEMMEX only.
23-Jun-09: RDISK now a .COM file. RDISK /: switch; RDISKON program added. Corrected UIDE CD/DVD handling of VDS errors.
9-Jun-09: UIDE/UIDEJR /N3 switch added, for no XMS memory. Override of /D: name by UIDE$/UIDEJR$ added, for no CD/DVD drives.
15-May-09: Added RDISK.
6-May-09: Added UIDEJR.
1-May-09: Fixed XMGR "Port 92h" logic error. Added XMGR /PA and /PN switches, to control use of "Port 92h".
25-Apr-09: XMGR/UIDE license and FreeDOS prohibition deleted; drivers and sources are again available to all.

4. Switch Options

XMGR usually needs only its /B switch, if "booting" with an EMM driver. All XMGR switch options are as follows:

/B    Specifies "boot" mode. XMGR loads in temporary memory until upper memory is enabled. Without /B, XMGR loads stand-alone in low memory, or direct to upper memory with UMBPCI. See the CONFIG.SYS examples in section 5.

/Mn   Specifies a temporary area for loading XMGR in "boot" mode, or for UMBPCI upper-memory I/O before DOS posts a "workspace" buffer. Values are:
        /M1 64K    /M3 192K   /M5 320K   /M7 448K
        /M2 128K   /M4 256K   /M6 384K   /M8 512K
      Without /M, /M5 is assumed, and the 320K area will be used. NOTE: DOS systems may NOT load at address 0 and may leave temporary data anywhere in memory. /Mn helps to find a "safe" area for XMGR to use. /M is ignored if XMGR loads stand-alone.

/Nnn  Specifies how many XMS "Handles" can be used by DOS programs. The value nn may be 48, 80, or 128. If /N is omitted, 48 "Handles" are used. A big system doing much XMS work may need 80 or 128 "Handles".

/PA   Specifies use or non-use of PS/2 "Port 92h" logic to handle the
/PN   system's "A20" line. /PA indicates "Always" use Port 92h logic; /PN indicates "Never" use it and handle "A20" via normal keyboard-port logic. If /P is omitted, XMGR "asks the BIOS" if the system has Port 92h logic; if not, XMGR will use normal "A20" logic. NOTE: If "A20" was enabled by DOS before XMGR loads, XMGR does not handle it at all.

/Tn   Specifies the BIOS requests to use in getting extended memory, as follows:
        /T0  No "E820h" nor "E801h" requests.
        /T1  Memory-list requests only (Int 15h, AX=E820h).
        /T2  A dual-area request only (Int 15h, AX=E801h).
        /T3  "E820h" requests first, then an "E801h" request.
      /T can usually be omitted, causing /T3 to be assumed. In addition, XMGR always uses an old 64-MB request to get /T0 memory, or if the requests denoted by /T1 thru /T3 are not successful. Users may need to test /T1 or /T2 separately, to see if their BIOS takes them. A pre-1994 BIOS may not ignore /T1 thru /T3 correctly and may require /T0 instead. For old "QHIMEM" users, /T4 thru /T7 may still be used and work the same as /T0 thru /T3.

/W    Specifies use of the DOS "workspace" buffer for upper-memory I/O, if loading with UMBPCI. If /W is omitted, or if the DOS system does not have proper workspace logic, XMGR sets its own buffer in low memory. With PC DOS or EDR-DOS, /W must be omitted. Without UMBPCI, /W is ignored.

/Z    See /Z for UIDE, below.

RDISK uses only /S (size) and /: (drive letter) switches:

/Sn   Specifies a desired RAM disk size, in megabytes of XMS memory. Values may be any number from 2 to 2047; /S1024 or more creates a 1- to 2-GIGABYTE RAM disk. If /S is omitted or invalid, a 25-MB RAM disk is created by default. For old V2.0 XMS managers (ROM-DOS etc.), only /S2 through /S60 may be used. See section 5 below for more details.

/:L   Specifies the DOS drive letter desired to access RDISK files. L may be any available drive letter from A to Z, e.g. /:N assigns drive N: to all RDISK files. If the drive letter is too high or already in use, RDISK will abort, and users may need "LASTDRIVE=" in CONFIG.SYS to set up more drives. If RDISK is loaded by CONFIG.SYS, or if /: is omitted, the next free drive letter will be used.

UIDE usually needs only a /H switch to use HMA space and a /S switch to specify its cache size. All UIDE switches are as follows (a combined example follows the list):

/A    Specifies ALTERNATE addressing for "legacy IDE" controllers. The first legacy controller uses 01E8h/0168h addresses, and a second (if present) uses 01F0h/0170h addresses. /A is only for "odd" mainboards with REVERSED addressing for the two legacy IDE controllers. Without /A, the first legacy controller uses 01F0h/0170h, and a second uses 01E8h/0168h, as is normal for most PC mainboards.

/B    Requests a "basic" UltraDMA driver for disks and CDs/DVDs, with no caching or diskette handling. This may help for tests or diagnostics. The /B driver can request 128K of XMS as an UltraDMA I/O buffer, and it can load in the HMA. The /N2 switch can be given with /B, to "dismiss" all CD/DVD logic.

/Cnn  Sets a separate "CD/DVD" cache, for higher CD/DVD performance. Values for nn are the same as for the /S switch and permit up to 4-GB caches. The "CD/DVD" cache can be used by any user driver's devices, on systems with no SATA or IDE CD/DVD drives. If /C is omitted, data for requests addressed to the "CD/DVD" cache shall go into UIDE's "Common" cache.

/D:   Specifies the "device name" used by the CD/DVD Redirector to access CD/DVD drives, for example: /D:CDROM1, /D:SANYO1, etc. If /D: is not given, or the name following a /D: is missing/invalid, UDVD1 is set by default. If no CD/DVD drives were found, UIDE$ overrides any /D: name, for use with FreeDOS autoloader scripts.

/E    Makes the driver call the BIOS for any hard-disk I/O request. /E avoids setup trouble on some DOS emulators (VirtualBox etc.) that do not emulate all PC hardware logic. /E also allows using hard disks on 1994-or-older PCs, which have no PCI/EDD BIOS. /E still caches disk data, unlike /N1, which removes ALL disk support. If /B is given, /E is ignored. NOTE: Use of /E on protected-mode systems (JEMM386 etc.) may run VERY slow.
Many BIOS programs omit DOS "VDS" support for hard disks, and in protected mode they must do "PIO mode" transfers, not UltraDMA. If /E is required, a PC should be run in real mode (UMBPCI etc.) whenever possible.

/H    Loads most of the driver in "free HMA" space. UIDE will use only 928 bytes of upper DOS memory, 832 when /B is given. /H must not be used with ROM-DOS, which has no HMA. NOTE: MS-DOS kernels have ERRORS in posting free HMA space, which can give CRASHES. Specifying /H is "at the user's risk". No such crashes are noted with other DOS systems; also, HMA usage by UIDE is under 4K bytes. Users should still test a PC system before /H is given for any serious tasks with these drivers.

/N1   Requests NO hard-disk handling by the driver.

/N2   Requests NO CD/DVD handling by the driver. /N2 will dismiss all CD/DVD routines and save 1744 bytes.

/N3   Requests no XMS memory. /N3 sets UIDE's /B "basic" driver. /N3 requires loading in low memory, or UIDE aborts. /N3 can LOSE much speed, as misaligned or other I/O not suited to UltraDMA requires "calling the BIOS" for disks or using "PIO mode" for CD/DVD drives.

/N4   See /Z below.

/Q    Awaits a "data request" before doing UltraDMA disk transfers. /Q is for "old" systems and may be used only if the driver loads O.K. but seems unable to transfer data. /Q must be OMITTED with SATA-to-IDE adapters from Sabrent and others, since they may not emulate "data request" from SATA disks. /Q does not affect CD/DVD drives.

/R15  Sets the driver's XMS memory at 16 or 64 MB. /R15 reserves
/R63  15 MB of XMS, and /R63 reserves 63 MB of XMS, for DOS game programs that require XMS memory below 16 or 64 MB. The drivers must be able to reserve this memory, reserve their own XMS above that, and "free" the 15/63-MB XMS. If not, the drivers display "XMS init error" and abort. /R15 or /R63 need the drivers to load after the XMS manager (XMGR, HIMEMX, etc.), so another driver cannot take any XMS first and the reserved XMS is just beyond the HMA. See section 7 below for further details.

/Snn  Specifies the desired "Common" cache size, in megabytes of XMS memory. UIDE's "Common" cache holds data for hard disks, diskettes, and CD/DVD drives when /C above is not given. Values for /S can be 5, 15, 25, 40, 50, or any number from 80 to 4093; /S1024 and up sets a 1- to 4-GIGABYTE cache. Suggested /S values are:
        Below 128-MB memory:  Use /S5, /S15, /S25 or /S40.
        With 128-MB memory:   Use /S25, /S40, /S50 or /S80.
        With 256-MB memory:   Use /S80 up to /S127.
        With 512-MB memory:   Use /S160 up to /S255.
        With 1-GB memory:     Use /S320 up to /S511.
        With 2-GB memory:     Use /S640 up to /S1023.
        With 4-GB memory:     Use /S1280 up to /S3072.
      Small systems may prefer /S25 or /S50, which set 1600 cache blocks and are more efficient. If /S is omitted/invalid, an 80-MB cache is set. Except for 25 or 50, values below 80 are cut to 40, 15, or 5 MB. The drivers display "XMS init error" and abort when not enough XMS memory is free; if so, a smaller cache must be requested. For older V2.0 XMS managers (ROM-DOS etc.), only /S5 to /S50 may be used.

/UX   Disables all CD/DVD UltraDMA, even for drives that can do it. "PIO mode" then handles all CD/DVD I/O. Except for a few unusual drives (by Sony etc.) which do not follow all ATAPI "rules", /UX is rarely needed. /UX does not affect hard disks.

/Xnn  Sets a separate "User 1" cache, for user drivers. Values for nn are the same as for /S above. If /X is omitted, data for requests addressed to the "User 1" cache shall go into UIDE's "Common" cache.

/Ynn  Sets a separate "User 2" cache, for user drivers. Values for nn are the same as for /S above. If /Y is omitted, data for requests addressed to the "User 2" cache shall go into UIDE's "Common" cache.

/Z    For XMGR/UIDE/UHDD, limits XMS moves to 2K-byte sections, not 64K, when in protected mode. /Z is unneeded for JEMM386, JEMMEX, MS-DOS EMM386, or real-mode UMBPCI. If other EMM/VCPI/DPMI drivers are used, systems must be tested, to see if /Z is required; BAD schemes that allow not enough interrupts during XMS moves can still be in use. UIDE's old /N4 switch works the same and can still be used. The "stand-alone" UHDD ignores /N4 or /Z and will call the XMS manager to do its XMS moves.
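As an illustrative sketch only (the path and cache sizes here are arbitrary examples, not from the original package; /X and /Y matter only when user drivers direct requests at those caches), a UIDE line that sets all four caches might read:

  DEVICEHIGH=C:\BIN\UIDE.SYS /S1024 /C256 /X128 /Y128 /H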
UHDD usually needs only a /H switch to load in HMA space, plus /C, /S, /X or /Y switches to specify cache sizes. A summary of all UHDD switches is as follows:

/A    Sets ALTERNATE addressing for "Legacy" IDE controllers, same as UIDE /A above. Rarely necessary.
/B    Requests a 1408-byte "stand-alone" driver (no caching), same as UIDE /B above.
/Cnn  Sets a "CD/DVD" cache size, for UDVD2 use; same values as for UIDE /S above. If /C is omitted or invalid, CD/DVD data will go in UHDD's "Common" cache.
/E    Makes the driver "call the BIOS" for hard-disk I/O requests, same as UIDE /E above. /E dismisses UltraDMA disk logic and saves 496 bytes.
/H    Loads all but 832 bytes of the driver (640 with /B) into HMA space. See the note for UIDE /H above.
/Q    Awaits "data request" before beginning UltraDMA I/O with old controllers, same as UIDE /Q above. Rarely necessary.
/R15  Reserves 15 MB or 63 MB of XMS, for old DOS "game" programs,
/R63  same as UIDE /R above. Rarely necessary.
/Snn  Sets a "Common" cache size; same values as UIDE /S above.
/Xnn  Sets the "User 1" cache size; same values as for UIDE /S above. If /X is omitted/invalid, "User 1" data will go in UHDD's "Common" cache.
/Ynn  Sets the "User 2" cache size; same values as for UIDE /S above. If /Y is omitted/invalid, "User 2" data will go in UHDD's "Common" cache.
/Z    See /Z above.

UDVD2 normally needs only a /H switch to use HMA space and a /D: switch to specify a driver "device name". A summary of all UDVD2 switches is as follows:

/A    Sets ALTERNATE addressing for "Legacy" IDE controllers, same as UIDE /A above. Rarely necessary.
/D:   Sets a "device name" used by the CD/DVD Redirector to access CD/DVD drives, same as UIDE /D: above.
/H    Puts all but 144 bytes of the driver in HMA space. See the note for UIDE /H above.
/Rnn  Reserves 15 MB or 63 MB of XMS, for old DOS "game" programs, same as UIDE /R above. Rarely necessary.
/UX   Disables CD/DVD UltraDMA, same as UIDE /UX above. Rarely necessary.

For all switches in each driver, a dash may replace the slash, and lower-case letters may be used, if desired.

5. Setup and Configuration

XMGR, RDISK and UIDE are all loaded using the CONFIG.SYS file. Your CONFIG.SYS should have command lines similar to the following examples:

  DEVICE=C:\DOSDVRS\XMGR.SYS /N128 /B
  DEVICEHIGH=C:\DRIVERS\RDISK.COM /S500
  DEVICEHIGH=C:\SYSTEM\UIDE.SYS /D:TOSHIBA1 /S511 /H
  DEVICEHIGH=C:\USERDVRS\UHDD.SYS /S500 /C80 /H
  DEVICEHIGH=C:\MYDVRS\UDVD2.SYS /D:BLURAY1 /H

Note that "Int 13h" BIOS drivers must be loaded first, so UIDE/UHDD can intercept and cache their DOS Int 13h calls. Also note that any user drivers that call UIDE to do caching must be loaded after UIDE, so they will "find" UIDE in memory and can "link" to it. This also applies if UHDD followed by UDVD2 are used in place of UIDE. See the CONFIG.SYS examples below.

With V3.70+ UMBPCI and XMGR, a "boot" procedure is not needed. UMBPCI loads first, to enable upper memory; then XMGR loads, to offer it and XMS to DOS; then other drivers may load. For V6.22/V7.10 MS-DOS, JEMM386 can also be loaded, to offer extra upper memory in the "video graphics" areas, or if other JEMM386 features are desired. NOTE: FreeDOS and some other DOS variants will NOT "add up" the memory found by both UMBPCI and JEMM386, like MS-DOS does.
FreeDOS users who want extra upper memory or other items must omit UMBPCI and load JEMMEX or HIMEMX/JEMM386 per their instructions, or load XMGR/JEMM386 as shown in the 3rd example below.

An example CONFIG.SYS file using V3.70+ UMBPCI and XMGR is as follows:

  SHELL=C:\DOS\COMMAND.COM C:\DOS\ /E:512 /P
  DEVICE=C:\BIN\UMBPCI.SYS
  DEVICE=C:\BIN\XMGR.SYS /W
  DOS=HIGH,UMB
  DEVICE=C:\BIN\JEMM386.EXE I=B000-B7FF X=C800-EFFF NOEMS  ;Optional
  (Int 13h drivers cached by UIDE load now)
  DEVICEHIGH=C:\BIN\UIDE.SYS /D:CDROM1 /S511 /C250 /H      ;Or UHDD plus
                                                           ; UDVD2 here
  (User drivers that call UIDE load now)
  DEVICEHIGH=C:\BIN\RDISK.COM /S250                        ;Optional
  Etc.

XMGR can be used "stand-alone" on a small XMS-only system. It must be the first DOS system driver to load, and it must load in LOW memory, as in the following example:

  SHELL=C:\DOS\COMMAND.COM C:\DOS\ /E:512 /P
  DEVICE=C:\BIN\XMGR.SYS
  DOS=HIGH
  (Int 13h drivers cached by UHDD load now)
  DEVICE=C:\BIN\UHDD.SYS /S80 /C15      ;Or UIDE in place
  DEVICE=C:\BIN\UDVD2.SYS               ; of UHDD + UDVD2
  (User drivers that call UHDD load now)
  DEVICE=C:\BIN\RDISK.COM /S20          ;Optional
  Etc.

With JEMM386 and XMGR, XMGR loads first in "boot" mode, then JEMM386, and then XMGR finally loads in upper memory. JEMMEX can also be used, and if so, XMGR can be omitted. An example CONFIG.SYS file which uses the XMGR "boot" procedure is shown below. Note that in this example, UIDE sets a 2-GIGABYTE disk cache plus a 700-Megabyte CD/DVD cache:

  SHELL=C:\DOS\COMMAND.COM C:\DOS\ /E:512 /P
  DEVICE=C:\BIN\XMGR.SYS /B                    ; /B for "boot"
  DOS=HIGH,UMB
  DEVICE=C:\DOS\JEMM386.EXE I=B000-B7FF NOEMS  ;Or JEMMEX here
  DEVICEHIGH=C:\BIN\XMGR.SYS                   ;No "boot" here
  (Int 13h drivers cached by UIDE load now)
  DEVICEHIGH=C:\BIN\UIDE.SYS /D:MYDVD /S2047 /C700 /H  ;Or UHDD plus
                                                       ; UDVD2 here
  (User drivers that call UIDE load now)
  DEVICEHIGH=C:\BIN\RDISK.COM /S500                    ;Optional
  Etc.

After the above drivers are loaded, further CONFIG.SYS drivers (SETVER, ANSI.SYS, etc.) can then load in any desired order. When a specific RDISK drive letter is required, RDISK can now be loaded by AUTOEXEC.BAT, and its /: switch can specify any "free" drive letter, e.g. /:Q assigns drive Q: for RDISK files. Whenever RDISK is used, AUTOEXEC.BAT should also include commands which copy all RDISK programs and data up to the RAM disk. This is required each time DOS loads, as XMS memory is LOST when a system shuts down. Such copies usually take little time; a short sketch appears below.

If RDISK and UIDE/UHDD are used, users must balance how much XMS memory the drivers use. RDISK must take no more XMS than its files may need. UIDE/UHDD can take most remaining XMS for its caches. Some XMS memory must be saved for other programs needing it. As an example, on a 4-GB system, RDISK might use 500 MB, UIDE/UHDD might use 3 GB, and 500 MB is free for other programs. These values can be adjusted so RDISK holds programs and "fast" data files, while UIDE/UHDD cache "ordinary" files. Properly balanced use of XMS will give a VERY high-speed DOS system.
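As a sketch of such an AUTOEXEC.BAT sequence (the drive letter, paths, and file names are examples only, not from the original package):

  C:\BIN\RDISK.COM /S500 /:R
  COPY C:\APPS\*.* R:\ > NUL
  COPY C:\DATA\FAST*.DAT R:\ > NUL

Since XMS contents are lost at shutdown, any files updated on the RAM disk must be copied back to the hard disk before the system is powered off.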
Please be sure to set each hard disk's geometry correctly in your BIOS. Set it to "Auto", "LBA", or "LBA-Assisted" but NOT to "None" or "Normal". "CHS", "ECHS", "User (Cylinders/Heads/Sectors)", "Revised ECHS" or "Bit-Shift" should run but are NOT preferred. If a BIOS has a setting like "UltraDMA" or "UDMA Capable" for a disk, enable it. "Laptop" power-saving items like a "drive spin-down timeout" should run O.K. but must be TESTED before use; all these drivers allow 7 seconds for a disk or CD/DVD drive to spin up after being idle. More DRASTIC power-saving items like a "drive SHUTDOWN timeout", which may require "extra" logic to restart the drive, should be DISABLED, or driver I/O requests may time out.

Also be sure to use an 80-connector cable for any UltraDMA drive using "mode 3" (ATA-44, 44 MB/sec) or higher. When cabling a single drive to an IDE channel, note that you MUST use both "ends" of the cable, NOT an "end" and the middle connector. This prevents ERRORS, since an unused cable end can pick up "noise", like a RADIO antenna.

Be sure to enable all CD/DVD drives through the BIOS set-up routines. A drive that is "disabled" may cause the BIOS to clear all its UltraDMA flags and force the drive into "PIO mode" zero, which is terribly SLOW.

6. Error Reporting

XMGR and UIDE/UHDD/UDVD2 will return normal XMS and CD/DVD error codes as needed. They are listed in the "V3.0 XMS Specification" and in the Microsoft "MS-DOS CD-ROM Extensions 2.1" document; both are available from Microsoft or from other Internet sources. UIDE and UHDD work as "BIOS drivers" and return whichever codes are set for diskettes and hard disks handled by the BIOS. For their SATA and IDE hard disks, UIDE/UHDD can post the following error codes:

  Code 0Fh  DMA error          CCh  Disk is FAULTED
       20h  Controller busy    E0h  Hard I/O error
       AAh  Disk not ready     FFh  XMS memory error

Many DOS programs display only "Disk Error" messages with NO code; thus disk errors may require running a diagnostic to get better information.

7. Technical Notes

In all of the following notes, "UIDE" also applies to UHDD or UDVD2, as necessary.

The JEMMEX or JEMM386 drivers are now recommended for use with UIDE, if using a DOS system that needs their extra upper memory, DPMI/VCPI logic, etc. Other EMM drivers are essentially "abandoned", some with never-corrected ERRORS, and they should NOT be used.

The "VirtualBox" emulator, as of 15 Oct 2012, does not set a "change line available" bit in BIOS byte 0:48Fh for A: and B: diskettes. UIDE will IGNORE diskette drives without a "change line" (normally 1985 or older), as they cannot declare "media changes", i.e. that a NEW diskette was loaded. Until "VirtualBox" gets corrected, UIDE will NOT run A: or B: diskettes in such an environment.

UIDE's /R15 or /R63 switches (DOS "game" programs) are for a real-mode system using UMBPCI and XMGR. Game players like real mode, as it gives more speed. If protected-mode JEMM386/EMM386 is desired, UIDE using a /R switch must load prior to the "EMM" driver, so the XMS reserved by UIDE is just beyond the HMA. If using UMBPCI, XMGR, UIDE, and then an EMM driver, this works fine. But FreeDOS users, and others whose DOS systems permit only one XMS provider (i.e. UMBPCI cannot be used), must load XMGR/HIMEMX first, UIDE second into low memory (upper memory isn't yet enabled), then JEMM386/EMM386 last. Using JEMMEX with UIDE and a /R switch is unrecommended: JEMMEX must load first and takes some XMS itself, which pushes the reserved XMS above its intended 16/64-MB area, and a few DOS "games" programs may CRASH.

UIDE shall NOT include any huge AHCI logic and will run hard disks in "Legacy"/"Compatibility"/"Native IDE" mode when using AHCI controllers. If a "new" AHCI BIOS has no such settings, UIDE with a /E switch should be able to call the BIOS and use its logic to handle AHCI disks. NOTE that much "DOS driver" code is now being omitted in AHCI BIOS programs; thus UIDE should be TESTED before normal use with an AHCI mainboard. Also note that CD/DVD drives are not supported by an AHCI BIOS for file I/O, only for "boot" CDs. On a system whose AHCI chips can be set for "Legacy"/"Compatibility"/"Native IDE" mode, CD/DVD drives should be run from AHCI ports using such modes.
On mainboards with no such settings, UIDE can run CD/DVD drives only on the parallel IDE port (80-pin cable) or on IDE-capable "add-on" cards from Promise etc. that UIDE can "detect" using normal PCI-bus logic.

UIDE handles only "Legacy" or "Native PCI" IDE controllers. RAID-only chipsets (Via VT6420 etc.), "port multiplier" chips, and ADMA chipsets are not currently supported. AHCI is supported only through "Legacy"/"Compatibility"/"Native IDE" controller settings, or by UIDE "calling the BIOS", as noted above. To use UIDE, a mainboard BIOS must set SATA and IDE controllers to some form of "IDE" mode (not RAID/ADMA/AHCI) for best speed. If no "Legacy"/"Compatibility"/"Native IDE" BIOS setting for disk controllers is provided, a Sabrent converter card or similar will let UIDE handle SATA hard disks or CD/DVD drives from the parallel-port IDE controller channel, using full UltraDMA speeds.

Except if necessary for AHCI, it is NOT RECOMMENDED for UIDE to run any DOS disk using only the BIOS. Many BIOS programs have no DOS "Virtual DMA" logic. If so, when an EMM driver (JEMM386 etc.) enables its "V86 protected mode", the BIOS can do only PIO-mode transfers and LOSES much speed. If needed, get SATA-to-IDE adapters for SATA disks, as above, or get "Int 13h" disk drivers for SCSI or other disk models. UIDE can then handle such disks at full DMA speeds.

XMGR loads in UMBPCI upper memory BEFORE that memory is declared to the DOS system. Memory displays using UMBPCI may not list XMGR, since its memory is not part of the DOS memory lists. Such memory displays will begin with a block having a 00A7h offset, or greater if using 80 or 128 XMS "Handles". The upper memory skipped by this offset contains XMGR.

The UMBPCI upper-memory manager uses system "Shadow RAM" that CANNOT do DMA. Newer BIOS programs may use UltraDMA to load programs into upper memory. If this is UMBPCI "Shadow RAM", a CRASH will occur. To stop this and handle new BIOS programs, users should follow these two RULES for running UMBPCI together with XMGR and UIDE/UHDD:

A. The loading "order" for V3.70+ UMBPCI and XMGR shown in section 5 above MUST be used. This lets the XMGR "I/O Catcher" intercept and process upper-memory disk I/O until UIDE/UHDD loads and takes over disk UltraDMA. Old UMBPCI versions, or other UMBPCI loading schemes, are NOT recommended.

B. When CHS I/O is done (MS-DOS V6.22 or older), every disk MUST have valid CHS parameters. Otherwise, UIDE/UHDD and the "I/O Catcher" let the BIOS deal with CHS I/O. If BIOS UltraDMA is not disabled, a similar "Shadow RAM" CRASH will occur.

Some "CD-ROM boot" programs handle the CD/DVD as a "fake" hard disk and provide incorrect EDD BIOS data for it. In scanning for disks to use, UIDE may display "EDD BIOS error, Unit ignored!", then go on searching for more UltraDMA disks. Users who did NOT "boot" from CD/DVD need to see which disk was passed over, and why. Users who DID "boot" from CD/DVD, where all SATA/UltraDMA disks were found, may IGNORE this message. It is caused by an ERROR in the "CD-ROM boot" program, NOT by a problem with UIDE or its SATA/UltraDMA disks.

Some BIOS programs do not "configure" a mainboard controller if no user drives are on it. An unconfigured controller causes UIDE to display "BAD controller", then it goes on looking for others to use. If this message is displayed, users should verify that each SATA/UltraDMA drive was made "active" thru the BIOS set-up logic. If so, "BAD controller" says a chip was not set to both "Bus Master" and "I/O Space" modes, and the BIOS should be UPDATED.