Concurrency issue: about read-write locks

reentrantlock 2013-12-30 10:08:01
For the test code below, how can the read threads be notified automatically when the write thread finishes writing new data to the container?
To improve the performance of concurrent reads, the program uses a read lock and a write lock; because of this, I find it difficult to make the two kinds of threads communicate with each other.

Can anybody provide a solution that achieves this? Please note the pre-condition: do not replace my read-write lock with another kind of lock, because that would reduce concurrent read performance, which is not what I need.
Thanks a lot in advance.

------------------------------------------------------------------------------------------
import java.util.ArrayList;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;

public class ReadWriteLockTest {

    private ArrayList<String> container = new ArrayList<String>();

    private ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    private ReadLock readLock = readWriteLock.readLock();
    private WriteLock writeLock = readWriteLock.writeLock();

    public void execute() {
        // One writer thread: periodically appends three elements to the container.
        for (int k = 0; k < 1; k++) {
            Thread writeThread = new Thread(new Runnable() {

                @Override
                public void run() {
                    while (true) {
                        writeLock.lock();
                        try {
                            for (int j = 0; j < 3; j++) {
                                container.add("_" + j);
                            }
                            System.out.println(Thread.currentThread().getName() + " write size:" + container.size());
                        } finally {
                            writeLock.unlock();
                        }

                        try {
                            System.out.println(Thread.currentThread().getName() + " sleeping...");
                            TimeUnit.SECONDS.sleep(20);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            });

            writeThread.start();
        }

        // Two reader threads: drain the container while holding the read lock.
        for (int x = 0; x < 2; x++) {
            Thread readThread = new Thread(new Runnable() {

                @Override
                public void run() {
                    while (true) {
                        readLock.lock();
                        try {
                            TimeUnit.SECONDS.sleep(1);
                            while (container.size() > 0) {
                                String str = container.get(0);
                                synchronized (container) { // the remove operation needs an exclusive lock between concurrent readers
                                    container.remove(str);
                                }
                                System.out.println(Thread.currentThread().getName() + " read data:" + str + " size:" + container.size());
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        } finally {
                            readLock.unlock();
                        }

                        try {
                            TimeUnit.SECONDS.sleep(1);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            });

            readThread.start();
        }
    }

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(String[] args) throws Exception {
        ReadWriteLockTest test = new ReadWriteLockTest();
        test.execute();
        TimeUnit.SECONDS.sleep(100000);
    }
}
15 replies
reentrantlock 2014-01-07
@Lsheep, please modify my source code if you are confident in your approach. Thanks a million in advance.
reentrantlock 2014-01-07
I would have no difficulty implementing it if I used ReentrantLock or the traditional synchronization mechanism instead of ReentrantReadWriteLock.
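For reference, a minimal sketch of the traditional synchronized/wait/notify approach the OP alludes to here; the class and method names (SynchronizedContainer, putAll, take) are illustrative, not from the thread.

import java.util.ArrayList;
import java.util.List;

public class SynchronizedContainer {

    private final List<String> container = new ArrayList<String>();

    // Writer: adds data and notifies any reader blocked in take().
    public synchronized void putAll(List<String> items) {
        container.addAll(items);
        notifyAll();
    }

    // Reader: waits until data is available, then removes and returns one element.
    public synchronized String take() throws InterruptedException {
        while (container.isEmpty()) {
            wait();                 // releases the monitor while waiting
        }
        return container.remove(0);
    }
}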
reentrantlock 2014-01-07
Quoting reply #11 from Lsheep:
OK, let's go over this again. Are you asking one of the following two questions, or both? 1. How to use notify. 2. How do the reader and the writer communicate with each other?

Yeah, but the pre-condition is that you must use ReentrantReadWriteLock to implement it.
Lsheep 2014-01-07
Quoting reply #10 from u013192985:
I find it difficult to communicate with you about this issue. I am not sure whether you don't understand my question or lack the technical knowledge. However, if you really can help me solve this issue, can you paste your code here?

OK, let's go over this again. Are you asking one of the following two questions, or both? 1. How to use notify. 2. How do the reader and the writer communicate with each other?
Lsheep 2014-01-07
Maybe I still haven't understood what you mean. Take this sentence: "how to make the read thread can automatically be notified when the write thread finishes writing new data to the container" - that is, how to wake the readers automatically once the writer finishes writing. But as far as I understand it, that is exactly what already happens: while a writer is writing, all readers are blocked; once that writer releases the write lock, and no other writer is waiting for the lock, the readers are simply scheduled again. The following is taken from the Java documentation:

"A thread that tries to acquire a fair read lock (non-reentrantly) will block if either the write lock is held, or there is a waiting writer thread. The thread will not acquire the read lock until after the oldest currently waiting writer thread has acquired and released the write lock. Of course, if a waiting writer abandons its wait, leaving one or more reader threads as the longest waiters in the queue with the write lock free, then those readers will be assigned the read lock.

A thread that tries to acquire a fair write lock (non-reentrantly) will block unless both the read lock and write lock are free (which implies there are no waiting threads). (Note that the non-blocking ReadLock#tryLock() and WriteLock#tryLock() methods do not honor this fair setting and will acquire the lock if it is possible, regardless of waiting threads.)"

So if I have misunderstood anything, please point it out.
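A tiny standalone demo of the behaviour described above (mine, not from the thread): the reader's readLock().lock() simply blocks while the write lock is held and proceeds as soon as the writer releases it, with no explicit notify involved.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BlockUntilWriterReleases {
    public static void main(String[] args) throws Exception {
        final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.writeLock().lock();              // the main thread acts as the writer

        Thread reader = new Thread(new Runnable() {
            public void run() {
                rw.readLock().lock();       // blocks here until the write lock is free
                try {
                    System.out.println("reader acquired the read lock");
                } finally {
                    rw.readLock().unlock();
                }
            }
        });
        reader.start();

        TimeUnit.SECONDS.sleep(2);          // the reader stays blocked during this time
        rw.writeLock().unlock();            // releasing the write lock lets the reader proceed
        reader.join();
    }
}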
reentrantlock 2014-01-06
Quoting reply #9 from Lsheep:
Suppose there are only two threads, A and B, in the system, and one lock L. Thread A acquires it, then the scheduler switches to thread B, which now waits for L; then it switches back to A, A releases L, and B is scheduled ("woken up" is perhaps not the precise word), acquires L and continues. If you want to know whether the reader and writer synchronize via synchronized or via wait()/notify() internally, have a look at the implementation details of ReentrantReadWriteLock.

I find it difficult to communicate with you about this issue. I am not sure whether you don't understand my question or lack the technical knowledge. However, if you really can help me solve this issue, can you paste your code here?
Lsheep 2014-01-06
Quoting reply #8 from u013192985:
"When the writer finishes, it automatically wakes up a reader"? Are you sure? Please note, there is no wait statement in the run method of the read thread, so how can the write thread wake the read thread up? In one word: no "wait", so how to wake it? Anyway, thanks for your comments.

Suppose there are only two threads, A and B, in the system, and one lock L. Thread A acquires it, then the scheduler switches to thread B, which now waits for L; then it switches back to A, A releases L, and B is scheduled ("woken up" is perhaps not the precise word), acquires L and continues. If you want to know whether the reader and writer synchronize via synchronized or via wait()/notify() internally, have a look at the implementation details of ReentrantReadWriteLock.
reentrantlock 2014-01-03
Quoting reply #5 from Lsheep:
Finally, I don't understand what you are trying to do. When a writer is writing, the other readers are blocked, and when the writer finishes, a reader is automatically woken up. Why do you want the writer to actively notify a reader?

"When the writer finishes, it automatically wakes up a reader"? Are you sure? Please note, there is no wait statement in the run method of the read thread, so how can the write thread wake the read thread up? In one word: no "wait", so how to wake it? Anyway, thanks for your comments.
reentrantlock 2014-01-03
Quoting reply #5 from Lsheep:
Finally, I don't understand what you are trying to do. When a writer is writing, the other readers are blocked, and when the writer finishes, a reader is automatically woken up. Why do you want the writer to actively notify a reader? As for your example, there are some sleep calls in the code; if you want to notify a sleeping thread, the only option is to interrupt it. So what exactly are you trying to do?

My question is very simple: how to use a wait/notify mechanism to make the read threads and the write thread communicate with each other. If you can do it, would you please improve my sample code? Besides, you can remove the sleep statements in my code, because I only use them for simulation.
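One possible sketch of what the OP is asking for, keeping the ReentrantReadWriteLock: only the write lock supports newCondition() (the read lock throws UnsupportedOperationException), so a reader that finds the container empty awaits on a condition of the write lock, and the writer signals that condition after adding data. The class and method names here are illustrative assumptions, not code from the thread.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConditionOnWriteLock {

    private final List<String> container = new ArrayList<String>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    // Only the write lock of a ReentrantReadWriteLock can create conditions.
    private final Condition notEmpty = rwLock.writeLock().newCondition();

    // Writer: append data and signal readers parked on the condition.
    public void write(List<String> items) {
        rwLock.writeLock().lock();
        try {
            container.addAll(items);
            notEmpty.signalAll();
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    // Reader: consume one element, awaiting on the write lock's condition when empty.
    public String read() throws InterruptedException {
        rwLock.writeLock().lock();          // await() and remove() both need the write lock
        try {
            while (container.isEmpty()) {
                notEmpty.await();
            }
            return container.remove(0);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}

Since the readers in the sample code also remove elements, routing the consume step through the write lock is arguably what the data structure needs anyway; read-only lookups can still take the read lock and run concurrently.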
reentrantlock 2014-01-03
Quoting reply #4 from xiaoyagouwei:
Or you could use: Lock lock = new ReentrantLock(); Condition writeLock = lock.newCondition(); Condition readLock = lock.newCondition();

To be honest, I feel you don't really understand what the read-write lock introduced in JDK 1.5 is for. The main purpose of using a read-write lock instead of a ReentrantLock is to improve the performance of the read threads, because multiple threads can read data concurrently as long as no write thread is writing. That is exactly the difference between a read-write lock and a ReentrantLock. Anyway, I appreciate your attention to my post.
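To illustrate the point about concurrent reads, a small standalone demo (not from the thread): two threads hold the read lock of the same ReentrantReadWriteLock at the same time, which a plain ReentrantLock cannot offer.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConcurrentReaders {
    public static void main(String[] args) throws Exception {
        final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        final CountDownLatch bothInside = new CountDownLatch(2);

        Runnable reader = new Runnable() {
            public void run() {
                rw.readLock().lock();
                try {
                    bothInside.countDown();
                    bothInside.await();   // only passes once both readers hold the lock together
                    System.out.println(Thread.currentThread().getName() + " holds the read lock");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    rw.readLock().unlock();
                }
            }
        };

        new Thread(reader).start();
        new Thread(reader).start();
    }
}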
Lsheep 2014-01-03
First of all, next time please post your code inside code tags, the way I show my code below. Second, this block of your code is problematic: a reader may be scheduled out just before it executes the synchronized block, another reader can then enter the loop and fetch the same element, and in the end that element is removed twice. Fortunately, removing an element that no longer exists does not throw an exception; it just returns false.

while(container.size()>0){
    String str = container.get(0);
    synchronized(container){ // remove operation need exclusive lock when read-read
        container.remove(str);
    }

Finally, I don't understand what you are trying to do. When a writer is writing, the other readers are blocked, and when the writer finishes, a reader is automatically woken up. Why do you want the writer to actively notify a reader? As for your example, there are some sleep calls in the code; if you want to notify a sleeping thread, the only option is to interrupt it. So what exactly are you trying to do?
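A minimal sketch (an editor's suggestion, not Lsheep's code) of how the race described above can be closed: the emptiness check and the removal happen inside the same synchronized block, so two reader threads can no longer take and remove the same head element. The helper name takeHead is illustrative.

import java.util.List;

public class SafeConsume {

    // Takes the head element, or returns null if the list is currently empty.
    static String takeHead(List<String> container) {
        synchronized (container) {
            return container.isEmpty() ? null : container.remove(0);
        }
    }
}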
小乞丐_落尘 2014-01-03
My English is not good!~ With a read-write lock, while the write lock is held for writing, the read threads cannot acquire the lock. So while one thread is writing into the collection, the other threads simply cannot read from it; once the write is finished and the write lock is unlocked, the read threads automatically acquire the lock. As for your question of how to tell the read threads "I have finished writing into the collection" - isn't that exactly what a read-write lock already does? As long as a write is in progress and not yet finished, nothing can be read. Maybe I haven't fully understood what you mean. Or you could use: Lock lock = new ReentrantLock(); Condition writeLock = lock.newCondition(); Condition readLock = lock.newCondition();
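Expanded into a runnable sketch, the ReentrantLock-plus-Condition pattern hinted at here might look as follows; a single not-empty condition is enough for this producer/consumer shape, and all names are illustrative. Note this is the approach the OP rejects above, because it gives up concurrent reads.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockWithConditions {

    private final List<String> container = new ArrayList<String>();
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    public void write(String item) {
        lock.lock();
        try {
            container.add(item);
            notEmpty.signalAll();       // wake readers waiting for data
        } finally {
            lock.unlock();
        }
    }

    public String read() throws InterruptedException {
        lock.lock();
        try {
            while (container.isEmpty()) {
                notEmpty.await();       // releases the lock while waiting
            }
            return container.remove(0);
        } finally {
            lock.unlock();
        }
    }
}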
reentrantlock 2014-01-02
Quoting reply #2 from lonthy:
It has been a long time since I last looked at threading, and my Java skills are limited. I hope the OP doesn't mind my comment, hehe. I will keep working hard to become an expert like you.

Anyway, thanks for your attention to this question.
reentrantlock 2013-12-31
To be honest, I have been mulling over this question for several days.
lonthy 2013-12-31
It has been a long time since I last looked at threading, and my Java skills are limited. I hope the OP doesn't mind my comment, hehe. I will keep working hard to become an expert like you.
Caution The READPAST hint is not compatible with HOLDLOCK.  Try This: Using Locking Hints 1. Open a Query Window and connect to the pubs database. 2. Execute the following statements (--Conn 1 is optional to help you keep track of each connection): BEGIN TRANSACTION -- Conn 1 UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032' 3. Open a second connection and execute the following statements: SELECT @@lock_timeout -- Conn 2 GO SELECT * FROM titles SELECT * FROM authors 4. Open a third connection and execute the following statements: SET LOCK_TIMEOUT 0 -- Conn 3 SELECT * FROM titles SELECT * FROM authors 5. Open a fourth connection and execute the following statement: SELECT * FROM titles (READPAST) -- Conn 4 WHERE title_ID < 'C' SELECT * FROM authors How many records were returned? 3 6. Open a fifth connection and execute the following statement: SELECT * FROM titles (NOLOCK) -- Conn 5 WHERE title_ID 0 the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in that 20 seconds, the lock manager no longer checks for deadlocks at each block and detection again only happens every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process. Note Please note the distinction between application lock and other locks’ deadlock detection. For application lock, we do not rollback the transaction of the deadlock victim but simply return a -3 to sp_getapplock, which the application needs to handle itself. Deadlock Resolution How is a deadlock resolved? SQL Server picks one of the connections as a deadlock victim. The victim is chosen based on either which is the least expensive transaction (calculated using the number and size of the log records) to roll back or in which process “SET DEADLOCK_PRIORITY LOW” is specified. The victim’s transaction is rolled back, held locks are released, and SQL Server sends error 1205 to the victim’s client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue. Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction. Symptoms of deadlocking Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlock occurring are:  Clients complain of mysteriously canceled queries when using certain features of an application.  May be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur. Triggers and Deadlock Triggers promote the deadlock priority of the SPID for the life of the trigger execution when the DEADLOCK PRIORITY is not set to low. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim. Warning Bug 235794 is filed against SQL Server 2000 where a blocked SPID that is not a participant of a deadlock may incorrectly be chosen as a deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. 
See KB article Q288752: “Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim” for more information. Distributed Deadlock – Scenario 1 Distributed Deadlocks The term distributed deadlock is ambiguous. There are many types of distributed deadlocks. Scenario 1 Client application opens connection A, begins a transaction, acquires some locks, opens connection B, connection B gets blocked by A but the application is designed to not commit A’s transaction until B completes. Note SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed in scenario #4 in “Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems”. Distributed Deadlock – Scenario 2 Scenario 2 Distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid 62 is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes his work, but spid 61 is blocked by 62 who is blocked by 60. This scenario is described in article “Q239753 - Deadlock Situation Not Detected by SQL Server.” Note SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock. The diagram in the slide illustrates this situation. Resources locked by a spid are below that spid (in a box). Arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires. A circle represents a transaction; spids in the same transaction are shown in the same circle. Distributed Deadlock – Scenario 3 Scenario 3 Distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by Spid 60. Note No version of SQL Server is currently designed to detect this distributed deadlock. Lesson 4: Information Collection and Analysis This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Identify specific information needed for troubleshooting issues.  Locate and collect information needed for troubleshooting issues.  Analyze output of DBCC Inputbuffer, DBCC PSS, and DBCC Page commands.  Review information collected from master.dbo.sysprocesses table.  Review information collected from master.dbo.syslockinfo table.  Review output of sp_who, sp_who2, sp_lock.  Analyze Profiler log for query usage pattern.  Review output of trace flags to help troubleshoot deadlocks. Recommended Reading Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0 Q271509 - INF: How to Monitor SQL Server 2000 Blocking Q251004 - How to Monitor SQL Server 7.0 Blocking Q224453 - Understanding and Resolving SQL Server 7.0 Blocking Problem Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler Locking and Blocking  Try This: Examine Blocked Processes 1. Open a Query Window and connect to the pubs database. 
Execute the following statements: BEGIN TRAN -- connection 1 UPDATE titles SET price = price + 1 2. Open another connection and execute the following statement: SELECT * FROM titles-- connection 2 3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3) 4. In the same connection, execute the following: SELECT spid, cmd, waittype FROM master..sysprocesses WHERE waittype 0 -- connection 3 5. Do not close any of the connections! What was the wait type of the blocked process?  Try This: Look at locks held Assumes all your connections are still open from the previous exercise. • Execute sp_lock -- Connection 3 What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction. Collecting Information See Module 2 for more about how to gather this information using various tools. Recognizing Blocking Problems How to Recognize Blocking Problems  Users complain about poor performance at a certain time of day, or after a certain number of users connect.  SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column.  More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.  Possibl
Timeouts and the connect method 18.1.4.2. Timeouts and the accept method 18.1.5. Example 18.2. ssl — TLS/SSL wrapper for socket objects 18.2.1. Functions, Constants, and Exceptions 18.2.1.1. Socket creation 18.2.1.2. Context creation 18.2.1.3. Random generation 18.2.1.4. Certificate handling 18.2.1.5. Constants 18.2.2. SSL Sockets 18.2.3. SSL Contexts 18.2.4. Certificates 18.2.4.1. Certificate chains 18.2.4.2. CA certificates 18.2.4.3. Combined key and certificate 18.2.4.4. Self-signed certificates 18.2.5. Examples 18.2.5.1. Testing for SSL support 18.2.5.2. Client-side operation 18.2.5.3. Server-side operation 18.2.6. Notes on non-blocking sockets 18.2.7. Memory BIO Support 18.2.8. SSL session 18.2.9. Security considerations 18.2.9.1. Best defaults 18.2.9.2. Manual settings 18.2.9.2.1. Verifying certificates 18.2.9.2.2. Protocol versions 18.2.9.2.3. Cipher selection 18.2.9.3. Multi-processing 18.2.10. LibreSSL support 18.3. select — Waiting for I/O completion 18.3.1. /dev/poll Polling Objects 18.3.2. Edge and Level Trigger Polling (epoll) Objects 18.3.3. Polling Objects 18.3.4. Kqueue Objects 18.3.5. Kevent Objects 18.4. selectors — High-level I/O multiplexing 18.4.1. Introduction 18.4.2. Classes 18.4.3. Examples 18.5. asyncio — Asynchronous I/O, event loop, coroutines and tasks 18.5.1. Base Event Loop 18.5.1.1. Run an event loop 18.5.1.2. Calls 18.5.1.3. Delayed calls 18.5.1.4. Futures 18.5.1.5. Tasks 18.5.1.6. Creating connections 18.5.1.7. Creating listening connections 18.5.1.8. Watch file descriptors 18.5.1.9. Low-level socket operations 18.5.1.10. Resolve host name 18.5.1.11. Connect pipes 18.5.1.12. UNIX signals 18.5.1.13. Executor 18.5.1.14. Error Handling API 18.5.1.15. Debug mode 18.5.1.16. Server 18.5.1.17. Handle 18.5.1.18. Event loop examples 18.5.1.18.1. Hello World with call_soon() 18.5.1.18.2. Display the current date with call_later() 18.5.1.18.3. Watch a file descriptor for read events 18.5.1.18.4. Set signal handlers for SIGINT and SIGTERM 18.5.2. Event loops 18.5.2.1. Event loop functions 18.5.2.2. Available event loops 18.5.2.3. Platform support 18.5.2.3.1. Windows 18.5.2.3.2. Mac OS X 18.5.2.4. Event loop policies and the default policy 18.5.2.5. Event loop policy interface 18.5.2.6. Access to the global loop policy 18.5.2.7. Customizing the event loop policy 18.5.3. Tasks and coroutines 18.5.3.1. Coroutines 18.5.3.1.1. Example: Hello World coroutine 18.5.3.1.2. Example: Coroutine displaying the current date 18.5.3.1.3. Example: Chain coroutines 18.5.3.2. InvalidStateError 18.5.3.3. TimeoutError 18.5.3.4. Future 18.5.3.4.1. Example: Future with run_until_complete() 18.5.3.4.2. Example: Future with run_forever() 18.5.3.5. Task 18.5.3.5.1. Example: Parallel execution of tasks 18.5.3.6. Task functions 18.5.4. Transports and protocols (callback based API) 18.5.4.1. Transports 18.5.4.1.1. BaseTransport 18.5.4.1.2. ReadTransport 18.5.4.1.3. WriteTransport 18.5.4.1.4. DatagramTransport 18.5.4.1.5. BaseSubprocessTransport 18.5.4.2. Protocols 18.5.4.2.1. Protocol classes 18.5.4.2.2. Connection callbacks 18.5.4.2.3. Streaming protocols 18.5.4.2.4. Datagram protocols 18.5.4.2.5. Flow control callbacks 18.5.4.2.6. Coroutines and protocols 18.5.4.3. Protocol examples 18.5.4.3.1. TCP echo client protocol 18.5.4.3.2. TCP echo server protocol 18.5.4.3.3. UDP echo client protocol 18.5.4.3.4. UDP echo server protocol 18.5.4.3.5. Register an open socket to wait for data using a protocol 18.5.5. Streams (coroutine based API) 18.5.5.1. Stream functions 18.5.5.2. StreamReader 18.5.5.3. 
StreamWriter 18.5.5.4. StreamReaderProtocol 18.5.5.5. IncompleteReadError 18.5.5.6. LimitOverrunError 18.5.5.7. Stream examples 18.5.5.7.1. TCP echo client using streams 18.5.5.7.2. TCP echo server using streams 18.5.5.7.3. Get HTTP headers 18.5.5.7.4. Register an open socket to wait for data using streams 18.5.6. Subprocess 18.5.6.1. Windows event loop 18.5.6.2. Create a subprocess: high-level API using Process 18.5.6.3. Create a subprocess: low-level API using subprocess.Popen 18.5.6.4. Constants 18.5.6.5. Process 18.5.6.6. Subprocess and threads 18.5.6.7. Subprocess examples 18.5.6.7.1. Subprocess using transport and protocol 18.5.6.7.2. Subprocess using streams 18.5.7. Synchronization primitives 18.5.7.1. Locks 18.5.7.1.1. Lock 18.5.7.1.2. Event 18.5.7.1.3. Condition 18.5.7.2. Semaphores 18.5.7.2.1. Semaphore 18.5.7.2.2. BoundedSemaphore 18.5.8. Queues 18.5.8.1. Queue 18.5.8.2. PriorityQueue 18.5.8.3. LifoQueue 18.5.8.3.1. Exceptions 18.5.9. Develop with asyncio 18.5.9.1. Debug mode of asyncio 18.5.9.2. Cancellation 18.5.9.3. Concurrency and multithreading 18.5.9.4. Handle blocking functions correctly 18.5.9.5. Logging 18.5.9.6. Detect coroutine objects never scheduled 18.5.9.7. Detect exceptions never consumed 18.5.9.8. Chain coroutines correctly 18.5.9.9. Pending task destroyed 18.5.9.10. Close transports and event loops 18.6. asyncore — Asynchronous socket handler 18.6.1. asyncore Example basic HTTP client 18.6.2. asyncore Example basic echo server 18.7. asynchat — Asynchronous socket command/response handler 18.7.1. asynchat Example 18.8. signal — Set handlers for asynchronous events 18.8.1. General rules 18.8.1.1. Execution of Python signal handlers 18.8.1.2. Signals and threads 18.8.2. Module contents 18.8.3. Example 18.9. mmap — Memory-mapped file support 19. Internet Data Handling 19.1. email — An email and MIME handling package 19.1.1. email.message: Representing an email message 19.1.2. email.parser: Parsing email messages 19.1.2.1. FeedParser API 19.1.2.2. Parser API 19.1.2.3. Additional notes 19.1.3. email.generator: Generating MIME documents 19.1.4. email.policy: Policy Objects 19.1.5. email.errors: Exception and Defect classes 19.1.6. email.headerregistry: Custom Header Objects 19.1.7. email.contentmanager: Managing MIME Content 19.1.7.1. Content Manager Instances 19.1.8. email: Examples 19.1.9. email.message.Message: Representing an email message using the compat32 API 19.1.10. email.mime: Creating email and MIME objects from scratch 19.1.11. email.header: Internationalized headers 19.1.12. email.charset: Representing character sets 19.1.13. email.encoders: Encoders 19.1.14. email.utils: Miscellaneous utilities 19.1.15. email.iterators: Iterators 19.2. json — JSON encoder and decoder 19.2.1. Basic Usage 19.2.2. Encoders and Decoders 19.2.3. Exceptions 19.2.4. Standard Compliance and Interoperability 19.2.4.1. Character Encodings 19.2.4.2. Infinite and NaN Number Values 19.2.4.3. Repeated Names Within an Object 19.2.4.4. Top-level Non-Object, Non-Array Values 19.2.4.5. Implementation Limitations 19.2.5. Command Line Interface 19.2.5.1. Command line options 19.3. mailcap — Mailcap file handling 19.4. mailbox — Manipulate mailboxes in various formats 19.4.1. Mailbox objects 19.4.1.1. Maildir 19.4.1.2. mbox 19.4.1.3. MH 19.4.1.4. Babyl 19.4.1.5. MMDF 19.4.2. Message objects 19.4.2.1. MaildirMessage 19.4.2.2. mboxMessage 19.4.2.3. MHMessage 19.4.2.4. BabylMessage 19.4.2.5. MMDFMessage 19.4.3. Exceptions 19.4.4. Examples 19.5. mimetypes — Map filenames to MIME types 19.5.1. 
MimeTypes Objects 19.6. base64 — Base16, Base32, Base64, Base85 Data Encodings 19.7. binhex — Encode and decode binhex4 files 19.7.1. Notes 19.8. binascii — Convert between binary and ASCII 19.9. quopri — Encode and decode MIME quoted-printable data 19.10. uu — Encode and decode uuencode files 20. Structured Markup Processing Tools 20.1. html — HyperText Markup Language support 20.2. html.parser — Simple HTML and XHTML parser 20.2.1. Example HTML Parser Application 20.2.2. HTMLParser Methods 20.2.3. Examples 20.3. html.entities — Definitions of HTML general entities 20.4. XML Processing Modules 20.4.1. XML vulnerabilities 20.4.2. The defusedxml and defusedexpat Packages 20.5. xml.etree.ElementTree — The ElementTree XML API 20.5.1. Tutorial 20.5.1.1. XML tree and elements 20.5.1.2. Parsing XML 20.5.1.3. Pull API for non-blocking parsing 20.5.1.4. Finding interesting elements 20.5.1.5. Modifying an XML File 20.5.1.6. Building XML documents 20.5.1.7. Parsing XML with Namespaces 20.5.1.8. Additional resources 20.5.2. XPath support 20.5.2.1. Example 20.5.2.2. Supported XPath syntax 20.5.3. Reference 20.5.3.1. Functions 20.5.3.2. Element Objects 20.5.3.3. ElementTree Objects 20.5.3.4. QName Objects 20.5.3.5. TreeBuilder Objects 20.5.3.6. XMLParser Objects 20.5.3.7. XMLPullParser Objects 20.5.3.8. Exceptions 20.6. xml.dom — The Document Object Model API 20.6.1. Module Contents 20.6.2. Objects in the DOM 20.6.2.1. DOMImplementation Objects 20.6.2.2. Node Objects 20.6.2.3. NodeList Objects 20.6.2.4. DocumentType Objects 20.6.2.5. Document Objects 20.6.2.6. Element Objects 20.6.2.7. Attr Objects 20.6.2.8. NamedNodeMap Objects 20.6.2.9. Comment Objects 20.6.2.10. Text and CDATASection Objects 20.6.2.11. ProcessingInstruction Objects 20.6.2.12. Exceptions 20.6.3. Conformance 20.6.3.1. Type Mapping 20.6.3.2. Accessor Methods 20.7. xml.dom.minidom — Minimal DOM implementation 20.7.1. DOM Objects 20.7.2. DOM Example 20.7.3. minidom and the DOM standard 20.8. xml.dom.pulldom — Support for building partial DOM trees 20.8.1. DOMEventStream Objects 20.9. xml.sax — Support for SAX2 parsers 20.9.1. SAXException Objects 20.10. xml.sax.handler — Base classes for SAX handlers 20.10.1. ContentHandler Objects 20.10.2. DTDHandler Objects 20.10.3. EntityResolver Objects 20.10.4. ErrorHandler Objects 20.11. xml.sax.saxutils — SAX Utilities 20.12. xml.sax.xmlreader — Interface for XML parsers 20.12.1. XMLReader Objects 20.12.2. IncrementalParser Objects 20.12.3. Locator Objects 20.12.4. InputSource Objects 20.12.5. The Attributes Interface 20.12.6. The AttributesNS Interface 20.13. xml.parsers.expat — Fast XML parsing using Expat 20.13.1. XMLParser Objects 20.13.2. ExpatError Exceptions 20.13.3. Example 20.13.4. Content Model Descriptions 20.13.5. Expat error constants 21. Internet Protocols and Support 21.1. webbrowser — Convenient Web-browser controller 21.1.1. Browser Controller Objects 21.2. cgi — Common Gateway Interface support 21.2.1. Introduction 21.2.2. Using the cgi module 21.2.3. Higher Level Interface 21.2.4. Functions 21.2.5. Caring about security 21.2.6. Installing your CGI script on a Unix system 21.2.7. Testing your CGI script 21.2.8. Debugging CGI scripts 21.2.9. Common problems and solutions 21.3. cgitb — Traceback manager for CGI scripts 21.4. wsgiref — WSGI Utilities and Reference Implementation 21.4.1. wsgiref.util – WSGI environment utilities 21.4.2. wsgiref.headers – WSGI response header tools 21.4.3. wsgiref.simple_server – a simple WSGI HTTP server 21.4.4. 
wsgiref.validate — WSGI conformance checker 21.4.5. wsgiref.handlers – server/gateway base classes 21.4.6. Examples 21.5. urllib — URL handling modules 21.6. urllib.request — Extensible library for opening URLs 21.6.1. Request Objects 21.6.2. OpenerDirector Objects 21.6.3. BaseHandler Objects 21.6.4. HTTPRedirectHandler Objects 21.6.5. HTTPCookieProcessor Objects 21.6.6. ProxyHandler Objects 21.6.7. HTTPPasswordMgr Objects 21.6.8. HTTPPasswordMgrWithPriorAuth Objects 21.6.9. AbstractBasicAuthHandler Objects 21.6.10. HTTPBasicAuthHandler Objects 21.6.11. ProxyBasicAuthHandler Objects 21.6.12. AbstractDigestAuthHandler Objects 21.6.13. HTTPDigestAuthHandler Objects 21.6.14. ProxyDigestAuthHandler Objects 21.6.15. HTTPHandler Objects 21.6.16. HTTPSHandler Objects 21.6.17. FileHandler Objects 21.6.18. DataHandler Objects 21.6.19. FTPHandler Objects 21.6.20. CacheFTPHandler Objects 21.6.21. UnknownHandler Objects 21.6.22. HTTPErrorProcessor Objects 21.6.23. Examples 21.6.24. Legacy interface 21.6.25. urllib.request Restrictions 21.7. urllib.response — Response classes used by urllib 21.8. urllib.parse — Parse URLs into components 21.8.1. URL Parsing 21.8.2. Parsing ASCII Encoded Bytes 21.8.3. Structured Parse Results 21.8.4. URL Quoting 21.9. urllib.error — Exception classes raised by urllib.request 21.10. urllib.robotparser — Parser for robots.txt 21.11. http — HTTP modules 21.11.1. HTTP status codes 21.12. http.client — HTTP protocol client 21.12.1. HTTPConnection Objects 21.12.2. HTTPResponse Objects 21.12.3. Examples 21.12.4. HTTPMessage Objects 21.13. ftplib — FTP protocol client 21.13.1. FTP Objects 21.13.2. FTP_TLS Objects 21.14. poplib — POP3 protocol client 21.14.1. POP3 Objects 21.14.2. POP3 Example 21.15. imaplib — IMAP4 protocol client 21.15.1. IMAP4 Objects 21.15.2. IMAP4 Example 21.16. nntplib — NNTP protocol client 21.16.1. NNTP Objects 21.16.1.1. Attributes 21.16.1.2. Methods 21.16.2. Utility functions 21.17. smtplib — SMTP protocol client 21.17.1. SMTP Objects 21.17.2. SMTP Example 21.18. smtpd — SMTP Server 21.18.1. SMTPServer Objects 21.18.2. DebuggingServer Objects 21.18.3. PureProxy Objects 21.18.4. MailmanProxy Objects 21.18.5. SMTPChannel Objects 21.19. telnetlib — Telnet client 21.19.1. Telnet Objects 21.19.2. Telnet Example 21.20. uuid — UUID objects according to RFC 4122 21.20.1. Example 21.21. socketserver — A framework for network servers 21.21.1. Server Creation Notes 21.21.2. Server Objects 21.21.3. Request Handler Objects 21.21.4. Examples 21.21.4.1. socketserver.TCPServer Example 21.21.4.2. socketserver.UDPServer Example 21.21.4.3. Asynchronous Mixins 21.22. http.server — HTTP servers 21.23. http.cookies — HTTP state management 21.23.1. Cookie Objects 21.23.2. Morsel Objects 21.23.3. Example 21.24. http.cookiejar — Cookie handling for HTTP clients 21.24.1. CookieJar and FileCookieJar Objects 21.24.2. FileCookieJar subclasses and co-operation with web browsers 21.24.3. CookiePolicy Objects 21.24.4. DefaultCookiePolicy Objects 21.24.5. Cookie Objec