How do I fix "error reading: invalid property value"?

chenpeng20000 2003-08-18 02:47:19
When opening a downloaded project I get:
Error reading StaticText1->Caption: Invalid property value. Ignore the error and continue? NOTE: Ignoring the error may cause components to be deleted or property values to be lost.
How can I fix this?
4 replies
chenpeng20000 2003-08-18
Installing BCB6 fixed it.
Iamsnowgirl 2003-08-18
It should be a version problem.
netsys2 2003-08-18
Your PROJECT is probably the wrong version.
柯本 2003-08-18
1. Upgrade your BCB (opening a form saved by a higher version in a lower version causes exactly this kind of error).
2. Open the form as text (if you can't, use Convert to turn the form into text), find the StaticText's Caption and change it (it is probably a character-set problem); see the sketch below.
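
To make step 2 concrete, here is a minimal sketch of what the fix might look like (the file name Unit1.dfm and the caption text are made up for illustration; convert.exe ships in the BCB/Delphi Bin directory, though its exact invocation varies between versions). First, if the IDE refuses to show the form as text, convert the binary .dfm to text from the command line:

    convert Unit1.dfm

In the text form, the rejected Caption typically holds raw bytes saved under a different codepage:

    object StaticText1: TStaticText
      Left = 8
      Top = 8
      Caption = 'ÎÄ×Ö'
    end

Replace the value with plain ASCII (or delete the Caption line entirely):

    Caption = 'Hello'

Alternatively, drop the Caption property from the .dfm and set it at runtime in the form's constructor, e.g. StaticText1->Caption = "Hello"; which sidesteps the character-set issue in the form file altogether.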
Version 1.7 ----------- - ADD: Delphi/CBuilder 10.2 Tokyo now supported. - ADD: Delphi/CBuilder 10.1 Berlin now supported. - ADD: Delphi/CBuilder 10 Seattle now supported. - ADD: Delphi/CBuilder XE8 now supported. - ADD: Delphi/CBuilder XE7 now supported. - ADD: Delphi/CBuilder XE6 now supported. - ADD: Delphi/CBuilder XE5 now supported. - ADD: Delphi/CBuilder XE4 now supported. - ADD: Delphi/CBuilder XE3 now supported. - ADD: Delphi/CBuilder XE2 now supported. - ADD: Delphi/CBuilder XE now supported. - ADD: Delphi/CBuilder 2010 now supported. - ADD: Delphi/CBuilder 2009 now supported. - ADD: New demo project FlexCADImport. - FIX: The height of the TFlexRegularPolygon object incorrectly changes with its rotation. - FIX: Added division by zero protect in method TFlexControl.MovePathSegment. - FIX: The background beyond docuemnt wasn't filled when TFlexPanel.DocClipping=True. - FIX: In "Windows ClearType" font rendering mode (OS Windows mode) the "garbage" pixels can appear from the right and from the bottom sides of the painted rectangle of the TFlexText object. - FIX: The result rectangle incorrectly calculated in the TFlexText.GetRefreshRect method. - FIX: Added FPaintCache.rcPaint cleanup in the TFlexPanel.WMPaint method. Now it is possible to define is the drawing take place via WMPaint or via the PaintTo direct call (if rcPaint contain non-empty rectangle then WMPaint in progress). - FIX: The TFlexPanel.FPaintCache field moved in the protected class section. Added rcPaint field in FPaintCache that represents drawing rectangle. - ADD: In the text prcise mode (TFlexText.Precise=True) takes into account the rotation angle (TFlexText.Angle). - FIX: Removed FG_NEWTEXTROTATE directive (the TFlexText Precise mode should be used instead). - FIX: The TFlexRegularPolygon object clones incorrectly drawed in case when TFlexRegularPolygon have alternative brush (gradient, texture). - ADD: Add TFlexPanel.InvalidateControl virtual method which calls from TFle
集合了 所有的 Unix命令大全 登陆服务器时输入 公帐号 openlab-open123 telnet 192.168.0.23 自己帐号 sd08077-you0 ftp工具 192.168.0.202 tools-toolss 老师测评网址 http://172.16.0.198:8080/poll/ 各个 shell 可互相切换 ksh:$ sh:$ csh:guangzhou% bash:bash-3.00$ 一、注意事项 命令和参数之间必需用空格隔开,参数和参数之间也必需用空格隔开。 一行不能超过256个字符;大小写有区分。 二、特殊字符含义 文件名以“.”开头的都是隐藏文件/目录,只需在文件/目录名前加“.”就可隐藏它。 ~/ 表示主目录。 ./ 当前目录(一个点)。 ../ 上一级目录(两个点)。 ; 多个命令一起用。 > >> 输出重定向 。将一个命令的输出内容写入到一个文件里面。如果该文件存在, 就将该文件的内容覆盖; 如果不存在就先创建该文件, 然后再写入内容。 输出重定向,意思就是说,将原来屏幕输出变为文件输出,即将内容输到文件中。 < << 输入重定向。 本来命令是通过键盘得到输入的,但是用小于号,就能够使命令从文件中得到输入。 \ 表示未写完,回车换行再继续。 * 匹配零个或者多个字符。 ? 匹配一个字符。 [] 匹配中括号里的内容[a-z][A-Z][0-9]。 ! 事件。 $ 取环境变量的值。 | 管道。把前一命令的输出作为后一命令的输入,把几个命令连接起来。 |经常跟tee连用,tee 把内容保存到文档并显示出来。 三、通用后接命令符 -a 所有(all)。 -e 所有(every),比a更详细。 -f 取消保护。 -i 添加提示。 -p 强制执行。 -r 目录管理。 分屏显示的中途操作 空格 继续打开下一屏; 回车 继续打开下一行; b 另外开上一屏; f 另外开下一屏; h 帮助; q或Ctrl+C 退出; /字符串 从上往下查找匹配的字符串; ?字符串 从下往上查找匹配的字符串; n 继续查找。 四、退出命令 exit 退出; DOS内部命令 用于退出当前的命令处理器(COMMAND.COM) 恢复前一个命令处理器。 Ctrl+d 跟exit一样效果,表中止本次操作。 logout 当csh时可用来退出,其他shell不可用。 clear 清屏,清除(之前的内容并未删除,只是没看到,拉回上面可以看回)。 五、目录管理命令 pwd 显示当前所在目录,打印当前目录的绝对路径。 cd 进入某目录,DOS内部命令 显示或改变当前目录。 cd回车/cd ~ 都是回到自己的主目录。 cd . 当前目录(空格再加一个点)。 cd .. 回到上一级目录(空格再加两个点)。 cd ../.. 向上两级。 cd /user/s0807 从绝对路径去到某目录。 cd ~/s0807 直接进入主目录下的某目录(“cd ~"相当于主目录的路径的简写)。 ls 显示当前目录的所有目录和文件。 用法 ls [-aAbcCdeEfFghHilLmnopqrRstux1@] [file...] ls /etc/ 显示某目录下的所有文件和目录,如etc目录下的。 ls -l (list)列表显示文件(默认按文件名排序), 显示文件的权限、硬链接数(即包含文件数,普通文件是1,目录1+)、用户、组名、大小、修改日期、文件名。 ls -t (time)按修改时间排序,显示目录和文件。 ls -lt 是“-l”和“-t”的组合,按时间顺序显示列表。 ls -F 显示文件类型,目录“/ ”结尾;可执行文件“*”结尾;文本文件(none),没有结尾。 ls -R 递归显示目录结构。即该目录下的文件和各个副目录下的文件都一一显示。 ls -a 显示所有文件,包括隐藏文件。 文件权限 r 读权限。对普通文件来说,是读取该文件的权限;对目录来说,是获得该目录下的文件信息。 w 写权限。对文件,是修改;对目录,是增删文件与子目录。 (注 删除没有写权限的文件可以用 rm -f ,这是为了操作方便,是人性化的设计)。 x 执行权限;对目录,是进入该目录 - 表示没有权限 形式 - rw- r-- r-- 其中 第一个是文件类型(-表普通文件,d表目录,l表软链接文件) 第2~4个是属主,生成文件时登录的人,权限最高,用u表示 第5~7个是属组,系统管理员分配的同组的一个或几个人,用g表示 第8~10个是其他人,除属组外的人,用o表示 所有人,包括属主、属组及其他人,用a表示 chmod 更改权限; 用法 chmod [-fR] <绝对模式> 文件 ... chmod [-fR] <符号模式列表> 文件 ... 其中 <符号模式列表> 是一个用逗号分隔的表 [ugoa]{+|-|=}[rwxXlstugo] chmod u+rw 给用户加权限。同理,u-rw也可以减权限。 chmod u=rw 给用户赋权限。与加权限不一样,赋权限有覆盖的效果。 主要形式有如下几种 chmod u+rw chmod u=rw chmod u+r, u+w chmod u+rw,g+w, o+r chmod 777( 用数字的方式设置权限是最常用的) 数字表示权限时,各数位分别表示属主、属组及其他人; 其中,1是执行权(Execute),2是写权限(Write),4是读权限(Read), 具体权限相当于三种权限的数相加,如7=1+2+4,即拥有读写和执行权。 另外,临时文件/目录的权限为rwt,可写却不可删,关机后自动删除;建临时目录:chmod 777 目录名,再chmod +t 目录名。 id 显示用户有效的uid(用户字)和gid(组名) 用法 id [-ap] [user] id 显示自己的。 id root 显示root的。 id -a root 显示用户所在组的所有组名(如root用户,是所有组的组员) df 查看文件系统,查看数据区 用法 df [-F FSType] [-abeghklntVvZ] [-o FSType 特定选项] [目录 | 块设备 | 资源] df -k 以kbytes显示文件大小的查看文件系统方式 六、显示文件内容 more 分屏显示文件的内容。 用法 more [-cdflrsuw] [-行] [+行号] [+/模式] [文件名 ...]。 显示7个信息:用户名 密码 用户id(uid) 组id(gid) 描述信息(一般为空) 用户主目录 login shell(登录shell) cat 显示文件内容,不分屏(一般用在小文件,大文件显示不下);合并文件,仅在屏幕上合并,并不改变原文件。 用法 cat [ -usvtebn ] [-|文件] ... tail 实时监控文件,一般用在日志文件,可以只看其中的几行。 用法 tail [+/-[n][lbc][f]] [文件] tail [+/-[n][l][r|f]] [文件] 七、文件/目录的增删 echo 显示一行内容。 touch 如果文件/目录不存在,则创建新文件/目录;如果文件存在,那么就是更新该文件的最后访问时间, 用法 touch [-acm] [-r ref_file] 文件... touch [-acm] [MMDDhhmm[yy]] 文件... touch [-acm] [-t [[CC]YY]MMDDhhmm[.SS]] file... mkdir 创建目录(必须有创建目录的权限) 用法 mkdir [-m 模式] [-p] dirname ... 
mkdir dir1/dir2 在dir1下建dir2 mkdir dir13 dir4 dir5 连建多个 mkdir ~/games 用户主目录下建(默认在当前目录下创建) mkdir -p dir6/dir7/dir8 强制创建dir8;若没有前面的目录,会自动创建dir6和dir7。 不用-p时,若没有dir6/dir7,则创建失败。 cp 复制文件/目录 cp 源文件 目标文件 复制文件;若已有文件则覆盖 cp -r 源目录 目标目录 复制目录;若已有目录则把源目录复制到目标目录下, 没有目标目录时,相当于完全复制源目录,只是文件名不同。 cp beans apple dir2 把beans、apple文件复制到dir2目录下 cp -i beans apple 增加是否覆盖的提示 mv 移动或重命名文件/目录 用法 mv [-f] [-i] f1 f2 mv [-f] [-i] f1 ... fn d1 mv [-f] [-i] d1 d2 mv 源文件名 目标文件名 若目标文件名还没有,则是源文件重命名为目标文件;若目标文件已存在,则源文件覆盖目标文件。 mv 源文件名 目标目录 移动文件 mv 源目录 目标目录 若目标目录不存在,则源目录重命名;若目标目录已存在,则源目录移动到目标目录下。 rm 删除文件/目录 用法 rm [-fiRr] 文件 ... rm 文件名 删除文件。 rm -r 目录名 删除目录。 rm –f 文件 只要是该文件或者目录的拥有者,无论是否有权限删除,都可以用这个命令参数强行删除。 rm -rf * 删除所有文件及目录 rmdir 删除空目录。只可以删除空目录。 ln 创建硬链接或软链接,硬链接=同一文件的多个名字;软链接=快捷方式 用法 ln [-f] [-n] [-s] f1 [f2] ln [-f] [-n] [-s] f1 ... fn d1 ln [-f] [-n] -s d1 d2 ln file1 file1.ln 创建硬链接。感觉是同一文件,删除一个,对另一个没有影响;须两个都删除才算删除。 ln -s file1 file1.sln 创建软链接。可跨系统操作,冲破操作权限;也是快捷方式。 八、时间显示 date 显示时间,精确到秒 用法 date [-u] mmddHHMM[[cc]yy][.SS] date [-u] [+format] date -a [-]sss[.fff] cal 显示日历 cal 9 2008 显示2008年9月的日历; cal 显示当月的 用法 cal [ [月] 年 ] 九、帮助 man 帮助( format and display the on-line manual pages) 用法 man [-] [-adFlrt] [-M 路径] [-T 宏软件包] [-s 段] 名称 ... man [-] [-adFlrt] [-M path] [-T macro-package] [-s section] name... man [-M 路径] -k 关键字 ... man [-M 路径] -f 文件 ... awk 按一定格式输出(pattern scanning and processing language) 用法 awk [-Fc] [-f 源代码 | 'cmds'] [文件] 十、vi 底行模式 /? 命令模式 i a o 输入模式 vi 的使用方法 1、光标 h 左 j 下 k 上 l 右 set nu 显示行号(set nonu) 21 光标停在指定行 21G 第N行 (G到文件尾,1G到文件头) 如果要将光标移动到文件第一行,那么就按 1G H 屏幕头 M 屏幕中间 L 屏幕底 ^ 或 shift+6 行首 $ 或 shift+4 行尾 Ctrl+f 下翻 Ctrl+b 上翻 2、输入 (输入模式) o 光标往下换一行 O (大写字母o)在光标所在行上插入一空行 i 在光标所在位置的前面插入字母 a 在光标所在位置的后面插入一个新字母 退出插入状态。 3、修改替换 r 替换一个字符 dd 删除行,剪切行 (5dd删除5行) 5,10d 删除 5 至 10 行(包括第 5行和第 10 行) x 删除一个字符 dw 删除词,剪切词。 ( 3dw删除 3 单词) cw 替换一个单词。 (cw 和 dw 的区别 cw 删除某一个单词后直接进入编辑模式,而dw删除词后仍处于命令模式) cc 替换一行 C 替换从光标到行尾 yy 复制行 (用法同下的 Y ,见下行) Y 将光标移动到要复制行位置,按yy。当你想粘贴的时候,请将光标移动到你想复制的位置的前一个位置,然后按 p yw 复制词 p 当前行下粘贴 1,2co3 复制行1,2在行3之后 4,5m6 移动行4,5在行6之后 u 当你的前一个命令操作是一个误操作的时候,那么可以按一下 u键,即可复原。只能撤销一次 r file2 在光标所在处插入另一个文件 ~ 将字母变成大写 J 可以将当前行与下一行连接起来 /字符串 从上往下找匹配的字符串 ?字符串 从下往上找匹配的字符串 n 继续查找 1,$s/旧串/新串/g 替换全文(或者 %s/旧串/新串/g) (1表示从第一行开始) 没有g则只替换一次,加g替换所有 3、存盘和退出 w 存盘 w newfile 存成新文件 wq 存盘再退出VI(或者ZZ或 X) q! 强行退出不存盘 查看用户 users 显示在线用户(仅显示用户名)。 who 显示在线用户,但比users更详细,包括用户名、终端号、登录时间、IP地址。 who am i 仅显示自己,(但包括用户名、端口、登录时间、IP地址;信息量=who)。 whoami 也仅显示自己,但只有用户名(仅显示自己的有效的用户名)。 w 显示比who更多内容,还包括闲置时间、占CPU、平均占用CPU、执行命令。 用法 w [ -hlsuw ] [ 用户 ] su 改变用户,需再输入密码。 用法 su [-] [ username [ arg ... ] ] su - 相当于退出再重新登录。 查找 find 查找文件 用法 find [-H | -L] 路径列表 谓词列表 find / -name perl 从根目录开始查找名为perl的文件。 find . -mtime 10 -print 从当前目录查找距离现在10天时修改的文件,显示在屏幕上。 (注 “10”表示第10天的时候;如果是“+10”表示10天以外的范围;“-10”表示10天以内的范围。) grep 文件中查找字符;有过滤功能,只列出想要的内容 用法 grep -hblcnsviw 模式 文件 . . . 如 grep abc /etc/passwd 在passwd文件下找abc字符 wc 统计 -l 统计行数; -w统计单词数; -c 统计字符数 如 grep wang /etc/passwd|wc -l 统计passwd文件含“wang”的行数 du 查看目录情况 如 du -sk * 不加-s会显示子目录,-k按千字节排序 用法 du [-a] [-d] [-h|-k] [-r] [-o|-s] [-H|-L] [文件...] 
进程管理 ps 显示进程。 用法 ps [ -aAdeflcjLPyZ ] [ -o 格式 ] [ -t 项列表 ] [ -u 用户列表 ] [ -U 用户列表 ] [ -G 组列表 ] [ -p 进程列表 ] [ -g 程序组列表 ] [ -s 标识符列表 ] [ -z 区域列表 ] ps 显示自己的进程。 ps -e 显示每个进程,包括空闲进程。 ps -f 显示详情。 ps -ef 组合-e和-f,所有进程的详情。 ps -U uidlist(用户列表) 具体查看某人的进程。 kill pkill sleep jobs 用法 jobs [-l ] fg %n bg %n stop %n 挂起(仅csh能用) Ctrl+C Ctrl+Z 网络链接 ping usage ping host [timeout] usage ping -s [-l | U] [adLnRrv] [-A addr_family] [-c traffic_class] [-g gateway [-g gateway ...]] [-F flow_label] [-I interval] [-i interface] [-P tos] [-p port] [-t ttl] host [data_size] [npackets] ifconfig -a /sbin/ifconfig 查看本机的IP地址 netstat -rn rlogin ftp 帮助文件 [sd0807@localhost ~]$ help GNU bash, version 3.1.17(1)-release (i686-redhat-linux-gnu) These shell commands are defined internally. Type `help' to see this list. Type `help name' to find out more about the function `name'. Use `info bash' to find out more about the shell in general. Use `man -k' or `info' to find out more about commands not in this list. A star (*) next to a name means that the command is disabled. JOB_SPEC [&] (( expression )) . filename [arguments] [ arg... ] [[ expression ]] alias [-p] [name[=value] ... ] bg [job_spec ...] bind [-lpvsPVS] [-m keymap] [-f fi break [n] builtin [shell-builtin [arg ...]] caller [EXPR] case WORD in [PATTERN [| PATTERN]. cd [-L|-P] [dir] command [-pVv] command [arg ...] compgen [-abcdefgjksuv] [-o option complete [-abcdefgjksuv] [-pr] [-o continue [n] declare [-afFirtx] [-p] [name[=val dirs [-clpv] [+N] [-N] disown [-h] [-ar] [jobspec ...] echo [-neE] [arg ...] enable [-pnds] [-a] [-f filename] eval [arg ...] exec [-cl] [-a name] file [redirec exit [n] export [-nf] [name[=value] ...] or false fc [-e ename] [-nlr] [first] [last fg [job_spec] for NAME [in WORDS ... ;] do COMMA for (( exp1; exp2; exp3 )); do COM function NAME { COMMANDS ; } or NA getopts optstring name [arg] hash [-lr] [-p pathname] [-dt] [na help [-s] [pattern ...] history [-c] [-d offset] [n] or hi if COMMANDS; then COMMANDS; [ elif jobs [-lnprs] [jobspec ...] or job kill [-s sigspec | -n signum | -si let arg [arg ...] local name[=value] ... logout popd [+N | -N] [-n] printf [-v var] format [arguments] pushd [dir | +N | -N] [-n] pwd [-LP] read [-ers] [-u fd] [-t timeout] [ readonly [-af] [name[=value] ...] return [n] select NAME [in WORDS ... ;] do CO set [--abefhkmnptuvxBCHP] [-o option] [arg ...] shift [n] shopt [-pqsu] [-o long-option] opt source filename [arguments] suspend [-f] test [expr] time [-p] PIPELINE times trap [-lp] [arg signal_spec ...] true type [-afptP] name [name ...] typeset [-afFirtx] [-p] name[=valu ulimit [-SHacdfilmnpqstuvx] [limit umask [-p] [-S] [mode] unalias [-a] name [name ...] unset [-f] [-v] [name ...] until COMMANDS; do COMMANDS; done variables - Some variable names an wait [n] while COMMANDS; do COMMANDS; done { COMMANDS ; } 输入 man help BASH_BUILTINS(1) BASH_BUILTINS(1) NAME bash, :, ., [, alias, bg, bind, break, builtin, cd, command, compgen, complete, continue, declare, dirs, disown, echo, enable, eval, exec, exit, export, fc, fg, getopts, hash, help, history, jobs, kill, let, local, logout, popd, printf, pushd, pwd, read, readonly, return, set, shift, shopt, source, suspend, test, times, trap, type, typeset, ulimit, umask, una- lias, unset, wait - bash built-in commands, see bash(1) BASH BUILTIN COMMANDS Unless otherwise noted, each builtin command documented in this section as accepting options preceded by - accepts -- to signify the end of the options. For example, the :, true, false, and test builtins do not accept options. 
: [arguments] No effect; the command does nothing beyond expanding arguments and performing any specified redirections. A zero exit code is returned. . filename [arguments] source filename [arguments] Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename. If filename does not contain a slash, file names in PATH are used to find the directory containing file- name. The file searched for in PATH need not be executable. When bash is not in posix mode, the current directory is searched if no file is found in PATH. If the sourcepath option to the shopt builtin command is turned off, the PATH is not searched. If any arguments are supplied, they become the positional parameters when filename is executed. Otherwise the positional parameters are unchanged. The return status is the status of the last command exited within the script (0 if no commands are executed), and false if filename is not found or cannot be read. alias [-p] [name[=value] ...] Alias with no arguments or with the -p option prints the list of aliases in the form alias name=value on standard output. When arguments are supplied, an alias is defined for each name whose value is given. A trailing space in value causes the next word to be checked for alias substitution when the alias is expanded. For each name in the argument list for which no value is supplied, the name and value of the alias is printed. Alias returns true unless a name is given for which no alias has been defined. bg [jobspec ...] Resume each suspended job jobspec in the background, as if it had been started with &. If jobspec is not present, the shell’s notion of the current job is used. bg jobspec returns 0 unless run when job control is disabled or, when run with job con- trol enabled, any specified jobspec was not found or was started without job control. bind [-m keymap] [-lpsvPSV] bind [-m keymap] [-q function] [-u function] [-r keyseq] bind [-m keymap] -f filename bind [-m keymap] -x keyseq:shell-command bind [-m keymap] keyseq:function-name bind readline-command Display current readline key and function bindings, bind a key sequence to a readline function or macro, or set a readline variable. Each non-option argument is a command as it would appear in .inputrc, but each binding or command must be passed as a sepa- rate argument; e.g., ’"\C-x\C-r": re-read-init-file’. Options, if supplied, have the following meanings: -m keymap Use keymap as the keymap to be affected by the subsequent bindings. Accept- able keymap names are emacs, emacs-standard, emacs-meta, emacs-ctlx, vi, vi-move, vi-command, and vi-insert. vi is equivalent to vi-command; emacs is equivalent to emacs-standard. -l List the names of all readline functions. -p Display readline function names and bindings in such a way that they can be re-read. -P List current readline function names and bindings. -v Display readline variable names and values in such a way that they can be re- read. -V List current readline variable names and values. -s Display readline key sequences bound to macros and the strings they output in such a way that they can be re-read. -S Display readline key sequences bound to macros and the strings they output. -f filename Read key bindings from filename. -q function Query about which keys invoke the named function. -u function Unbind all keys bound to the named function. -r keyseq Remove any current binding for keyseq. 
-x keyseq:shell-command Cause shell-command to be executed whenever keyseq is entered. The return value is 0 unless an unrecognized option is given or an error occurred. break [n] Exit from within a for, while, until, or select loop. If n is specified, break n levels. n must be ≥ 1. If n is greater than the number of enclosing loops, all enclosing loops are exited. The return value is 0 unless the shell is not executing a loop when break is executed. builtin shell-builtin [arguments] Execute the specified shell builtin, passing it arguments, and return its exit sta- tus. This is useful when defining a function whose name is the same as a shell builtin, retaining the functionality of the builtin within the function. The cd builtin is commonly redefined this way. The return status is false if shell-builtin is not a shell builtin command. cd [-L|-P] [dir] Change the current directory to dir. The variable HOME is the default dir. The variable CDPATH defines the search path for the directory containing dir. Alterna- tive directory names in CDPATH are separated by a colon (:). A null directory name in CDPATH is the same as the current directory, i.e., ‘‘.’’. If dir begins with a slash (/), then CDPATH is not used. The -P option says to use the physical directory structure instead of following symbolic links (see also the -P option to the set builtin command); the -L option forces symbolic links to be followed. An argument of - is equivalent to $OLDPWD. If a non-empty directory name from CDPATH is used, or if - is the first argument, and the directory change is successful, the absolute path- name of the new working directory is written to the standard output. The return value is true if the directory was successfully changed; false otherwise. caller [expr] Returns the context of any active subroutine call (a shell function or a script exe- cuted with the . or source builtins. Without expr, caller displays the line number and source filename of the current subroutine call. If a non-negative integer is supplied as expr, caller displays the line number, subroutine name, and source file corresponding to that position in the current execution call stack. This extra information may be used, for example, to print a stack trace. The current frame is frame 0. The return value is 0 unless the shell is not executing a subroutine call or expr does not correspond to a valid position in the call stack. command [-pVv] command [arg ...] Run command with args suppressing the normal shell function lookup. Only builtin com- mands or commands found in the PATH are executed. If the -p option is given, the search for command is performed using a default value for PATH that is guaranteed to find all of the standard utilities. If either the -V or -v option is supplied, a description of command is printed. The -v option causes a single word indicating the command or file name used to invoke command to be displayed; the -V option produces a more verbose description. If the -V or -v option is supplied, the exit status is 0 if command was found, and 1 if not. If neither option is supplied and an error occurred or command cannot be found, the exit status is 127. Otherwise, the exit status of the command builtin is the exit status of command. compgen [option] [word] Generate possible completion matches for word according to the options, which may be any option accepted by the complete builtin with the exception of -p and -r, and write the matches to the standard output. 
When using the -F or -C options, the vari- ous shell variables set by the programmable completion facilities, while available, will not have useful values. The matches will be generated in the same way as if the programmable completion code had generated them directly from a completion specification with the same flags. If word is specified, only those completions matching word will be displayed. The return value is true unless an invalid option is supplied, or no matches were generated. complete [-abcdefgjksuv] [-o comp-option] [-A action] [-G globpat] [-W wordlist] [-P prefix] [-S suffix] [-X filterpat] [-F function] [-C command] name [name ...] complete -pr [name ...] Specify how arguments to each name should be completed. If the -p option is sup- plied, or if no options are supplied, existing completion specifications are printed in a way that allows them to be reused as input. The -r option removes a completion specification for each name, or, if no names are supplied, all completion specifica- tions. The process of applying these completion specifications when word completion is attempted is described above under Programmable Completion. Other options, if specified, have the following meanings. The arguments to the -G, -W, and -X options (and, if necessary, the -P and -S options) should be quoted to protect them from expansion before the complete builtin is invoked. -o comp-option The comp-option controls several aspects of the compspec’s behavior beyond the simple generation of completions. comp-option may be one of: bashdefault Perform the rest of the default bash completions if the compspec gen- erates no matches. default Use readline’s default filename completion if the compspec generates no matches. dirnames Perform directory name completion if the compspec generates no matches. filenames Tell readline that the compspec generates filenames, so it can per- form any filename-specific processing (like adding a slash to direc- tory names or suppressing trailing spaces). Intended to be used with shell functions. nospace Tell readline not to append a space (the default) to words completed at the end of the line. plusdirs After any matches defined by the compspec are generated, directory name completion is attempted and any matches are added to the results of the other actions. -A action The action may be one of the following to generate a list of possible comple- tions: alias Alias names. May also be specified as -a. arrayvar Array variable names. binding Readline key binding names. builtin Names of shell builtin commands. May also be specified as -b. command Command names. May also be specified as -c. directory Directory names. May also be specified as -d. disabled Names of disabled shell builtins. enabled Names of enabled shell builtins. export Names of exported shell variables. May also be specified as -e. file File names. May also be specified as -f. function Names of shell functions. group Group names. May also be specified as -g. helptopic Help topics as accepted by the help builtin. hostname Hostnames, as taken from the file specified by the HOSTFILE shell variable. job Job names, if job control is active. May also be specified as -j. keyword Shell reserved words. May also be specified as -k. running Names of running jobs, if job control is active. service Service names. May also be specified as -s. setopt Valid arguments for the -o option to the set builtin. shopt Shell option names as accepted by the shopt builtin. signal Signal names. 
stopped Names of stopped jobs, if job control is active. user User names. May also be specified as -u. variable Names of all shell variables. May also be specified as -v. -G globpat The filename expansion pattern globpat is expanded to generate the possible completions. -W wordlist The wordlist is split using the characters in the IFS special variable as delimiters, and each resultant word is expanded. The possible completions are the members of the resultant list which match the word being completed. -C command command is executed in a subshell environment, and its output is used as the possible completions. -F function The shell function function is executed in the current shell environment. When it finishes, the possible completions are retrieved from the value of the COMPREPLY array variable. -X filterpat filterpat is a pattern as used for filename expansion. It is applied to the list of possible completions generated by the preceding options and argu- ments, and each completion matching filterpat is removed from the list. A leading ! in filterpat negates the pattern; in this case, any completion not matching filterpat is removed. -P prefix prefix is added at the beginning of each possible completion after all other options have been applied. -S suffix suffix is appended to each possible completion after all other options have been applied. The return value is true unless an invalid option is supplied, an option other than -p or -r is supplied without a name argument, an attempt is made to remove a comple- tion specification for a name for which no specification exists, or an error occurs adding a completion specification. continue [n] Resume the next iteration of the enclosing for, while, until, or select loop. If n is specified, resume at the nth enclosing loop. n must be ≥ 1. If n is greater than the number of enclosing loops, the last enclosing loop (the ‘‘top-level’’ loop) is resumed. The return value is 0 unless the shell is not executing a loop when con- tinue is executed. declare [-afFirtx] [-p] [name[=value] ...] typeset [-afFirtx] [-p] [name[=value] ...] Declare variables and/or give them attributes. If no names are given then display the values of variables. The -p option will display the attributes and values of each name. When -p is used, additional options are ignored. The -F option inhibits the display of function definitions; only the function name and attributes are printed. If the extdebug shell option is enabled using shopt, the source file name and line number where the function is defined are displayed as well. The -F option implies -f. The following options can be used to restrict output to variables with the specified attribute or to give variables attributes: -a Each name is an array variable (see Arrays above). -f Use function names only. -i The variable is treated as an integer; arithmetic evaluation (see ARITHMETIC EVALUATION ) is performed when the variable is assigned a value. -r Make names readonly. These names cannot then be assigned values by subsequent assignment statements or unset. -t Give each name the trace attribute. Traced functions inherit the DEBUG and RETURN traps from the calling shell. The trace attribute has no special mean- ing for variables. -x Mark names for export to subsequent commands via the environment. Using ‘+’ instead of ‘-’ turns off the attribute instead, with the exception that +a may not be used to destroy an array variable. When used in a function, makes each name local, as with the local command. 
If a variable name is followed by =value, the value of the variable is set to value. The return value is 0 unless an invalid option is encountered, an attempt is made to define a function using ‘‘-f foo=bar’’, an attempt is made to assign a value to a readonly variable, an attempt is made to assign a value to an array variable without using the compound assignment syntax (see Arrays above), one of the names is not a valid shell variable name, an attempt is made to turn off readonly status for a readonly variable, an attempt is made to turn off array status for an array variable, or an attempt is made to display a non-exis- tent function with -f. dirs [-clpv] [+n] [-n] Without options, displays the list of currently remembered directories. The default display is on a single line with directory names separated by spaces. Directories are added to the list with the pushd command; the popd command removes entries from the list. +n Displays the nth entry counting from the left of the list shown by dirs when invoked without options, starting with zero. -n Displays the nth entry counting from the right of the list shown by dirs when invoked without options, starting with zero. -c Clears the directory stack by deleting all of the entries. -l Produces a longer listing; the default listing format uses a tilde to denote the home directory. -p Print the directory stack with one entry per line. -v Print the directory stack with one entry per line, prefixing each entry with its index in the stack. The return value is 0 unless an invalid option is supplied or n indexes beyond the end of the directory stack. disown [-ar] [-h] [jobspec ...] Without options, each jobspec is removed from the table of active jobs. If the -h option is given, each jobspec is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. If no jobspec is present, and neither the -a nor the -r option is supplied, the current job is used. If no jobspec is supplied, the -a option means to remove or mark all jobs; the -r option without a jobspec argument restricts operation to running jobs. The return value is 0 unless a jobspec does not specify a valid job. echo [-neE] [arg ...] Output the args, separated by spaces, followed by a newline. The return status is always 0. If -n is specified, the trailing newline is suppressed. If the -e option is given, interpretation of the following backslash-escaped characters is enabled. The -E option disables the interpretation of these escape characters, even on systems where they are interpreted by default. The xpg_echo shell option may be used to dynamically determine whether or not echo expands these escape characters by default. echo does not interpret -- to mean the end of options. echo interprets the following escape sequences: \a alert (bell) \b backspace \c suppress trailing newline \e an escape character \f form feed \n new line \r carriage return \t horizontal tab \v vertical tab \\ backslash \0nnn the eight-bit character whose value is the octal value nnn (zero to three octal digits) \nnn the eight-bit character whose value is the octal value nnn (one to three octal digits) \xHH the eight-bit character whose value is the hexadecimal value HH (one or two hex digits) enable [-adnps] [-f filename] [name ...] Enable and disable builtin shell commands. 
Disabling a builtin allows a disk command which has the same name as a shell builtin to be executed without specifying a full pathname, even though the shell normally searches for builtins before disk commands. If -n is used, each name is disabled; otherwise, names are enabled. For example, to use the test binary found via the PATH instead of the shell builtin version, run ‘‘enable -n test’’. The -f option means to load the new builtin command name from shared object filename, on systems that support dynamic loading. The -d option will delete a builtin previously loaded with -f. If no name arguments are given, or if the -p option is supplied, a list of shell builtins is printed. With no other option arguments, the list consists of all enabled shell builtins. If -n is supplied, only disabled builtins are printed. If -a is supplied, the list printed includes all builtins, with an indication of whether or not each is enabled. If -s is supplied, the output is restricted to the POSIX special builtins. The return value is 0 unless a name is not a shell builtin or there is an error loading a new builtin from a shared object. eval [arg ...] The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0. exec [-cl] [-a name] [command [arguments]] If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command. If the -l option is supplied, the shell places a dash at the beginning of the zeroth arg passed to command. This is what login(1) does. The -c option causes command to be executed with an empty environ- ment. If -a is supplied, the shell passes name as the zeroth argument to the exe- cuted command. If command cannot be executed for some reason, a non-interactive shell exits, unless the shell option execfail is enabled, in which case it returns failure. An interactive shell returns failure if the file cannot be executed. If command is not specified, any redirections take effect in the current shell, and the return status is 0. If there is a redirection error, the return status is 1. exit [n] Cause the shell to exit with a status of n. If n is omitted, the exit status is that of the last command executed. A trap on EXIT is executed before the shell termi- nates. export [-fn] [name[=word]] ... export -p The supplied names are marked for automatic export to the environment of subsequently executed commands. If the -f option is given, the names refer to functions. If no names are given, or if the -p option is supplied, a list of all names that are exported in this shell is printed. The -n option causes the export property to be removed from each name. If a variable name is followed by =word, the value of the variable is set to word. export returns an exit status of 0 unless an invalid option is encountered, one of the names is not a valid shell variable name, or -f is sup- plied with a name that is not a function. fc [-e ename] [-nlr] [first] [last] fc -s [pat=rep] [cmd] Fix Command. In the first form, a range of commands from first to last is selected from the history list. First and last may be specified as a string (to locate the last command beginning with that string) or as a number (an index into the history list, where a negative number is used as an offset from the current command number). 
If last is not specified it is set to the current command for listing (so that ‘‘fc -l -10’’ prints the last 10 commands) and to first otherwise. If first is not speci- fied it is set to the previous command for editing and -16 for listing. The -n option suppresses the command numbers when listing. The -r option reverses the order of the commands. If the -l option is given, the commands are listed on standard output. Otherwise, the editor given by ename is invoked on a file contain- ing those commands. If ename is not given, the value of the FCEDIT variable is used, and the value of EDITOR if FCEDIT is not set. If neither variable is set, is used. When editing is complete, the edited commands are echoed and executed. In the second form, command is re-executed after each instance of pat is replaced by rep. A useful alias to use with this is ‘‘r="fc -s"’’, so that typing ‘‘r cc’’ runs the last command beginning with ‘‘cc’’ and typing ‘‘r’’ re-executes the last command. If the first form is used, the return value is 0 unless an invalid option is encoun- tered or first or last specify history lines out of range. If the -e option is sup- plied, the return value is the value of the last command executed or failure if an error occurs with the temporary file of commands. If the second form is used, the return status is that of the command re-executed, unless cmd does not specify a valid history line, in which case fc returns failure. fg [jobspec] Resume jobspec in the foreground, and make it the current job. If jobspec is not present, the shell’s notion of the current job is used. The return value is that of the command placed into the foreground, or failure if run when job control is dis- abled or, when run with job control enabled, if jobspec does not specify a valid job or jobspec specifies a job that was started without job control. getopts optstring name [args] getopts is used by shell procedures to parse positional parameters. optstring con- tains the option characters to be recognized; if a character is followed by a colon, the option is expected to have an argument, which should be separated from it by white space. The colon and question mark characters may not be used as option char- acters. Each time it is invoked, getopts places the next option in the shell vari- able name, initializing name if it does not exist, and the index of the next argument to be processed into the variable OPTIND. OPTIND is initialized to 1 each time the shell or a shell script is invoked. When an option requires an argument, getopts places that argument into the variable OPTARG. The shell does not reset OPTIND auto- matically; it must be manually reset between multiple calls to getopts within the same shell invocation if a new set of parameters is to be used. When the end of options is encountered, getopts exits with a return value greater than zero. OPTIND is set to the index of the first non-option argument, and name is set to ?. getopts normally parses the positional parameters, but if more arguments are given in args, getopts parses those instead. getopts can report errors in two ways. If the first character of optstring is a colon, silent error reporting is used. In normal operation diagnostic messages are printed when invalid options or missing option arguments are encountered. If the variable OPTERR is set to 0, no error messages will be displayed, even if the first character of optstring is not a colon. If an invalid option is seen, getopts places ? 
into name and, if not silent, prints an error message and unsets OPTARG. If getopts is silent, the option character found is placed in OPTARG and no diagnostic message is printed. If a required argument is not found, and getopts is not silent, a question mark (?) is placed in name, OPTARG is unset, and a diagnostic message is printed. If getopts is silent, then a colon (:) is placed in name and OPTARG is set to the option charac- ter found. getopts returns true if an option, specified or unspecified, is found. It returns false if the end of options is encountered or an error occurs. hash [-lr] [-p filename] [-dt] [name] For each name, the full file name of the command is determined by searching the directories in $PATH and remembered. If the -p option is supplied, no path search is performed, and filename is used as the full file name of the command. The -r option causes the shell to forget all remembered locations. The -d option causes the shell to forget the remembered location of each name. If the -t option is supplied, the full pathname to which each name corresponds is printed. If multiple name arguments are supplied with -t, the name is printed before the hashed full pathname. The -l option causes output to be displayed in a format that may be reused as input. If no arguments are given, or if only -l is supplied, information about remembered commands is printed. The return status is true unless a name is not found or an invalid option is supplied. help [-s] [pattern] Display helpful information about builtin commands. If pattern is specified, help gives detailed help on all commands matching pattern; otherwise help for all the builtins and shell control structures is printed. The -s option restricts the infor- mation displayed to a short usage synopsis. The return status is 0 unless no command matches pattern. history [n] history -c history -d offset history -anrw [filename] history -p arg [arg ...] history -s arg [arg ...] With no options, display the command history list with line numbers. Lines listed with a * have been modified. An argument of n lists only the last n lines. If the shell variable HISTTIMEFORMAT is set and not null, it is used as a format string for strftime(3) to display the time stamp associated with each displayed history entry. No intervening blank is printed between the formatted time stamp and the history line. If filename is supplied, it is used as the name of the history file; if not, the value of HISTFILE is used. Options, if supplied, have the following meanings: -c Clear the history list by deleting all the entries. -d offset Delete the history entry at position offset. -a Append the ‘‘new’’ history lines (history lines entered since the beginning of the current bash session) to the history file. -n Read the history lines not already read from the history file into the current history list. These are lines appended to the history file since the begin- ning of the current bash session. -r Read the contents of the history file and use them as the current history. -w Write the current history to the history file, overwriting the history file’s contents. -p Perform history substitution on the following args and display the result on the standard output. Does not store the results in the history list. Each arg must be quoted to disable normal history expansion. -s Store the args in the history list as a single entry. The last command in the history list is removed before the args are added. 
If the HISTTIMEFORMAT is set, the time stamp information associated with each history entry is written to the history file. The return value is 0 unless an invalid option is encountered, an error occurs while reading or writing the history file, an invalid offset is supplied as an argument to -d, or the history expansion supplied as an argument to -p fails. jobs [-lnprs] [ jobspec ... ] jobs -x command [ args ... ] The first form lists the active jobs. The options have the following meanings: -l List process IDs in addition to the normal information. -p List only the process ID of the job’s process group leader. -n Display information only about jobs that have changed status since the user was last notified of their status. -r Restrict output to running jobs. -s Restrict output to stopped jobs. If jobspec is given, output is restricted to information about that job. The return status is 0 unless an invalid option is encountered or an invalid jobspec is sup- plied. If the -x option is supplied, jobs replaces any jobspec found in command or args with the corresponding process group ID, and executes command passing it args, returning its exit status. kill [-s sigspec | -n signum | -sigspec] [pid | jobspec] ... kill -l [sigspec | exit_status] Send the signal named by sigspec or signum to the processes named by pid or jobspec. sigspec is either a case-insensitive signal name such as SIGKILL (with or without the SIG prefix) or a signal number; signum is a signal number. If sigspec is not present, then SIGTERM is assumed. An argument of -l lists the signal names. If any arguments are supplied when -l is given, the names of the signals corresponding to the arguments are listed, and the return status is 0. The exit_status argument to -l is a number specifying either a signal number or the exit status of a process termi- nated by a signal. kill returns true if at least one signal was successfully sent, or false if an error occurs or an invalid option is encountered. let arg [arg ...] Each arg is an arithmetic expression to be evaluated (see ARITHMETIC EVALUATION). If the last arg evaluates to 0, let returns 1; 0 is returned otherwise. local [option] [name[=value] ...] For each argument, a local variable named name is created, and assigned value. The option can be any of the options accepted by declare. When local is used within a function, it causes the variable name to have a visible scope restricted to that function and its children. With no operands, local writes a list of local variables to the standard output. It is an error to use local when not within a function. The return status is 0 unless local is used outside a function, an invalid name is sup- plied, or name is a readonly variable. logout Exit a login shell. popd [-n] [+n] [-n] Removes entries from the directory stack. With no arguments, removes the top direc- tory from the stack, and performs a cd to the new top directory. Arguments, if sup- plied, have the following meanings: +n Removes the nth entry counting from the left of the list shown by dirs, start- ing with zero. For example: ‘‘popd +0’’ removes the first directory, ‘‘popd +1’’ the second. -n Removes the nth entry counting from the right of the list shown by dirs, starting with zero. For example: ‘‘popd -0’’ removes the last directory, ‘‘popd -1’’ the next to last. -n Suppresses the normal change of directory when removing directories from the stack, so that only the stack is manipulated. 
If the popd command is successful, a dirs is performed as well, and the return status is 0. popd returns false if an invalid option is encountered, the directory stack is empty, a non-existent directory stack entry is specified, or the directory change fails. printf [-v var] format [arguments] Write the formatted arguments to the standard output under the control of the format. The format is a character string which contains three types of objects: plain charac- ters, which are simply copied to standard output, character escape sequences, which are converted and copied to the standard output, and format specifications, each of which causes printing of the next successive argument. In addition to the standard printf(1) formats, %b causes printf to expand backslash escape sequences in the cor- responding argument (except that \c terminates output, backslashes in \', \", and \? are not removed, and octal escapes beginning with \0 may contain up to four digits), and %q causes printf to output the corresponding argument in a format that can be reused as shell input. The -v option causes the output to be assigned to the variable var rather than being printed to the standard output. The format is reused as necessary to consume all of the arguments. If the format requires more arguments than are supplied, the extra format specifications behave as if a zero value or null string, as appropriate, had been supplied. The return value is zero on success, non-zero on failure. pushd [-n] [dir] pushd [-n] [+n] [-n] Adds a directory to the top of the directory stack, or rotates the stack, making the new top of the stack the current working directory. With no arguments, exchanges the top two directories and returns 0, unless the directory stack is empty. Arguments, if supplied, have the following meanings: +n Rotates the stack so that the nth directory (counting from the left of the list shown by dirs, starting with zero) is at the top. -n Rotates the stack so that the nth directory (counting from the right of the list shown by dirs, starting with zero) is at the top. -n Suppresses the normal change of directory when adding directories to the stack, so that only the stack is manipulated. dir Adds dir to the directory stack at the top, making it the new current working directory. If the pushd command is successful, a dirs is performed as well. If the first form is used, pushd returns 0 unless the cd to dir fails. With the second form, pushd returns 0 unless the directory stack is empty, a non-existent directory stack element is specified, or the directory change to the specified new current directory fails. pwd [-LP] Print the absolute pathname of the current working directory. The pathname printed contains no symbolic links if the -P option is supplied or the -o physical option to the set builtin command is enabled. If the -L option is used, the pathname printed may contain symbolic links. The return status is 0 unless an error occurs while reading the name of the current directory or an invalid option is supplied. read [-ers] [-u fd] [-t timeout] [-a aname] [-p prompt] [-n nchars] [-d delim] [name ...] One line is read from the standard input, or from the file descriptor fd supplied as an argument to the -u option, and the first word is assigned to the first name, the second word to the second name, and so on, with leftover words and their intervening separators assigned to the last name. If there are fewer words read from the input stream than names, the remaining names are assigned empty values. 
The characters in IFS are used to split the line into words. The backslash character (\) may be used to remove any special meaning for the next character read and for line continuation. Options, if supplied, have the following meanings: -a aname The words are assigned to sequential indices of the array variable aname, starting at 0. aname is unset before any new values are assigned. Other name arguments are ignored. -d delim The first character of delim is used to terminate the input line, rather than newline. -e If the standard input is coming from a terminal, readline (see READLINE above) is used to obtain the line. -n nchars read returns after reading nchars characters rather than waiting for a com- plete line of input. -p prompt Display prompt on standard error, without a trailing newline, before attempt- ing to read any input. The prompt is displayed only if input is coming from a terminal. -r Backslash does not act as an escape character. The backslash is considered to be part of the line. In particular, a backslash-newline pair may not be used as a line continuation. -s Silent mode. If input is coming from a terminal, characters are not echoed. -t timeout Cause read to time out and return failure if a complete line of input is not read within timeout seconds. This option has no effect if read is not reading input from the terminal or a pipe. -u fd Read input from file descriptor fd. If no names are supplied, the line read is assigned to the variable REPLY. The return code is zero, unless end-of-file is encountered, read times out, or an invalid file descriptor is supplied as the argument to -u. readonly [-apf] [name[=word] ...] The given names are marked readonly; the values of these names may not be changed by subsequent assignment. If the -f option is supplied, the functions corresponding to the names are so marked. The -a option restricts the variables to arrays. If no name arguments are given, or if the -p option is supplied, a list of all readonly names is printed. The -p option causes output to be displayed in a format that may be reused as input. If a variable name is followed by =word, the value of the vari- able is set to word. The return status is 0 unless an invalid option is encountered, one of the names is not a valid shell variable name, or -f is supplied with a name that is not a function. return [n] Causes a function to exit with the return value specified by n. If n is omitted, the return status is that of the last command executed in the function body. If used outside a function, but during execution of a script by the . (source) command, it causes the shell to stop executing that script and return either n or the exit status of the last command executed within the script as the exit status of the script. If used outside a function and not during execution of a script by ., the return status is false. Any command associated with the RETURN trap is executed before execution resumes after the function or script. set [--abefhkmnptuvxBCHP] [-o option] [arg ...] Without options, the name and value of each shell variable are displayed in a format that can be reused as input for setting or resetting the currently-set variables. Read-only variables cannot be reset. In posix mode, only shell variables are listed. The output is sorted according to the current locale. When options are specified, they set or unset shell attributes. 
Any arguments remaining after the options are processed are treated as values for the positional parameters and are assigned, in order, to $1, $2, ... $n. Options, if specified, have the following meanings: -a Automatically mark variables and functions which are modified or created for export to the environment of subsequent commands. -b Report the status of terminated background jobs immediately, rather than before the next primary prompt. This is effective only when job control is enabled. -e Exit immediately if a simple command (see SHELL GRAMMAR above) exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of a && or ││ list, or if the command’s return value is being inverted via !. A trap on ERR, if set, is executed before the shell exits. -f Disable pathname expansion. -h Remember the location of commands as they are looked up for execution. This is enabled by default. -k All arguments in the form of assignment statements are placed in the environ- ment for a command, not just those that precede the command name. -m Monitor mode. Job control is enabled. This option is on by default for interactive shells on systems that support it (see JOB CONTROL above). Back- ground processes run in a separate process group and a line containing their exit status is printed upon their completion. -n Read commands but do not execute them. This may be used to check a shell script for syntax errors. This is ignored by interactive shells. -o option-name The option-name can be one of the following: allexport Same as -a. braceexpand Same as -B. emacs Use an emacs-style command line editing interface. This is enabled by default when the shell is interactive, unless the shell is started with the --noediting option. errtrace Same as -E. functrace Same as -T. errexit Same as -e. hashall Same as -h. histexpand Same as -H. history Enable command history, as described above under HISTORY. This option is on by default in interactive shells. ignoreeof The effect is as if the shell command ‘‘IGNOREEOF=10’’ had been exe- cuted (see Shell Variables above). keyword Same as -k. monitor Same as -m. noclobber Same as -C. noexec Same as -n. noglob Same as -f. nolog Currently ignored. notify Same as -b. nounset Same as -u. onecmd Same as -t. physical Same as -P. pipefail If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all command
5.3.8 29-Apr-14 RAD Studio XE6 is supported Android in C++Builder XE6 is supported Lazarus 1.2.2 and FPC 2.6.4 is supported SmartFetch mode for TDataSet descendants is added The TUniDataSetOptions.MasterFieldsNullable property is added Now update queries inside TDataSet descendants have correct owner The SetOrderBy method behavior is fixed The GetOrderBy method behavior is fixed Bug with displaying already installed DAC version in setup messages is fixed Bug with the Filter behavior in the Metadata component is fixed Bug with AV on modify table with one only Object field is fixed Bug with Locate when a NULL value is present in the index field is fixed Bug with IndexFieldNames when DataTypeMapping is enabled is fixed Bug with freeing memory in the TDADataSet.Lookup method is fixed Bug with error processing on socket data reading under Unix is fixed Oracle data provider DataTypeMapping conversion from XMLType to ftString is added DataTypeMapping conversion from Interval to ftString is added Bug with detect parameters if a string constant contains ':' in the Direct mode is fixed Bug with connection when UnicodeEnvironment=True in the Direct mode is fixed Bug with detecting table names in an SQL statement containing JOIN is fixed Bug with Largint columns in UniLoader is fixed Bug with result parameters is fixed SQLServer data provider SQL Server 2014 is supported Bug with processing widestring parameters is fixed Bug with processing tinyint and single parameters in SQL Server Compact Edition is fixed MySQL data provider Bug with connection establishing when Pooling is enabled is fixed for MySQL provider Bug with RefreshRecord when ReadOnly is set to True is fixed for MySQL provider Bug with SSL connection on Android is fixed InterBase data provider TUniTransaction.OnCommitRetainig and TUniTransaction.OnRollbackRetainig events are added Bug with carriage returned in the beginning of an error message is fixed Bug with invalid client library when using connection pooling is fixed Bug with refreshing a detail dataset in Master-Detail is fixed PostgreSQL data provider Bug with parsing dollar-quoted strings is fixed Bug with "Range check error" in Protocol 2 is fixed Bug with executing "WITH ... SELECT" statements is fixed Bug with SSL connection on Android is fixed SQLite data provider Now the Direct mode is based on the SQLite engine version 3.8.4.3 The EnableLoadExtension specific option is added for the Connection component ASE data provider The PrepareMethod option is added
Contents Overview 1 Lesson 1: Concepts – Locks and Lock Manager 3 Lesson 2: Concepts – Batch and Transaction 31 Lesson 3: Concepts – Locks and Applications 51 Lesson 4: Information Collection and Analysis 63 Lesson 5: Concepts – Formulating and Implementing Resolution 81 Module 4: Troubleshooting Locking and Blocking Overview At the end of this module, you will be able to:  Discuss how lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.  Describe the various transaction types and how transactions differ from batches.  Describe how to troubleshoot blocking and locking issues.  Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.  Formulate hypothesis to resolve locking and blocking issues. Lesson 1: Concepts – Locks and Lock Manager This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Describe locking architecture used by SQL Server.  Identify the various lock modes used by SQL Server.  Discuss lock compatibility and concurrent access.  Identify different types of lock resources.  Discuss dynamic locking and lock escalation.  Differentiate locks, latches, and other SQL Server internal “locking” mechanism such as spinlocks and other synchronization objects. Recommended Reading  Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  SOX000821700049 – SQL 7.0 How to interpret lock resource Ids  SOX000925700237 – TITLE: Lock escalation in SQL 7.0  SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction Locking Concepts Delivery Tip Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file. Transactions in SQL Server provide the ACID properties: Atomicity A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an “all or nothing” operation. Consistency An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent. Isolation Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert, but it will see only complete changes. Durability After a transaction commits, its effects will persist even if there are system failures. Consistency and isolation are the most important in describing SQL Server’s locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation. Definition of Dependency: A set of transactions can run concurrently if their outputs are disjoint from the union of one another’s input and output sets. For example, if T1 writes some object that is in T2’s input or output set, there is a dependency between T1 and T2. Bad Dependencies These include lost updates, dirty reads, non-repeatable reads, and phantoms. 
ANSI SQL Isolation Levels
An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, the REPEATABLE READ and SERIALIZABLE isolation levels were synonymous; there was no way to prevent non-repeatable reads while not preventing phantoms. By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example:

USE pubs
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO
DBCC USEROPTIONS
GO

Multigranular Locking
In our example, if one transaction (T1) holds an exclusive lock at the table level and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believes it has exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it had locked exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. The intent lock is the answer to this problem.

Intent Lock
Intent lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends to place shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table.

Lock Mode
The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table:

SELECT * FROM master.dbo.spt_values WHERE type = N'L'

However, the req_mode column of master.dbo.syslockinfo has a lock mode code that is one less than the code values shown here. For example, a value of req_mode = 3 represents the Shared lock mode rather than the Schema Modification lock mode.

Lock Compatibility
These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and we have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly; they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row-level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX. A short way to observe this hierarchy of intent locks is sketched below.
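A minimal sketch of observing intent locks in the hierarchy (assumes the pubs sample database; any existing title_id will do):

BEGIN TRAN
UPDATE pubs.dbo.titles SET price = price WHERE title_id = 'BU1032'
EXEC sp_lock @@SPID
-- Expect an X lock on the KEY resource, an IX lock on the PAG resource,
-- and an IX lock on the TAB resource for the titles table.
ROLLBACK TRAN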
Another type of table lock is the schema stability lock (Sch-S), which is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks.

Locking Resources
Delivery Tip: Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides.

SQL Server can lock these resources:
 DB – A database.
 File – A database file.
 Index – An entire index of a table.
 Table – An entire table, including all data and indexes.
 Extent – A contiguous group of data pages or index pages.
 Page – An 8-KB data page or index page.
 Key – A row lock within an index.
 Key-range – A key range. Used to lock ranges between records in a table to prevent phantom insertions into or deletions from a set of records. Ensures serializable transactions.
 RID – A Row Identifier. Used to individually lock a single row within a table.
 Application – A lock resource defined by an application. The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that resource is already locked.

Some of the resources have "sub-resources." The following are sub-resources displayed by the sp_lock output:

Database Lock Sub-Resources:
 Full Database Lock (default)
 [BULK-OP-DB] – Bulk Operation Lock for Database
 [BULK-OP-LOG] – Bulk Operation Lock for Log

Table Lock Sub-Resources:
 Full Table Lock (default)
 [UPD-STATS] – Update Statistics Lock
 [COMPILE] – Compile Lock

Index Lock Sub-Resources:
 Full Index Lock (default)
 [INDEX_ID] – Index ID Lock
 [INDEX_NAME] – Index Name Lock
 [BULK_ALLOC] – Bulk Allocation Lock
 [DEFRAG] – Defragmentation Lock

For more information, see also… SOX000821700049 SQL 7.0 How to interpret lock resource Ids

Lock Resource Block
The resource types have the following resource block formats:
 DB (2): Data 1: sub-resource; Data 2: 0; Data 3: 0
 File (3): Data 1: File ID; Data 2: 0; Data 3: 0
 Index (4): Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID
 Table (5): Data 1: Object ID; Data 2: sub-resource; Data 3: 0
 Page (6): Data 1: Page Number; Data 3: 0
 Key (7): Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key
 Extent (8): Data 1: Extent ID; Data 3: 0
 RID (9): Data 1: RID; Data 3: 0
 Application (10): Data 1: Application resource name

The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode a value from this column using the information above, let us assume we have the following value:

0x000705001F83D775010002014F0BEC4E

With byte swapping within each field, this can be decoded as:
 Byte 0: Flag – 0x00
 Byte 1: Resource Type – 0x07 (Key)
 Bytes 2-3: DBID – 0x0005
 Bytes 4-7: ObjectID – 0x75D7831F (1977058079)
 Bytes 8-9: IndexID – 0x0001
 Bytes 10-16: Hash Key value – 0x02014F0BEC4E

For more information about how to decode this value, see also… Inside SQL Server 2000, pages 803 and 806. A query for pulling these raw resource blocks is sketched below.
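To retrieve raw resource blocks for this kind of decoding, a query along the following lines can be used (the column names are those of master..syslockinfo on SQL Server 2000):

SELECT req_spid, rsc_type, rsc_dbid, rsc_objid, rsc_bin
FROM master..syslockinfo
WHERE req_spid = @@SPID
-- rsc_type corresponds to the resource type codes listed above;
-- rsc_bin is the resource block to decode byte by byte.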
Key Range Locking
To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as:

WHERE salary BETWEEN 30000 AND 50000

SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level. The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap, where columns c1 and c2 are indexed, the hash would contain contributions from c1, c2, and the RID.

A key range lock applied to a particular key means that all keys between the value locked and the next value will be locked for all data modification. Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following SELECT were executed in a transaction with isolation level SERIALIZABLE:

SELECT * FROM members WHERE first_name BETWEEN 'Al' AND 'Carl'

If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause.

Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key range locking lets SQL Server lock only a much more restrictive area of the table.

Impact
Key-range locking ensures that these scenarios are SERIALIZABLE:
 Range scan query
 Singleton fetch of nonexistent row
 Delete operation
 Insert operation

However, the following conditions must be satisfied before key-range locking can occur:
 The transaction isolation level must be set to SERIALIZABLE.
 The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data.

Key Range Lock Mode
Again, the req_mode column of master.dbo.syslockinfo has a lock mode code that is one less than the code values shown here. A small way to see key-range locks in practice is sketched below.
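A minimal, hedged sketch of producing key-range locks (assumes the pubs sample database, whose titles table is indexed on title_id; the range boundaries are arbitrary):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM pubs.dbo.titles WHERE title_id BETWEEN 'BU1032' AND 'BU2075'
EXEC sp_lock @@SPID
-- Expect RangeS-S KEY locks covering the scanned range, because the
-- optimizer chose an index path and the isolation level is SERIALIZABLE.
ROLLBACK TRAN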
Dynamic Locking
When modifying individual rows, SQL Server typically takes row locks to maximize concurrency (for example, in an OLTP order-entry application). When scanning larger volumes of data, it is more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, in DSS, data warehouse, and reporting workloads).

Locking Decision
The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a more coarse lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed.

Lock Escalation
When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction has acquired 1250 locks, the lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), the Lock Manager will attempt escalation again.

The Lock Manager looks at the lock memory it is using and, if it is more than 40 percent of SQL Server's allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed. It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce the lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE=60). Lock escalation may also be done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables.

Trace Flag 1211 disables lock escalation. Important: Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global, and disabling lock escalation could lead to out-of-memory situations, extremely poor performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, Trace Flag 1200 shows all lock activity, so it should not be used on a production system. For more information, see also… SOX000925700237 "TITLE: SQL 7.0 Lock escalation in SQL 7.0"

Lock Timeout
Application Lock Timeout
An application can set a lock timeout for a session with the SET option:

SET LOCK_TIMEOUT N

where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout.

Note: After a lock timeout, any transaction containing the statement is rolled back or canceled by SQL Server 2000 (bug #352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222; if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug #352640 is fixed in hotfix build 8.00.266, whereby a lock timeout will only abort the current statement rather than the transaction.

Internal Lock Timeout
At times, internal operations within SQL Server will attempt to acquire locks via the lock manager. Typically, these lock requests are issued with "no waiting." For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait so that, if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock: Timeout events, internal lock timeouts typically have a duration value of zero. A short sketch of statement-level timeout handling follows.
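A hedged sketch of handling lock timeouts at the statement level (error number 1222 as documented above; the two-second value is arbitrary):

SET LOCK_TIMEOUT 2000        -- wait at most 2 seconds for any lock
SELECT * FROM pubs.dbo.titles
IF @@ERROR = 1222            -- lock request time out period exceeded
    PRINT 'Timed out waiting for a lock; handle or retry here'
SELECT @@lock_timeout        -- current setting, in milliseconds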
Lock Duration
Lock Mode and Transaction Isolation Level
For the REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matches the search criteria; if not, the update lock is released; otherwise, we get an exclusive lock and make the modification. Consider the following query:

use northwind
go
dbcc traceon(3604, 1200, 1211) -- turn on lock tracing and disable escalation
go
set transaction isolation level repeatable read
begin tran
update dbo.[order details]
set discount = convert(real, discount)
where discount = 0.0
exec sp_lock

Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that an exclusive table lock is not held at the end.

Warning: Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done an extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only.

Lock Ownership
Most of the locking discussion in this lesson relates to locks owned by "transactions." In addition to transactions, cursors and sessions can be owners of locks, and they both affect how long locks are held. For every row that is fetched, when the SCROLL_LOCKS option is used, regardless of the state of a transaction, a cursor lock is held until the next row is fetched or the cursor is closed. Locks owned by a session are outside the scope of a transaction. The duration of these locks is bounded by the connection, and the process will continue to hold these locks until it disconnects. A typical lock owned by a session is the database (DB) lock. A sketch of cursor-owned locks follows.
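A minimal sketch of a cursor owning locks via SCROLL_LOCKS (assumes the pubs sample database):

DECLARE cur CURSOR KEYSET SCROLL_LOCKS FOR
    SELECT title_id, price FROM pubs.dbo.titles
OPEN cur
FETCH NEXT FROM cur
EXEC sp_lock @@SPID   -- the fetched row remains locked, owned by the cursor,
                      -- until the next FETCH or until the cursor is closed
CLOSE cur
DEALLOCATE cur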
Locking – Read Committed Scan
Under the read committed isolation level, when database pages are scanned, shared locks are held while each page is read and processed. The shared locks are released "behind" the scan and allow other transactions to update rows. It is important to note that the currently acquired shared lock will not be released until the shared lock for the next page is successfully acquired (this is commonly known as "crabbing"). If the same pages are scanned again, rows may have been modified or deleted by other transactions.

Locking – Repeatable Read Scan
Under the repeatable read isolation level, when database pages are scanned, shared locks are held while each page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions from updating rows. If the same pages are scanned again, previously scanned rows will not change, but new rows may be added by other transactions.

Locking – Serializable Read Scan
Under the serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on the scanned key range. SQL Server continues to hold these shared locks until the end of the transaction. Because key range locks are held, not only are other transactions prevented from modifying the rows, but no new rows can be inserted.

Prefetch and Isolation Level
Prefetch and Locking Behavior
The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets. The prefetch feature takes a different approach. It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations simultaneously.

For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetched data pages are then revisited later to avoid waiting for each individual page read to complete in a serial fashion. This data access path requires that a lock be held between the prefetch request and the row lookup to stabilize the row on the page so that it is not moved by a page split or clustered key update. For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level.

With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the entire transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support to release the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because the isolation level can be set at the query level via a lock class, but no lower. Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring, you may see a single SPID that holds hundreds of shared KEY or PAG locks even though the connection's isolation level is READ COMMITTED. The isolation level can be determined from DBCC PSS output. For details about this behavior, see "SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction".

Other Locking Mechanisms
The lock manager does not manage latches and spinlocks.

Latches
Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activity via the DBCC SQLPERF('WAITSTATS') command, as sketched below.
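The monitoring command named above can be run as-is; the comments reflect the guidance in the preceding paragraph:

DBCC SQLPERF('WAITSTATS')
-- Reports cumulative wait statistics per wait type, including latch waits.
-- A high count of non-I/O latch waits often points at heavy hash or sort
-- activity in tempdb.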
Spinlock
A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multiprocessor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that checks to see if the lock is available and, if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a "sleep" (wait) state for a predetermined amount of time. When it wakes, hopefully, the lock is free and available. If not, the loop starts again, and it is terminated only when the lock is acquired. The reason for implementing a spinlock is that it is probably less costly to "spin" for a short time rather than yield the processor. Yielding the processor forces an expensive context switch where:
 The old thread's state must be saved
 The new thread's state must be reloaded
 The data stored in the L1 and L2 caches is useless to the processor

On a single-processor computer, the loop is not useful because no other thread can be running and thus, no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately.

Lesson 2: Concepts – Batch and Transaction
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
 Review batch processing and error checking.
 Review explicit, implicit, and autocommit transactions and the transaction nesting level.
 Discuss how a commit or rollback transaction done in a stored procedure or trigger affects the transaction nesting level.
 Discuss the various transaction isolation levels and their impact on locking.
 Discuss the difference between aborting a statement, a transaction, and a batch.
 Describe how @@error, @@trancount, and @@rowcount can be used for error checking and handling.

Recommended Reading
 Chapter 12 "Transactions and Triggers", Inside SQL Server 2000 by Kalen Delaney

Batch Definition
SQL Profiler Statements and Batches
To further your understanding of what a batch is and what a statement is, you can use SQL Profiler to study the definition of batch and statement.

 Try This: Using SQL Profiler to Analyze a Batch
1. Log on to a server with Query Analyzer.
2. Start the SQL Profiler against the same server.
3. Start a trace using the "StandardSQLProfiler" template.
4. Execute the following using Query Analyzer:
SELECT @@VERSION
SELECT @@SPID
The 'SQL:BatchCompleted' event is captured by the trace. It shows both statements as a single batch.
5. Now execute the following using Query Analyzer:
{call sp_who()}
What shows up? The 'RPC:Completed' event with the sp_who information. RPC is simply another entry point into SQL Server for calling stored procedures with native data types; this allows one to avoid parsing. The 'RPC:Completed' event should be considered the same as a batch for the purposes of this discussion.
6. Stop the current trace and start a new trace using the "SQLProfilerTSQL_SPs" template. Issue the same command as outlined in step 5 above. Looking at the output, you can see not only the batch markers but also each statement as executed within the batch.

Autocommit, Explicit, and Implicit Transactions
Autocommit Transaction Mode (Default)
Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction or implicit transaction mode is set on. When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode. A small sketch of per-statement commit follows.
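A minimal sketch of autocommit behavior (the table name t1 is hypothetical; run as its own batch):

CREATE TABLE t1 (a int PRIMARY KEY)
INSERT INTO t1 VALUES (1)   -- committed by itself under autocommit
INSERT INTO t1 VALUES (1)   -- primary key violation; this statement alone is rolled back
SELECT * FROM t1            -- the first row persists
DROP TABLE t1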
Explicit Transaction Mode
An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement.

Implicit Transaction Mode
SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure 'user options' 2. (Actually, the bit mask 0x2 must be turned on for the user option, so you might have to perform an 'OR' operation with the existing user option value.) See SQL Server 2000 Books Online on how to turn on implicit transactions under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm).

Transaction Nesting
Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrement @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed.

Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction.

If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transactions, it rolls back all the nested transactions, including the outermost transaction. The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one, and each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure whether you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more; if @@TRANCOUNT is 0, you are not in a transaction. You can also find the transaction nesting level by checking the sysprocesses.open_tran column. See the SQL Server 2000 Books Online topic "Nesting Transactions" (acdata.chm::/ac_8_md_06_66nq.htm) for more information. The nesting rules are summarized in the sketch below.
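A small sketch of the @@TRANCOUNT bookkeeping just described:

BEGIN TRAN              -- @@TRANCOUNT: 0 -> 1
BEGIN TRAN              -- @@TRANCOUNT: 1 -> 2
SELECT @@TRANCOUNT      -- returns 2
COMMIT TRAN             -- inner commit only decrements: 2 -> 1
ROLLBACK TRAN           -- rolls back all of the work: 1 -> 0
SELECT @@TRANCOUNT      -- returns 0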
Statement, Transaction, and Batch Abort
One batch can have many statements, and one transaction can also have multiple statements. One transaction can span multiple batches, and one batch can have multiple transactions.

Statement Abort
The currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure. Let us look closely at the following trigger:

CREATE TRIGGER TRG8134 ON TBL8134 AFTER INSERT
AS
BEGIN
    SELECT 1/0
    SELECT 'Next command in trigger'
END

To fire the INSERT trigger, the batch could be as simple as 'INSERT INTO TBL8134 VALUES(1)'. However, the trigger contains two statements that must be executed as part of the batch to satisfy the client's insert request. When the 'SELECT 1/0' causes the divide-by-zero error, a statement abort is issued for the 'SELECT 1/0' statement.

Batch and Transaction Abort
On SQL Server 2000 (and SQL Server 7.0), whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transaction abort. Thus, in the example, the statement abort for 'SELECT 1/0' is promoted and results in an entire batch abort. No further statements in the trigger or batch will be executed, and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort; however, the rest of the statements within the trigger are executed. This trigger could return 'Next command in trigger' as a result set. Once the trigger completes, the batch abort promotion takes effect.

Conversely, submitting a similar set of statements in a standalone batch can result in different behavior:

SELECT 1/0
SELECT 'Next command in batch'

Not considering the set option possibilities, a divide-by-zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided, and the subsequent SELECT statement can execute. The programmer should add an "IF @@ERROR" check immediately after the 'SELECT 1/0' to control the flow of T-SQL execution correctly.

Aborting and Set Options
ARITHABORT
If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation when SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error.

XACT_ABORT
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back, and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example:

CREATE TABLE t1 (a int PRIMARY KEY)
CREATE TABLE t2 (a int REFERENCES t1(a))
GO
INSERT INTO t1 VALUES (1)
INSERT INTO t1 VALUES (3)
INSERT INTO t1 VALUES (4)
INSERT INTO t1 VALUES (6)
GO
SET XACT_ABORT OFF
GO
BEGIN TRAN
INSERT INTO t2 VALUES (1)
INSERT INTO t2 VALUES (2) /* Foreign key error */
INSERT INTO t2 VALUES (3)
COMMIT TRAN
SELECT 'Continue running batch 1...'
GO
SET XACT_ABORT ON
GO
BEGIN TRAN
INSERT INTO t2 VALUES (4)
INSERT INTO t2 VALUES (5) /* Foreign key error */
INSERT INTO t2 VALUES (6)
COMMIT TRAN
SELECT 'Continue running batch 2...'
GO
/* The SELECT shows only keys 1 and 3 added. The key 2 insert failed and was
rolled back, but XACT_ABORT was OFF and the rest of the first transaction
succeeded. The key 5 insert error with XACT_ABORT ON caused all of the second
transaction to roll back. Also note that 'Continue running batch 2...' is not
returned, which indicates that the batch was aborted. */
SELECT * FROM t2
GO
DROP TABLE t2
DROP TABLE t1
GO

Compile and Run-time Errors
Compile Errors
Compile errors are encountered during syntax checks, security checks, and other general operations to prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to an immediate abort. The statement is not run, and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed.

Optimization Errors
Optimization errors include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. Example: the "too many tables referenced in the query" error is reported because a "work table" was added to the plan.

Runtime Errors
Runtime errors are those that are encountered during the execution of the query. Consider the following batch:

SELECT * FROM pubs.dbo.titles
UPDATE pubs.dbo.authors SET au_lname = au_lname
SELECT * FROM foo
UPDATE pubs.dbo.authors SET au_lname = au_lname

If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution and place a "placeholder" in the query's execution. The object referenced by the placeholder is not resolved until the query is executed. In our example, the execution of the statement "SELECT * FROM foo" will trigger another compile process to resolve the name again. This time, error message 208 is returned:

Error: 208, Level 16, State 1, Line 1
Invalid object name 'foo'.

Message 208 can be encountered as a runtime or compile error depending on whether the Deferred Name Resolution feature is available: in SQL Server 6.5 it would be considered a compile error, and on SQL Server 2000 (and SQL Server 7.0) a runtime error due to Deferred Name Resolution.

In the following example, if a trigger referenced authors2, the error is detected as SQL Server attempts to execute the trigger. However, under SQL Server 6.5 the CREATE TRIGGER statement fails because authors2 does not exist at compile time. When errors are encountered in a trigger, generally, the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in the pubs database:

create table tblTest(iID int)
go
create trigger trgInsert on tblTest for INSERT
as
begin
    select * from authors
    select * from authors2
    select * from titles
end
go
begin tran
select 'Before'
insert into tblTest values(1)
select 'After'
go
select @@TRANCOUNT
go

When run in a batch, the statement and the batch are aborted, but the transaction remains active. The following script illustrates this:

begin tran
select 'Before'
select * from authors2
select 'After'
go
select @@TRANCOUNT
go

One other factor in a compile versus runtime error is implicit data type conversion. Consider running the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0):

create table tblData(dtData datetime)
go
select 1
insert into tblData values(12/13/99)
go
On SQL Server 6.5, you get an error before execution of the batch begins, so no statements are executed and the batch is aborted:

Error: 206, Level 16, State 2, Line 2
Operand type clash: int is incompatible with datetime

On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000's implicit data type conversion treats this as integer division: the integer division of 12/13/99 is 0, so the default date and time value is inserted and no error is returned. The way to correct the problem on either version is to wrap the date string in quotes. See Bug #56118 (sqlbug_70) for more details about this situation.

Another example of a runtime error is a 605 message:

Error: 605
Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.

A 605 error is always a runtime error. However, depending on the transaction isolation level established by the SPID (e.g. using the NOLOCK lock hint), the handling of the error can vary. Specifically, a 605 error is considered an ACCESS error; errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine proper handling based on the information or fatal error level.

Transaction Error Checking
Not all errors cause transactions to automatically roll back. Although it is difficult to determine exactly which errors will roll back transactions and which errors will not, the main idea here is that programmers must perform error checking and handle errors appropriately; a sketch follows the Raiserror discussion below.

Error Handling
Raiserror Details
Raiserror seems to be a source of confusion but is really rather simple. Raiserror with a severity level of 20 or higher will terminate the connection. Of course, when the connection is terminated, a full rollback of any open transaction will immediately be instantiated by the SQL Server (except for distributed transactions with DTC involved). Severity levels lower than 20 simply result in the error message being returned to the client; they do not affect the transaction scope of the connection. Consider the following batch:

use pubs
begin tran
update authors set au_lname = 'smith'
raiserror ('This is bad', 19, 1) with log
select @@trancount

With severity set at 19, the 'select @@trancount' will be executed after the raiserror statement and will return a value of 1. If the severity is changed to 20, then the select statement will not run and the connection is broken.

Important: Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code.
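A hedged sketch of the kind of explicit error checking called for above (the au_id value is arbitrary and assumes the pubs sample database):

BEGIN TRAN
UPDATE pubs.dbo.authors SET au_lname = au_lname WHERE au_id = '172-32-1176'
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN            -- abandon the rest of the batch
END
COMMIT TRAN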
Transactions and Triggers (1 of 2)
The basic behavior assumes the implicit transactions setting is set to OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, roll back the action, and add an audit table entry. Logically, the insert into the audit table cannot take place before the ROLLBACK action, and you would not want to build the audit table insert into every application error handler that violated the business rule of the trigger. For more information, see also… the SQL Server 2000 Books Online topic "Rollbacks in stored procedure and triggers" (acdata.chm::/ac_8_md_06_4qcz.htm).

IMPLICIT_TRANSACTIONS ON Behavior
The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field. Read-only versions of the rows contain the code 'RO' and read/write versions use 'RW.' Whenever someone tries to delete a row with code 'RO', the trigger issues the rollback and logs an audit table entry. However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on, and only the current trigger would execute and then terminate the batch. However, a second client with implicit transactions on could issue the same delete, and the secondary trigger would fire. You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should have been allowed, but because the transaction scope was restarted due to the implicit transactions setting, they were.

Transactions and Triggers (2 of 2)
It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set @@ERROR is to use a 'raiserror' as the last execution statement in the last trigger to execute. If you modify the example, the following RAISERROR statement will set @@ERROR to 50000:

CREATE TRIGGER trgTest on tblTest for INSERT
AS
BEGIN
    ROLLBACK
    INSERT INTO tblAudit VALUES (1)
    RAISERROR('This is bad', 14, 1)
END

However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger, the @@ERROR remains 0.

Carrying Forward an Active/Open Transaction
It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN or by setting implicit transactions on and doing an INSERT, UPDATE, or DELETE.

Warning: It is never recommended that a trigger call BEGIN TRANSACTION. By doing this you increment the transaction count. Invalid code logic, such as not calling COMMIT TRANSACTION, can lead to a situation where the transaction count remains elevated upon exit of the trigger.

Transaction Count
The behavior is better explained by understanding how the server works. It does not matter whether you are in a transaction; when a modification takes place, the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies that the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter.

Thus, when we enter a trigger, the transaction count is 1, and at the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (insert). In our example, if the connection was already in a transaction and called the second INSERT, since implicit transactions are ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1. However, the value does not return to 0, so the transaction remains open/active. Subsequent triggers are only fired if the transaction count at the end of the trigger remains greater than or equal to 1. The key to the continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution.
If the trigger that performs a rollback has done an explicit BEGIN TRANSACTION or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute.

Warning: Forcing the transaction count after issuing a rollback is dangerous because you can easily lose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a RETURN statement to maintain consistent behavior between a connection with and without implicit transaction settings. This will force the trigger(s) and batch to terminate immediately. One method of dealing with this issue is to run 'SET IMPLICIT_TRANSACTIONS OFF' as the first statement of any trigger. Other methods may entail checking @@TRANCOUNT at the end of the trigger and continuing to COMMIT the transaction as long as @@TRANCOUNT is greater than 1.

Examples
The following examples are based on this table:

create table tbl50000Insert (iID int NOT NULL)
go

Note: If more than one trigger is used, the sp_settriggerorder command should be used to guarantee the trigger firing sequence. This command is omitted in these examples to reduce the complexity of the statements.

First Example
In the first example, the second trigger was never fired and the batch, starting with the insert statement, was aborted. Thus, the print statement was never issued.

print('Trigger issues rollback - cancels batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT'
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select 'In Trigger2'
    select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(1)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Second Example
The next example shows that, since a new transaction is started, the second trigger will be fired and the print statement in the batch will be executed. Note that the insert is rolled back.

print('Trigger issues rollback - increases tran count to continue batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    begin tran
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select 'In Trigger2'
    select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(2)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Third Example
In the third example, the raiserror statement is used to set the @@ERROR value, and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run.
print('Trigger issues rollback - uses raiserror to set @@ERROR')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    begin tran  -- Increase @@trancount to allow the batch to continue
    select @@trancount as 'Trancount'
    raiserror('This is from the trigger', 14, 1)
end
go
insert into tbl50000Insert values(3)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
go
delete from tbl50000Insert

Fourth Example
For the fourth example, a second trigger is added to illustrate the fact that the @@ERROR value set in the first trigger will not be seen in the second trigger, nor will it show up in the batch after the second trigger is fired.

print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback
    begin tran  -- Increase @@trancount to allow the batch to continue
    select @@TRANCOUNT as 'Trancount'
    raiserror('This is from the trigger', 14, 1)
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
end
go
insert into tbl50000Insert values(4)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Lesson 3: Concepts – Locks and Applications
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
 Explain how lock hints are used and their impact.
 Discuss the effect on locking when an application uses Microsoft Transaction Server.
 Identify the different kinds of deadlocks, including distributed deadlock.

Recommended Reading
 Chapter 14 "Locking", Inside SQL Server 2000 by Kalen Delaney
 Chapter 16 "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
 Q239753 – Deadlock Situation Not Detected by SQL Server
 Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as victim

Locking Hints
UPDLOCK
If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.

READPAST
READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values 1, 2, 3, 4, and 5. If transaction A changes the value 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields the values 1, 2, 4, 5.

Tip: READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks. This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being thrown into a table and they should be serviced in approximate insertion order, but they do not have to be completely FIFO. If you have four worker threads consuming work items from the queue, they could each pick up a record using READPAST locking, delete the entry from the queue, and commit when they are done. If they fail, they can roll back, leaving the entry on the queue for the next worker thread to pick up. A sketch of such a worker follows.
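A minimal, hedged sketch of one such queue consumer (the workqueue table and its id column are hypothetical; READ COMMITTED isolation is assumed, as READPAST requires):

DECLARE @id int
BEGIN TRAN
SELECT TOP 1 @id = id
FROM workqueue WITH (UPDLOCK, READPAST)   -- skip rows other workers hold
ORDER BY id                               -- approximate insertion order
IF @id IS NOT NULL
BEGIN
    -- process the work item here, then remove it from the queue
    DELETE FROM workqueue WHERE id = @id
END
COMMIT TRAN   -- on failure, ROLLBACK instead, leaving the item for another worker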
Caution: The READPAST hint is not compatible with HOLDLOCK.

 Try This: Using Locking Hints
1. Open a Query Window and connect to the pubs database.
2. Execute the following statements (the -- Conn 1 comment is optional, to help you keep track of each connection):
BEGIN TRANSACTION -- Conn 1
UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032'
3. Open a second connection and execute the following statements:
SELECT @@lock_timeout -- Conn 2
GO
SELECT * FROM titles
SELECT * FROM authors
4. Open a third connection and execute the following statements:
SET LOCK_TIMEOUT 0 -- Conn 3
SELECT * FROM titles
SELECT * FROM authors
5. Open a fourth connection and execute the following statement:
SELECT * FROM titles (READPAST) -- Conn 4
WHERE title_ID < 'C'
SELECT * FROM authors
How many records were returned? 3
6. Open a fifth connection and execute the following statement:
SELECT * FROM titles (NOLOCK) -- Conn 5
WHERE title_ID […]

Deadlock Detection
[…] 0, the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection; but if no additional deadlocks occur in those 20 seconds, the lock manager no longer checks for deadlocks at each block, and detection again happens only every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process.

Note: Please note the distinction between application locks and other locks in deadlock detection. For an application lock, we do not roll back the transaction of the deadlock victim but simply return -3 to sp_getapplock, which the application needs to handle itself.

Deadlock Resolution
How is a deadlock resolved? SQL Server picks one of the connections as the deadlock victim. The victim is chosen based either on which transaction is the least expensive to roll back (calculated using the number and size of the log records) or on which process has "SET DEADLOCK_PRIORITY LOW" specified. The victim's transaction is rolled back, its held locks are released, and SQL Server sends error 1205 to the victim's client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue. A small sketch of volunteering a session as the victim follows this section.

Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction.

Symptoms of Deadlocking
Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlocks occurring are:
 Clients complain of mysteriously canceled queries when using certain features of an application.
 This may be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur.

Triggers and Deadlock
Triggers promote the deadlock priority of the SPID for the life of the trigger execution when DEADLOCK PRIORITY is not set to low. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim.

Warning: Bug 235794 is filed against SQL Server 2000, whereby a blocked SPID that is not a participant in a deadlock may incorrectly be chosen as the deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. See KB article Q288752, "Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim", for more information.
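A small sketch of volunteering a session as the deadlock victim, as described above (assumes the pubs sample database):

SET DEADLOCK_PRIORITY LOW   -- prefer this session as the victim
BEGIN TRAN
UPDATE pubs.dbo.titles SET price = price WHERE title_id = 'BU1032'
-- If a deadlock occurs while this transaction holds locks, this session
-- receives error 1205 and its transaction is rolled back; the application
-- should trap 1205 and rerun the transaction.
COMMIT TRAN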
Distributed Deadlock – Scenario 1
The term distributed deadlock is ambiguous; there are many types of distributed deadlocks.

Scenario 1: A client application opens connection A, begins a transaction, acquires some locks, and then opens connection B. Connection B gets blocked by A, but the application is designed not to commit A's transaction until B completes.

Note: SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed in scenario #4 in "Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems".

Distributed Deadlock – Scenario 2
Scenario 2: A distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid, 62, is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes its work, but spid 61 is blocked by 62, who is blocked by 60. This scenario is described in the article "Q239753 - Deadlock Situation Not Detected by SQL Server."

Note: SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock. The diagram in the slide illustrates this situation: resources locked by a spid are shown below that spid (in a box); arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires; a circle represents a transaction, and spids in the same transaction are shown in the same circle.

Distributed Deadlock – Scenario 3
Scenario 3: A distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via a linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by spid 60.

Note: No version of SQL Server is currently designed to detect this distributed deadlock.

Lesson 4: Information Collection and Analysis
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
 Identify specific information needed for troubleshooting issues.
 Locate and collect information needed for troubleshooting issues.
 Analyze output of the DBCC INPUTBUFFER, DBCC PSS, and DBCC PAGE commands.
 Review information collected from the master.dbo.sysprocesses table.
 Review information collected from the master.dbo.syslockinfo table.
 Review output of sp_who, sp_who2, and sp_lock.
 Analyze a Profiler log for query usage patterns.
 Review output of trace flags to help troubleshoot deadlocks.

Recommended Reading
 Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
 Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0
 Q271509 - INF: How to Monitor SQL Server 2000 Blocking
 Q251004 - How to Monitor SQL Server 7.0 Blocking
 Q224453 - Understanding and Resolving SQL Server 7.0 Blocking Problem
 Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler

Locking and Blocking
 Try This: Examine Blocked Processes
1. Open a Query Window and connect to the pubs database.
Execute the following statements:
BEGIN TRAN -- connection 1
UPDATE titles SET price = price + 1
2. Open another connection and execute the following statement:
SELECT * FROM titles -- connection 2
3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3)
4. In the same connection, execute the following:
SELECT spid, cmd, waittype FROM master..sysprocesses
WHERE waittype <> 0 -- connection 3
5. Do not close any of the connections! What was the wait type of the blocked process?

 Try This: Look at Locks Held
This assumes all your connections are still open from the previous exercise.
• Execute sp_lock -- Connection 3
What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction.

Collecting Information
See Module 2 for more about how to gather this information using various tools.

Recognizing Blocking Problems
How to recognize blocking problems:
 Users complain about poor performance at a certain time of day, or after a certain number of users connect.
 SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column.
 More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
 Possibl…
