Why does calling an UPDATE-only stored procedure through ODBC raise an exception (Function sequence error), and how can I fix it?

思危 2003-04-23 10:17:19
I'm calling a SQL Server 2000 stored procedure through ODBC. It is defined as follows:
CREATE PROCEDURE UpdateProduct @ProductID INTEGER,@ProductName VARCHAR(20)
AS
UPDATE tblProduct
SET ProductName=@ProductName
WHERE ProductID=@ProductID
GO
The update itself succeeds, but an exception (Function sequence error) is thrown. Why does this happen, and how should I handle it?
思危 2003-07-01
I ended up solving this by overriding MoveNext.
Thanks everyone for the advice.
Closing the thread.
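For reference, a minimal sketch of what such an override might look like; it assumes CUpdateProc is the CRecordset-derived class from the calling code further down and that MoveNext can be overridden to intercept the internal call in this MFC version (which the report above implies). The body simply absorbs the fetch error that Open triggers when the procedure returns no result set:

void CUpdateProc::MoveNext()
{
    try
    {
        CRecordset::MoveNext();
    }
    catch (CDBException* e)
    {
        // With no result set to fetch from, the driver reports
        // "Function sequence error" here; swallow it so Open can finish.
        e->Delete();
    }
    // If MoveNext is not virtual in your MFC headers, the same idea can be
    // applied to the virtual CRecordset::Move() instead.
}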
思危 2003-05-10
Bumping this one more time before I close it; can anyone explain further?
思危 2003-04-29
Then why does INSERT work? Does it return something?
思危 2003-04-28
The exception comes from the SQLFetch call inside MoveNext, which Open invokes.
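At the raw ODBC level the sequence looks roughly like this (a standalone sketch, not code from the thread; the DSN, credentials and error handling are placeholders): the UPDATE-only procedure executes successfully but produces no result set, so the SQLFetch that follows has nothing to operate on, and with this driver that surfaces as the "Function sequence error" quoted above.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

int main()
{
    SQLHENV henv = SQL_NULL_HENV;
    SQLHDBC hdbc = SQL_NULL_HDBC;
    SQLHSTMT hstmt = SQL_NULL_HSTMT;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
    SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);
    SQLDriverConnect(hdbc, NULL, (SQLCHAR*)"DSN=OdbcName;UID=***;PWD=***;", SQL_NTS,
                     NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
    SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

    // The UPDATE-only procedure executes fine but leaves no result set behind.
    SQLExecDirect(hstmt, (SQLCHAR*)"{CALL UpdateProduct (20, '芒果')}", SQL_NTS);

    // Fetching from a statement with no result set is out of sequence,
    // which is what CRecordset::Open runs into through MoveNext.
    SQLRETURN rc = SQLFetch(hstmt);
    if (rc == SQL_ERROR)
    {
        SQLCHAR state[6] = {0}, msg[256] = {0};
        SQLGetDiagRec(SQL_HANDLE_STMT, hstmt, 1, state, NULL,
                      msg, (SQLSMALLINT)sizeof(msg), NULL);
        printf("%s: %s\n", state, msg);   // e.g. the "Function sequence error" text
    }

    SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
    SQLDisconnect(hdbc);
    SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
    SQLFreeHandle(SQL_HANDLE_ENV, henv);
    return 0;
}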
思危 2003-04-28
What I want to know is why CRecordset can call a stored procedure containing only an INSERT statement without any problem, while one containing only an UPDATE throws an exception. Neither of them returns anything, does it? Yet adding a statement that does return something (such as a SELECT) to the UPDATE procedure fixes it. What is the difference between the two?
small_wei 2003-04-28
Ah! If a stored procedure returns no result set, you cannot open it with Open: after Open the CRecordset performs a MoveNext, and that is what fails. Such procedures should be called with ExecuteSQL instead.
丁淇石头 2003-04-27
Calling it through Open may not work; try code along these lines instead:
CString strSQL;
CDatabase * pDatabase = new CDatabase;
try
{
    pDatabase->OpenEx("DSN=OdbcName; UID=***;PWD=***", NULL);
}
catch(CDBException* e)
{
    e->ReportError();
    e->Delete();
    pDatabase->Close();
    delete pDatabase;
    return;
}
// ExecuteSQL does not bind '?' markers, so put literal values in the statement
// (adjust the arguments to the procedure's actual parameter list):
strSQL = "exec UpdateProduct 20, '芒果'";
pDatabase->ExecuteSQL(strSQL);

pDatabase->Close();
delete pDatabase;

思危 2003-04-26
Here is my calling code:
long strID = 20;
CString strName = "芒果";
long nPrice = 100;

CString strSQL = "{CALL UpdateProduct (?, ?, ?)}";

CUpdateProc Proc;               // CRecordset-derived class holding the three parameters
Proc.m_ProductID = strID;
Proc.m_ProductName = strName;
Proc.m_Price = nPrice;

try
{
    Proc.Open(AFX_DB_USE_DEFAULT_TYPE, strSQL);
}
catch(CException* e)
{
    e->ReportError();
    e->Delete();
}
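The thread never shows CUpdateProc itself, so for context here is one plausible shape for it: a plain CRecordset subclass whose members are bound as input parameters for the three '?' markers. The parameter names and RFX calls below are illustrative guesses, not code from the thread.

#include <afxdb.h>   // MFC ODBC classes

class CUpdateProc : public CRecordset
{
public:
    CUpdateProc(CDatabase* pDatabase = NULL) : CRecordset(pDatabase)
    {
        m_ProductID = 0;
        m_Price = 0;
        m_nParams = 3;   // one for each '?' in "{CALL UpdateProduct (?, ?, ?)}"
    }

    long    m_ProductID;
    CString m_ProductName;
    long    m_Price;

    virtual void DoFieldExchange(CFieldExchange* pFX)
    {
        pFX->SetFieldType(CFieldExchange::param);   // bind members as input parameters
        RFX_Long(pFX, _T("Param1"), m_ProductID);
        RFX_Text(pFX, _T("Param2"), m_ProductName);
        RFX_Long(pFX, _T("Param3"), m_Price);
    }
};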
small_wei 2003-04-25
What does your calling code look like?
思危 2003-04-24
No. And it works as soon as I append a statement that returns something to the procedure, for example:
CREATE PROCEDURE UpdateProduct @ProductID INTEGER,@ProductName VARCHAR(20)
AS
UPDATE tblProduct
SET ProductName=@ProductName
WHERE ProductID=@ProductID

SELECT *
FROM tblProduct
GO
丁淇石头 2003-04-23
First run the stored procedure in SQL Server Query Analyzer to confirm that the procedure itself is correct.
small_wei 2003-04-23
Is the error in the VC calling code, or in the SQL Server 2000 stored procedure?
small_wei 2003-04-23
Any memory leaks?
思危 2003-04-23
Calling it with ADO from both VC and Delphi works fine, and although the exception is thrown here, the table does get updated.
思危 2003-04-23
It runs correctly when called directly in SQL Server.
oracle学习文档 笔记 全面 深刻 详细 通俗易懂 doc word格式 清晰 第一章 Oracle入门 一、 数据库概述 数据库(Database)是按照数据结构来组织、存储和管理数据的仓库,它产生于距今五十年前。简单来说是本身可视为电子化的文件柜——存储电子文件的处所,用户可以对文件中的数据运行新增、截取、更新、删除等操作。 常见的数据模型 1. 层次结构模型: 层次结构模型实质上是一种有根结点的定向有序树,IMS(Information Manage-mentSystem)是其典型代表。 2. 网状结构模型:按照网状数据结构建立的数据库系统称为网状数据库系统,其典型代表是DBTG(Data Base Task Group)。 3. 关系结构模型:关系式数据结构把一些复杂的数据结构归结为简单的二元关系(即二维表格形式)。常见的有Oracle、mssql、mysql等 二、 主流数据库 数据库名 公司 特点 工作环境 mssql 微软 只能能运行在windows平台,体积比较庞大,占用许多系统资源, 但使用很方便,支持命令和图形化管理,收费。 中型企业 Mysql 甲骨文 是个开源的数据库server,可运行在多种平台, 特点是响应速度特别快,主要面向中小企业 中小型企业 PostgreSQL 号称“世界上最先进的开源数据库“,可以运行在多种平台下,是tb级数据库,而且性能也很好 中大型企业 oracle 甲骨文 获得最高认证级别的ISO标准安全认证,性能最高, 保持开放平台下的TPC-D和TPC-C的世界记录。但价格不菲 大型企业 db2 IBM DB2在企业级的应用最为广泛, 在全球的500家最大的企业中,几乎85%以上用DB2数据库服务器。收费 大型企业 Access 微软 Access是一种桌面数据库,只适合数据量少的应用,在处理少量 数据和单机访问的数据库时是很好的,效率也很高 小型企业 三、 Oracle数据库概述 ORACLE数据库系统是美国ORACLE公司(甲骨文)提供的以分布式数据库为核心的一组软件产品,是目前最流行的客户/服务器(CLIENT/SERVER)或B/S体系结构的数据库之一。  拉里•埃里森  就业前景 从就业与择业的角度来讲,计算机相关专业的大学生从事oracle方面的技术是职业发展中的最佳选择。 其一、就业面广:全球前100强企业99家都在使用ORACLE相关技术,中国政府机构,大中型企事业单位都能有ORACLE技术的工程师岗位。 其二、技术层次深:如果期望进入IT服务或者产品公司(类似毕博、DELL、IBM等),Oracle技术能够帮助提高就业的深度。 其三、职业方向多:Oracle数据库管理方向、Oracle开发及系统架构方向、Oracle数据建模数据仓库等方向。 四、 如何学习 认真听课、多思考问题、多动手操作、有问题一定要问、多参与讨论、多帮组同学 五、 体系结构 oracle的体系很庞大,要学习它,首先要了解oracle的框架。oracle的框架主要由物理结构、逻辑结构、内存分配、后台进程、oracle例程、系统改变号 (System Change Number)组成  物理结构 物理结构包含三种数据文件: 1) 控制文件 2) 数据文件 3) 在线重做日志文件  逻辑结构 功能:数据库如何使用物理空间 组成:表空间、段、区、块的组成层次 六、 oracle安装、卸载和启动  硬件要求 物理内存:1GB 可用物理内存:50M 交换空间大小:3.25GB 硬盘空间:10GB  安装 1. 安装程序成功下载,将会得到如下2个文件: 解压文件将得到database文件夹,文件组织如下: 点击setup.exe执行安装程序,开始安装。 2. 点击安装程序将会出现如下安装界面,步骤 1/9:配置安全更新 填写电子邮件地址(可以不填),去掉复选框,点击下一步 3. 步骤2/9:选择安装选项 勾选第一个,安装和配置数据库,点击下一步 4. 步骤3/8:选择系统类 勾选第一个:桌面类,点击下一步 5. 步骤4/8:配置数据库安装 选择安装路径,选择数据库版本(企业版),选择字符集(默认值) 填写全局数据库名,管理口令 6. 步骤5/8:先决条件检查 如果你的电脑满足要求但仍然显示检查失败,这时候直接忽略,勾选全部忽略 7. 步骤6/8:概要信息 核对将要安装数据的详细信息,并保存响应文件,以备以后查看。然后点击完成数据库安装 8. 步骤7/8:安装产品 产品安装过程中将会出现以上2个界面 9. 步骤8/8:完成安装  卸载Oracle 1. 在运行services.msc打开服务,停止Oracle的所有服务。 2. oracle11G自带一个卸载批处理\app\Administrator\product\11.2.0\dbhome_1\deinstall\deinstall.bat 3. 运行该批处理程序将自动完成oracle卸载工作,最后手动删除\app文件夹(可能需要重启才能删除) 4. 运行regedit命令,打开注册表窗口。删除注册表中与Oracle相关的内容,具体如下:  删除HKEY_LOCAL_MACHINE/SOFTWARE/ORACLE目录。  删除HKEY_LOCAL_MACHINE/SYSTEM/CurrentControlSet/Services中所有以oracle或OraWeb为开头的键。  删除HKEY_LOCAL_MACHINE/SYSETM/CurrentControlSet/Services/Eventlog/application中所有以oracle开头的键。  删除HKEY_CLASSES_ROOT目录下所有以Ora、Oracle、Orcl或EnumOra为前缀的键。  删除HKEY_CURRENT_USER/SOFTWARE/Microsoft/windows/CurrentVersion/Explorer/MenuOrder/Start Menu/Programs中所有以oracle 开头的键。  删除HKDY_LOCAL_MACHINE/SOFTWARE/ODBC/ODBCINST.INI中除Microsoft ODBC for Oracle注册表键以外的所有含有Oracle的键。  删除环境变量中的PATHT CLASSPATH中包含Oracle的值。  删除“开始”/“程序”中所有Oracle的组和图标。  删除所有与Oracle相关的目录,包括: C:\Program file\Oracle目录。 ORACLE_BASE目录。 C:\Documents and Settings\系统用户名、LocalSettings\Temp目录下的临时文件。 七、 oracle中的数据库 八、 常用的工具  Sql Plus  Sql Developer  Oracle Enterprise Manager   第二章 用户和权限 一、 用户介绍 ORACLE用户是学习ORACLE数据库中的基础知识,下面就介绍下类系统常用的默认ORACLE用户: 1. sys用户:超级用户,完全是个SYSDBA(管理数据库的人)。拥有dba,sysdba,sysoper等角色或权限。是oracle权限最高的用户,登录时不能用normal。 2. system用户:超级用户,默认是SYSOPT(操作数据库的人),不过它也能以SYSDBA的权限登陆。拥有普通dba角色权限。 3. scott用户:是个演示用户,是让你学习Oracle用的。 二、 常用命令 学习oracle,首先我们必须要掌握常用的基本命令,oracle中的命令比较多,常用的命令如下: 1. 登录命令(sqlplus) 说明:用于登录到oracle数据库 用法:sqlplus 用户名/密码 [as sysdba/sysoper] 注意:当用特权用户登录时,必须带上sysdba或sysoper 例子: 普通用户登录 sys用户登录 操作系统的身份登录 2. 连接命令(conn) 说明:用于连接到oracle数据库,也可实现用户的切换 用法:conn 用户名/密码 [as sysdba/sysoper] 注意:当用特权用户连接时,必须带上sysdba或sysoper 例子: 3. 断开连接(disc) 说明:断开与当前数据库的连接 用法:disc 4. 
显示用户名(show user) 说明:显示当前用户名 用法:show user 5. 退出(exit) 说明:断开与当前数据库的连接并会退出 用法:exit 6. 编辑脚本(edit/ed) 说明:编辑指定或缓冲区的sql脚本 用法:edit [文件名] 列子: 7. 运行脚本 (start/@) 说明:运行指定的sql脚本 用法:start/@ 文件名 列子: 8. 印刷屏幕 (spool) 说明:将sql*plus屏幕中的内容输出到指定的文件 用法:开始印刷->spool 文件名 结束印刷->spool off 列子: 文件内容 9. 显示宽度 (linesize) 说明:设置显示行的宽度,默认是80个字符 用法:set linesize 120 10. 显示页数 (pagesize) 说明:设置每页显示的行数,默认是14页 用法:set pagesize 20 三、 用户管理 1. 创建用户 说明:Oracle中需要创建用户一定是要具有dba(数据库管理员)权限的用户才能创建,而且创建的新用户不具备任何权限,连登录都不可以。 用法:create user 新用户名 identified by 密码 例子: 2. 修改密码 说明:修改用户密码一般有两种方式,一种是通过命令password修改,另一种是通过语句alter user实现,如果要修改他人的密码,必须要具有相关的权限才可以 用法: 方式一 password [用户名] 方式二 alert user 用户名 identified by 新密码 例子: 修改当前用户(方式一) 修改当前用户(方式二) 修改其他用户(方式一) 修改其他用户(方式二) 3. 用户禁用与启用 说明:Oracle中想要禁用或启用一个账户也同样是使用alter user 命令来完成,只是语法和修改密码有所不同。 用法: 禁用 alert user 用户名 account lock 启用 alert user 用户名 account unlock 4. 删除用户 说明:Oracle中要删除一个用户,必须要具有dba的权限。而且不能删除当前用户,如果删除的用户有数据对象,那么必须加上关键字cascade。 用法:drop user 用户名 [cascade] 四、 用户权限与角色 1. 权限 Oracle中权限主要分为两种,系统权限和实体权限。  系统权限:系统规定用户使用数据库的权限。(系统权限是对用户而言)。  DBA: 拥有全部特权,是系统最高权限,只有DBA才可以创建数据库结构。  RESOURCE:拥有Resource权限的用户只可以创建实体,不可以创建数据库结构。  CONNECT:拥有Connect权限的用户只可以登录Oracle,不可以创建实体,不可以创建数据库结构。 注意: 对于普通用户:授予connect, resource权限。 对于DBA管理用户:授予connect,resource, dba权限。  授予系统权限 说明:要实现授予系统权限只能由DBA用户授出。 用法:grant 系统权限1[,系统权限2]… to 用户名1[,用户名2]…. 例子:  系统权限回收: 说明:系统权限只能由DBA用户回收 用法:revoke 系统权限 from 用户名 例子:  实体权限:某种权限用户对其它用户的表或视图的存取权限。(是针对表或视图而言的)。主要包括select, update, insert, alter, index, delete, all其中all包括所有权限。  授予实体权限 用法:grant 实体权限1[,实体权限2]… on 表名 to用户名1[,用户名2]…. 例子:  实体权限回收 用法:revoke 实体权限 on 表名from 用户名 例子:  查询用户拥有哪里权限: SQL> select * from role_tab_privs;//查询授予角色的对象权限 SQL> select * from role_role_privs;//查询授予另一角色的角色 SQL> select * from DBA_tab_privs;//查询直接授予用户的对象权限 SQL> select * from dba_role_privs;//查询授予用户的角色 SQL> select * from dba_sys_privs;//查询授予用户的系统权限 SQL> select * from role_sys_privs;//查询授予角色的系统权限 SQL> Select * from session_privs;// 查询当前用户所拥有的权限 2. 
角色 角色。角色是一组权限的集合,将角色赋给一个用户,这个用户就拥有了这个角色中的所有权限。  系统预定义角色 预定义角色是在数据库安装后,系统自动创建的一些常用的角色。下面我们就简单介绍些系统角色:  CONNECT, RESOURCE, DBA这些预定义角色主要是为了向后兼容。其主要是用于数据库管理。oracle建议用户自己设计数据库管理和安全的权限规划,而不要简单的使用这些预定角色。将来的版本中这些角色可能不会作为预定义角色。  DELETE_CATALOG_ROLE, EXECUTE_CATALOG_ROLE,SELECT_CATALOG_ROLE这些角色主要用于访问数据字典视图和包。  EXP_FULL_DATABASE, IMP_FULL_DATABASE这两个角色用于数据导入导出工具的使用。  自定义角色 Oracle建议我们自定义自己的角色,使我们更加灵活方便去管理用户  创建角色 SQL> create role admin;  授权给角色 SQL> grant connect,resource to admin;  撤销角色的权限 SQL> revoke connect from admin;  删除角色 SQL> drop role admin;   第三章 Sql查询与函数 一、 SQL概述 SQL(Structured Query Language)结构化查询语言,是一种数据库查询和程序设计语言,用于存取数据以及查询、更新和管理关系数据库系统。同时也是数据库脚本文件的扩展名。  SQL语言主要包含5个部分  数据定义语言Data Definition Language(DDL),用来建立数据库、数据对象和定义其列。例如:CREATE、DROP、ALTER等语句。  数据操作语言Data Manipulation Language(DML),用来插入、修改、删除、查询,可以修改数据库中的数据。例如:INSERT(插入)、UPDATE(修改)、DELETE(删除)语句  数据查询语言 (Data Query Language, DQL) 是SQL语言中,负责进行数据查询而不会对数据本身进行修改的语句,这是最基本的SQL语句。例如:SELECT(查询)  数据控制语言Data Controlling Language(DCL),用来控制数据库组件的存取允许、存取权限等。例如:GRANT、REVOKE、COMMIT、ROLLBACK等语句。  事务控制语言(Transactional Control Language,TCL),用于维护数据的一致性,包括COMMIT(提交事务)、ROLLBACK(回滚事务)和SAVEPOINT(设置保存点)3条语句 二、 Oracle的数据类型 类型 参数 描述 字符类型 char 1~2000字节 固定长度字符串,长度不够的用空格补充 varchar2 1~4000字节 可变长度字符串,与CHAR类型相比,使用VARCHAR2可以节省磁盘空间,但查询效率没有char类型高 数值类型 Number(m,n) m(1~38) n(-84~127) 可以存储正数、负数、零、定点数和精度为38位的浮点数,其中,M表示精度,代表数字的总位数;N表示小数点右边数字的位数 日期类型 date 7字节 用于存储表中的日期和时间数据,取值范围是公元前4712年1月1日至公元9999年12月31日,7个字节分别表示世纪、年、月、日、时、分和秒 二进制数据类型 row 1~2000字节 可变长二进制数据,在具体定义字段的时候必须指明最大长度n long raw 1~2GB 可变长二进制数据 LOB数据类型 clob 1~4GB 只能存储字符数据 nclob 1~4GB 保存本地语言字符集数据 blob 1~4GB 以二进制信息保存数据 三、 DDL语言 1. Create table命令 用于创建表。在创建表时,经常会创建该表的主键、外键、唯一约束、Check约束等  语法结构 create table 表名( [字段名] [类型] [约束] ……….. CONSTRAINT fk_column FOREIGN KEY(column1,column2,…..column_n) REFERENCES tablename(column1,column2,…..column_n) )  例子: create table student( stuNo char(32) primary key,--主键约束 stuName varchar2(20) not null,--非空约束 cardId char(20) unique,--唯一约束 sex char(2) check(sex='男' or sex='女'),--检查约束 address varchar2(100) default '地址不详'--默认约束 ) create table mark( mid int primary key,--主键约束 stuNo char(32) not null, courseName varchar2(20) not null,--非空约束 score number(3) not null check(score>=0 and scoreselect * from em--查询所有数据 SQL>select ename,job from em--查询指定的字段数据 SQL> select * from emp where sal>1000--加条件 2. 聚合函数 聚合函数对一组值执行计算并返回单一的值。聚合函数忽略空值。聚合函数经常与 SELECT 语句的 GROUP BY 子句一同使用。不能在 WHERE 子句中使用组函数。  AVG(expression): 返回集合中各值的平均值 --查询所有人都的平均工资 select avg(sal) from emp  COUNT(expression): 以 Int32 形式返回集合中的项数 --查询工资低于2000的人数 select count(*) from emp where sal2000 5. 连接查询 连接查询是关系数据库中最主要的查询,主要包括内连接、外连接和交叉连接等。通过连接运算符可以实现多个表查询。  内连接 内连接也叫连接,是最早的一种连接。还可以被称为普通连接或者自然连接,内连接是从结果表中删除与其他被连接表中没有匹配行的所有行,所以内连接可能会丢失信息。  等值连接: select * from emp inner join dept on emp.deptno=dept.deptno select * from emp,dept where emp.deptno=dept.deptno  不等值连接: select * from emp inner join dept on emp.deptno!=dept.deptno  外连接 外连接分为三种:左外连接,右外连接,全外连接。对应SQL:LEFT/RIGHT/FULL OUTER JOIN。通常我们省略outer 这个关键字。写成:LEFT/RIGHT/FULL JOIN。  左外连接(left join): 是以左表的记录为基础的 select * from emp left join dept on emp.deptno=dept.deptno  右外连接(right join): 和left join的结果刚好相反,是以右表(BL)为基础的 select * from emp right join dept on emp.deptno=dept.deptno  全外连接(full join): 左表和右表都不做限制,所有的记录都显示,两表不足的地方用null 填充 select * from emp full join dept on emp.deptno=dept.deptno  交叉连接 交叉连接即笛卡儿乘积,是指两个关系中所有元组的任意组合。一般情况下,交叉查询是没有实际意义的。 select * from cross full join dept 6. 
常用查询  like模糊查询 --查询姓名首字母为S开始的员工信息 select * from emp where ename like 'S%' --查询姓名第三个字母为A的员工信息 select * from emp where ename like '__A%'  is null/is not null 查询 --查询没有奖金的雇员信息 select * from emp where comm is null --查询有奖金的雇员信息 select * from emp where comm is not null  in查询 --查询雇员编号为7566、7499、7844的雇员信息 select * from emp where empno in(7566,7499,7844)  exists/not exists查询(效率高于in) --查询有上级领导的雇员信息 select * from emp e where exists (select * from emp where empno=e.mgr) --查询没有上级领导的雇员信息 select * from emp e where not exists (select * from emp where empno=e.mgr)  all查询 --查询比部门编号为20的所有雇员工资都高的雇员信息 select * from emp where sal > all(select sal from emp where deptno=20)  union合并不重复 select * from emp where comm is not null union select * from emp where sal>3000  union all合并重复 select * from emp where comm is not null union all select * from emp where sal>3000 7. 子查询 当一个查询是另一个查询的条件时,称之为子查询。子查询是一个 SELECT 语句,它嵌套在一个 SELECT、SELECT...INTO 语句、INSERT...INTO 语句、DELETE 语句、或 UPDATE 语句或嵌套在另一子查询中。  在CREATE TABLE语句中使用子查询 --创建表并拷贝数据 create table temp(id,name,sal) as select empno,ename,sal from emp  在INSERT语句中使用子查询 --当前表拷贝 insert into temp(id,name,sal) select * from temp --从其他表指定字段拷贝 insert into temp(id,name,sal) select empno,ename,sal from emp  在DELETE语句中使用子查询 --删除SALES部门中的所有雇员 delete from emp where deptno in (select deptno from dept where dname='SALES')  在UPDATE语句中使用子查询 --修改scott用户的工资和smith的工资一致 update emp set sal=(select sal from emp where ename='SMITH') where ename='SCOTT' --修改black用户的工作,工资,奖金和scott一致 update emp set(job,sal,comm)=(select job,sal,comm from emp where ename='SCOTT') where ename='BLAKE'  在SELECT语句中使用子查询 --查询和ALLEN同一部门的员工信息 select * from emp where deptno in (select deptno from emp where ename='ALLEN') --查询工资大于部门平均工资的雇员信息 select * from emp e (select avg(sal) asal,deptno from emp group by deptno) t where e.deptno=t.deptno and e.sal>t.asal 六、 TCL语言 1. COMMIT commit --提交事务 2. ROLLBACK rollback to p1 --回滚到指定的保存点 rollback --回滚所有的保存点 3. SAVEPOINT savepoint p1 --设置保存点 4. 只读事务 只读事务是指只允许执行查询的操作,而不允许执行任何其它dml操作的事务,它的作用是确保用户只能取得某时间点的数据。 set transaction read only 七、 oracle函数 1. 字符串函数 字符串函数是oracle中比较常用的,下面我们就介绍些常用的字符串函数:  concat:字符串连接函数,也可以使用’||’ --将职位和雇员名称显示在一列中 select concat(ename,concat('(',concat(job,')'))) from emp select ename || '(' || job || ')' from emp  length:返回字符串的长度 --查询雇员名字长度为5个字符的信息 select * from emp where length(ename)=5  lower:将字符串转换成小写 --以小写方式显示雇员名 select lower(ename) from emp  upper:将字符串转换成大写 --以大写方式显示雇员名 select upper (ename) from emp  substr:截取字符串 --只显示雇员名的前3个字母 select substr(ename,0,3) from emp  replace:替换字符串 --将雇员的金额显示为*号 select ename,replace(sal,sal,’*’) from emp  instr:查找字符串 --查找雇员名含有’LA’字符的信息 select * from emp where instr(ename,’LA’)>0 2. 日期函数  sysdate:返回当前session所在时区的默认时间 --获取当前系统时间 select sysdate from dual  add_months:返回指定日期月份+n之后的值,n可以为任何整数 --查询当前系统月份+2的时间 select add_months(sysdate,2) from dual --查询当前系统月份-2的时间 select add_months(sysdate,-2) from dual  last_day:返回指定时间所在月的最后一天 --获取当前系统月份的最后一天 select last_day(sysdate) from dual  months_between:返回月份差,结果可正可负,当然也有可能为0 --获取入职日期距离当前时间多少天 select months_between(sysdate, hiredate) from emp  trunc:为指定元素而截去的日期值 --获取当前系统年,其他默认 select trunc(sysdate,'yy') from dual --查询81年2月份入职的雇员 select * from emp where trunc(hiredate,'mm')=trunc(to_date('1981-02','yyyy-mm'),'mm') 3. 转换函数  to_char:将任意类型转换成字符串 --日期转换 select to_char(sysdate, 'yyyy-mm-dd hh24:mi:ss') from dual --数字转换 select to_char(-100.789999999999,'L99G999D999') from dual  数字格式控制符 符号 描述 9 代表一位数字,如果当前位有数字,显示数字,否则不显示(小数部分仍然会强制显示) 0 强制显示该位,如果当前位有数字,显示数字,否则显示0 $ 增加美元符号显示 L 增加本地货币符号显示 . 
小数点符号显示 , 千分位符号显示  to_date:将字符串转换成日期对象 --字符转换成日期 select to_date('2011-11-11 11:11:11', 'yyyy-mm-dd hh24:mi:ss') from dual  to_number:将字符转换成数字对象 --字符转换成数字对象 select to_number('209.976')*5 from dual select to_number('209.976', '9G999D999')*5 from dual 4. 数学函数  abs:返回数字的绝对值 select abs(-1999) from dual  ceil:返回大于或等于n的最小的整数值 select ceil(2.48) from dual  floor:返回小于等于n的最大整数值 select floor(2.48) from dual  round:四舍五入 select round(2.48) from dual select round(2.485,2) from dual  bin_to_num:二进制转换成十进制 select bin_to_num(1,0,0,1,0) from dual   第四章 锁 一、 概述 锁是实现数据库并发控制的一个非常重要的技术。当事务在对某个数据对象进行操作前,先向系统发出请求,对其加锁。加锁后事务就对该数据对象有了一定的控制,在该事务释放锁之前,其他的事务不能对此数据对象进行更新操作。 在数据库中有两种基本的锁类型:排它锁(Exclusive Locks,即X锁)和共享锁(Share Locks,即S锁)。当数据对象被加上排它锁时,其他的事务不能对它读取和修改。加了共享锁的数据对象可以被其他事务读取,但不能修改。 根据保护的对象不同,Oracle数据库锁可以分为以下几大类:  DML锁(data locks,数据锁),用于保护数据的完整性  DDL锁(dictionary locks,字典锁),用于保护数据库对象的结构,如表、索引等的结构定义  内部锁和闩(internal locks and latches),保护数据库的内部结构 二、 DML锁 DML锁的目的在于保证并发情况下的数据完整性,在Oracle数据库中,DML锁主要包括TM锁和TX锁,其中TM锁称为表级锁,TX锁称为事务锁或行级锁。 1. 行级锁 当事务执行数据库插入、更新、删除操作时,该事务自动获得操作表中操作行的排它锁 --不允许其他用户对雇员表的部门编号为20的数据进行修改 select * from emp where deptno=20 for update --不允许其他用户对雇员表的所有数据进行修改 select * from emp for update --如果已经被锁定,就不用等待 select * from emp for update nowait --如果已经被锁定,更新的时候等待5秒 select * from emp for update wait 5 2. 锁模式  0(none)  1(null)  2(rs):行共享  3(rx):行排他  4(s):共享  5(srx):共享行排他  6(x):排他 数字越大,锁级别越高 3. 表级锁 当事务获得行锁后,此事务也将自动获得该行的表锁(行排他),以防止其它事务进行DDL语句影响记录行的更新  行共享锁(RS锁):允许用户进行任何操作,禁止排他锁 lock table emp in row share mode  行排他锁(RX锁):允许用户进行任何操作,禁止共享锁 lock table emp in row exclusive mode  共享锁(R锁):其他用户只能看,不能修改 lock table emp in share mode  排他锁(X锁):其他用户只能看,不能修改,不能加其他锁 lock table emp in exclusive mode  共享行排他(SRX锁):比行排他和共享锁级别高,不能添加共享锁 lock table emp in share row exclusive mode 4. 锁兼容性 S X RS RX SRX N/A S Y N Y N N Y X N N N N N Y RS Y N Y Y Y Y RX N N Y Y N Y SRX N N Y N N Y N/Y Y Y Y Y Y Y 5. 死锁 当两个事务需要一组有冲突的锁,而不能将事务继续下去的话,就出现死锁。 1) 用户A修改A表,事务不提交 2) 用户B修改B表,事务不提交 3) 用户A修改B表,阻塞 4) 用户B修改A表,阻塞 Oracle系统能自动发现死锁,并会自动选择工作量最少的事务进行撤销和释放所有锁 6. 悲观锁和乐观锁 数据的锁定分为两种方法,第一种叫做悲观锁,第二种叫做乐观锁  悲观锁:就是对数据的冲突采取一种悲观的态度,也就是说假设数据肯定会冲突,所以在数据开始读取的时候就把数据锁定住。  乐观锁:就是认为数据一般情况下不会造成冲突,所以在数据进行提交更新的时候,才会正式对数据的冲突与否进行检测,如果发现冲突了,则让用户返回错误的信息,让用户决定如何去做。 三、 DDL锁 1. 排它DDL锁 创建、修改、删除一个数据库对象的DDL语句获得操作对象的排它锁。 2. 共享DDL锁 需在数据库对象之间建立相互依赖关系的DDL语句通常需共享获得DDL锁 3. 
分析锁 分析锁是一种独特的DDL锁类型,ORACLE使用它追踪共享池对象及它所引用数据库对象之间的依赖关系 四、 内部锁和闩 这是ORACLE中的一种特殊锁,用于顺序访问内部系统结构。当事务需向缓冲区写入信息时,为了使用此块内存区域,ORACLE首先必须取得这块内存区域的闩锁,才能向此块内存写入信息。   第五章 数据库对象 一、 概述 ORACLE数据库主要有如下数据库对象:  tablespace and datafile(表空间和数据文件)  table(表)  constraints(约束)  index(索引)  view(试图)  sequence(序列)  synonyms(同义词)  DB-link(数据库链路) 二、 表空间和数据文件 表空间是数据库的逻辑组成部分,从物理上讲,数据库数据是存放在数据文件中,从逻辑上讲数据库则是存放在表空间中,表空间是由一个或多个数据文件组成。  表空间  某一时刻只能属于一个数据库  由一个或多个数据文件组成  可进一步划分为逻辑存储  表空间主要分为两种  System表空间  随数据库创建  包含数据字典  包含system还原段  非system表空间  用于分开存储段  易于空间管理  控制分配给用户的空间量  数据文件  只能属于一个表空间和一个数据库  是方案对象数据的资料档案库  创建表空间  语法 CREATE TABLESPACE tablespacename [DATAFILE clause] [MINIMUM EXTENT integer[k|m]] [BLOCKSIZE integer[k]] [LOGGING|NOLOGGING] [DEFAULT storage_clause] [ONLINE|OFFLINE] [PERMANENT|TEMPORARY] [extent_management_clause] [segment_management_clause]  例子 --创建本地管理表空间 create tablespace firstSpance datafile 'e:/firstspance.dbf'size 100M extent management local uniform size 256k --修改文件大小 alter database datafile 'e:/firstspance.dbf' resize 110m --删除表空间 drop tablespace firstSpance INCLUDING CONTENTS and datafiles --使用数据库表空间 --创建用户指定表空间 create user guest identified by 123456 default tablespace firstSpance --表中指定表空间 create table account( accountid number(4), accountName varchar2(20) )tablespace firstSpance --表空间脱机 alter tablespace firstSpance offline --表空间联机 alter tablespace firstSpance online --表空间只读,不能进行dml操作 alter tablespace firstSpance read only 三、 同义词 Oracle数据库中提供了同义词管理的功能。同义词是数据库方案对象的一个别名,经常用于简化对象访问和提高对象访问的安全性。Oracle同义词有两种类型,分别是公用Oracle同义词与私有Oracle同义词。  公有同义词  语法 CREATE [OR REPLACE] PUBLIC SYNONYM sys_name FOR [SCHEMA.] object_name  创建(需拥有CREATE PUBLIC SYNONYM权限才可以创建) --创建同义词 create public synonym syn_emp for scott.emp --访问同义词 select * from syn_emp  删除 drop public synonym syn_emp  私有同义词  语法 CREATE [OR REPLACE] SYNONYM sys_name FOR [SCHEMA.] object_name  创建 --创建同义词 create synonym syn_pri_emp for emp --访问同义词 select * from syn_ pri _emp  删除 drop public synonym syn_emp 四、 表分区 当表中的数据量不断增大,查询数据的速度就会变慢,应用程序的性能就会下降,这时就应该考虑对表进行分区。表进行分区后,逻辑上表仍然是一张完整的表,只是将表中的数据在物理上存放到多个表空间(物理文件上),这样查询数据时,不至于每次都扫描整张表。  优点:  改善查询性能:对分区对象的查询可以仅搜索自己关心的分区,提高检索速度。  增强可用性:如果表的某个分区出现故障,表在其他分区的数据仍然可用;  维护方便:如果表的某个分区出现故障,需要修复数据,只修复该分区即可;  均衡I/O:可以把不同的分区映射到磁盘以平衡I/O,改善整个系统性能。  使用场合  表的大小超过2GB  表中包含历史数据,新的数据被增加都新的分区中  常见分区方法:  范围 --- 8  Hash --- 8i  列表 --- 9i  组合 --- 8i 1. 范围分区 范围分区将数据基于范围映射到每一个分区,这个范围是你在创建分区时指定的分区键决定的。这种分区方式是最为常用的,并且分区键经常采用日期。  特点:  最早、最经典的分区算法  Range分区通过对分区字段值的范围进行分区  Range分区特别适合于按时间周期进行数据的存储。日、周、月、年等。  数据管理能力强(数据迁移、数据备份、数据交换)  范围分区的数据可能不均匀  范围分区与记录值相关,实施难度和可维护性相对较差  例子  按值划分 --创建 CREATE TABLE book ( bookid NUMBER(5), bookname VARCHAR2(30), price NUMBER(8) )PARTITION BY RANGE (price)--分区字段 ( PARTITION P1 VALUES LESS THAN (4) TABLESPACE system, PARTITION P2 VALUES LESS THAN (8) TABLESPACE system, PARTITION P3 VALUES LESS THAN (maxvalue) TABLESPACE system, ) --MAXVALUE代表了一个不确定的值,这个值高于其它分区中的任何分区键的值  按日期划分 CREATE TABLE student ( stuno NUMBER(5), stuname VARCHAR2(30), birthday date )PARTITION BY RANGE (birthday)--分区字段 ( PARTITION P1990 VALUES LESS THAN (to_date('1990-01-01','yyyy-mm-dd')) TABLESPACE system, PARTITION P1991 VALUES LESS THAN (to_date('1991-01-01','yyyy-mm-dd')) TABLESPACE system ); 2. 
Hash分区(散列分区) 这类分区是在列值上使用散列算法,以确定将行放入哪个分区中。当列的值没有合适的条件时,建议使用散列分区。散列分区为通过指定分区编号来均匀分布数据的一种分区类型。如果你要使用hash分区,只需指定分区的数量即可。建议分区的数量采用2的n次方,这样可以使得各个分区间数据分布更加均匀。  特点  基于分区字段的HASH值,自动将记录插入到指定分区。  分区数一般是2的幂  易于实施  总体性能最佳  适合于静态数据  HASH分区适合于数据的均匀存储  数据管理能力弱  HASH分区对数据值无法控制  例子 CREATE TABLE classes ( clsno NUMBER(5), clsname VARCHAR2(30) )PARTITION BY HASH(clsno)--分区字段 ( PARTITION ph1 tablespace system, PARTITION ph2 tablespace system ) 3. List分区(列表分区) 该分区的特点是某列的值只有几个,基于这样的特点我们可以采用列表分区。  特点  List分区通过对分区字段的离散值进行分区  List分区是不排序的,而且分区之间也没有关联  List分区适合于对数据离散值进行控制  List分区只支持单个字段  List分区具有与range分区相似的优缺点  数据管理能力强  各分区的数据可能不均匀  例子 CREATE TABLE users ( userid NUMBER(5), username VARCHAR2(30), province char(5) )PARTITION BY list(province)--分区字段 ( PARTITION pl1 values('广东') tablespace system, PARTITION pl2 values('江西') tablespace system, PARTITION pl3 values('广西') tablespace system, PARTITION pl4 values('湖南') tablespace system ); 4. 组合分区 常见的组合分区主要有范围散列分区和范围列表分区  特点  既适合于历史数据,又适合于数据均匀分布  与范围分区一样提供高可用性和管理性  实现粒度更细的操作  组合范围列表分区 这种分区是基于范围分区和列表分区,表首先按某列进行范围分区,然后再按某列进行列表分区,分区之中的分区被称为子分区。  例子 CREATE TABLE student ( stuno NUMBER(5), stuname VARCHAR2(30), birthday date, province char(5) )PARTITION BY RANGE (birthday) --主分区字段 subpartition BY LIST(province)--子分区字符 ( PARTITION P1990 VALUES LESS THAN(to_date('1990-01-01','yyyy-mm-dd')) TABLESPACE system ( SUBPARTITION pl1 values('广东') tablespace system, SUBPARTITION pl2 values('江西') tablespace system, SUBPARTITION pl3 values('广西') tablespace system, SUBPARTITION pl4 values('湖南') tablespace system ), PARTITION P1991 VALUES LESS THAN(to_date('1991-01-01','yyyy-mm-dd')) TABLESPACE system ( SUBPARTITION p21 values('广东') tablespace system, SUBPARTITION p22 values('江西') tablespace system, SUBPARTITION p23 values('广西') tablespace system, SUBPARTITION p24 values('湖南') tablespace system ) );  组合范围散列分区 这种分区是基于范围分区和散列分区,表首先按某列进行范围分区,然后再按某列进行散列分区。  例子 CREATE TABLE student ( stuno NUMBER(5), stuname VARCHAR2(30), birthday date )PARTITION BY RANGE(birthday) --主分区字段 SUBPARTITION BY HASH(stuno)--子分区字符 ( PARTITION P1990 VALUES LESS THAN(to_date('1990-01-01','yyyy-mm-dd')) TABLESPACE system ( SUBPARTITION ph12 tablespace system, SUBPARTITION ph13 tablespace system ), PARTITION P1991 VALUES LESS THAN(to_date('1991-01-01','yyyy-mm-dd')) TABLESPACE system ( SUBPARTITION ph21 tablespace system, SUBPARTITION ph22 tablespace system ) ); 5. 表分区常用操作  添加分区 --添加主分区 alter table book add partition p4 values less than(maxvalue) tablespace system --添加子分区 ALTER TABLE student MODIFY PARTITION P1990 ADD SUBPARTITION pl5 values('福建')  删除分区 --删除主分区 ALTER TABLE student DROP PARTITION P1990 --删除子分区 ALTER TABLE student DROP SUBPARTITION p15  重命名表分区 ALTER TABLE student RENAME PARTITION P21 TO P2  显示数据库所有分区表的信息 select * from DBA_PART_TABLES  显示当前用户所有分区表的信息 select * from USER_PART_TABLES  查询指定表分区数据 select * from users partition(pl2)--主分区 select * from users subpartition(phl2)--子分区  删除分区表一个分区的数据 alter table book truncate partition p11   第六章 视图 一、 概述 视图是基于一个表或多个表或视图的逻辑表,本身不包含数据,通过它可以对表里面的数据进行查询和修改。视图基于的表称为基表。视图是存储在数据字典里的一条select语句。 通过创建视图可以提取数据的逻辑上的集合或组合。  为什么使用视图  控制数据访问  简化查询  数据独立性  避免重复访问相同的数据  使用修改基表的最大好处是安全性,即保证那些能被任意人修改的列的安全性  Oracle中视图分类  关系视图  内嵌视图  对象视图  物化视图 二、 关系视图 关系视图是作为数据库对象存在的,创建之后也可以通过工具或数据字典来查看视图的相关信息。关系视图是4种视图中最简单,同时也最常用的视图。  语法 CREATE [OR REPLACE] [FORCE|NOFORCE] VIEW view_name [(alias[, alias]...)] AS subquery [WITH CHECK OPTION [CONSTRAINT constraint]] [WITH READ ONLY] 1. OR REPLACE:若所创建的试图已经存在,ORACLE自动重建该视图 2. FORCE:不管基表是否存在ORACLE都会自动创建该视图 3. 
NOFORCE:只有基表都存在ORACLE才会创建该视图 4. Alias:为视图产生的列定义的别名 5. subquery:一条完整的SELECT语句,可以在该语句中定义别名 6. WITH CHECK OPTION:插入或修改的数据行必须满足视图定义的约束 7. WITH READ ONLY:该视图上不能进行任何DML操作  例子 create or replace view view_Account_dept as select * from emp where deptno=10 --只读视图 create or replace view view_Account_dept as select * from emp where deptno=10 order by sal with read only --约束视图 create or replace view view_Account_dept as select * from emp where deptno=10 with check option  查询视图 select * from emp where view_Account_dept  修改视图 通过OR REPLACE 重新创建同名视图即可  删除视图 DROP VIEW VIEW_NAME语句删除视图  视图上的DML 操作原则 1. 简单视图可以执行DML操作; 2. 在视图包含GROUP函数,GROUP BY子句,DISTINCT关键字时不能执行delete语句 3. 在视图包含GROUP函数,GROUP BY子句,DISTINCT关键字,ROWNUM为例,列定义为表达式时不能执行update语句 4. 在视图包含GROUP函数,GROUP BY子句,DISTINCT关键字,ROWNUM为例,列定义为表达式,表中非空的列子视图定义中未包括时不能执行insert语句 5. 可以使用WITH READ ONLY来屏蔽DML操作 三、 内嵌视图 内嵌视图是在from语句中的可以把表改成一个子查询。内嵌视图不属于任何用户,也不是对象,内嵌视图是子查询的一种。  例子 Select * from (select * from emp where deptno=10) where sal>2000 四、 对象视图 对象类型在数据库编程中有许多好处,但有时,应用程序已经开发完成。为了迎合对象类型而重建数据表是不现实的。对象视图正是解决这一问题的优秀策略。 五、 物化视图 常用于数据库的容灾,不是传统意义上虚拟视图,是实体化视图,和表一样可以存储数据、查询数据。主备数据库数据同步通过物化视图实现,主备数据库通过data link连接,在主备数据库物化视图进行数据复制。当主数据库垮掉时,备数据库接管,实现容灾。  语法 create materialized view materialized_view_name build [immediate|deferred] --1.创建方式 refresh [complete|fast|force|never] --2.物化视图刷新方式 on [commit|demand] --3.刷新触发方式 start with (start_date) --4.开始时间 next (interval_date) --5.间隔时间 with [primary key|rowid] --默认 primary key ENABLE QUERY REWRITE --7.是否启用查询重写 as --8.关键字 select statement; --9.基表选取数据的select语句 1. 创建方式  immediate(默认):立即  deferred:延迟,至第一次refresh时,才生效 2. 物化视图刷新方式  force(默认):如果可以快速刷新,就执行快速刷新,否则,执行完全刷新  complete:完全刷新,即刷新时更新全部数据,包括视图中已经生成的原有数据  fast:快速刷新,只刷新增量部分。前提是,需要在基表上创建物化视图日志。该日志记录基表数据变化情况,所以才能实现增量刷新  never:从不刷新 3. 刷新触发方式  on commit:基表有commit动作时,刷新视图,不能跨库执行(因为不知道别的库的提交动作)  on demand,在需要时刷新,根据后面设定的起始时间和时间间隔进行刷新,或者手动用dbms_mview包中的过程刷新时再执行刷新。 4. 开始时间和间隔时间  4和5即开始刷新时间和下次刷新的时间间隔。如:start with sysdate next sysdate+1/1440表示马上开始,刷新间隔为1分钟。(与 on commit选项冲突) 5. 创建模式  primary key(默认):基于基表的主键创建  rowed:不能对基表执行分组函数、多表连结等需要把多个rowid合成一行的操作 6. 是否启用查询重写  如果设置了初始化参数query_rewrite_enabled=true则默认就会启用查询重写。但是,数据库默认该参数为false。并且,不是什么时候都应该启用查询重写。所以,该参数应该设置为false,而在创建特定物化视图时,根据需要开启该功能。 7. 
注意  如果选择使用了上面第4,5选项,则不支持查询重写功能(原因很简单,所谓重写,就是将对基表的查询定位到了物化视图上,而4、5选项会造成物化视图上部分数据延迟,所以,不能重写)。  例子 --创建增量刷新的物化视图时应先创建存储的日志空间 --在scott.emp表中创建物化视图日志 create materialized view log on emp tablespace users with rowid; --开始创建物化视图 --方式一 create materialized view mv_emp tablespace users --指定表空间 build immediate --创建视图时即生成数据 refresh fast --基于增量刷新 on commit --数据DML操作提交就刷新 with rowid --基于ROWID刷新 as select * from emp --方式二 create materialized view mv_emp2 tablespace users --指定表空间 refresh fast --基于增量刷新 start with sysdate --创建视图时即生成数据 next sysdate+1/1440 /*每隔一分钟刷新一次*/ with rowid --基于ROWID刷新 as select * from emp --删除物化视图日志 drop materialized view mv_emp   第七章 索引 一、 概述 索引是建立在表上的可选对象,设计索引的目的是为了提高查询的速度。但同时索引也会增加系统的负担,进行影响系统的性能。 索引一旦建立后,当在表上进行DML操作时,Oracle会自动维护索引,并决定何时使用索引。 索引的使用对用户是透明的,用户不需要在执行SQL语句时指定使用哪个索引及如何使用索引,也就是说,无论表上是否创建有索引,SQL语句的用法不变。用户在进行操作时,不需要考虑索引的存在,索引只与系统性能相关。  索引的原理 当在一个没有创建索引的表中查询符合某个条件的记录时,DBMS会顺序地逐条读取每个记录与查询条件进行匹配,这种方式称为全表扫描。全表扫描方式需要遍历整个表,效率很低。  索引的类型 Oracle支持多种类型的索引,可以按列的多少、索引值是否唯一和索引数据的组织形式对索引进行分类,以满足各种表和查询条件的要求。  单列索引和复合索引  B树索引  位图索引  函数索引  创建索引 CREATE [UNIQUE] | [BITMAP] INDEX index_name ON table_name([column1 [ASC|DESC],column2 [ASC|DESC],…] | [express]) [TABLESPACE tablespace_name] [PCTFREE n1] [STORAGE (INITIAL n2)] [NOLOGGING] [NOLINE] [NOSORT]  UNIQUE:表示唯一索引,默认情况下,不使用该选项。  BITMAP:表示创建位图索引,默认情况下,不使用该选项。  PCTFREE:指定索引在数据块中的空闲空间。对于经常插入数据的表,应该为表中索引指定一个较大的空闲空间。  NOLOGGING:表示在创建索引的过程中不产生任何重做日志信息。默认情况下,不使用该选项。  ONLINE:表示在创建或重建索引时,允许对表进行DML操作。默认情况下,不使用该选项。  NOSORT:默认情况下,不使用该选项。则Oracle在创建索引时对表中记录进行排序。如果表中数据已经是按该索引顺序排列的,则可以使用该选项。 二、 单列索引和复合索引 一个索引可以由一个或多个列组成。基于单个列所创建的索引称为单列索引,基于两列或多列所创建的索引称为多列索引。 三、 B树索引 B树索引是Oracle数据库中最常用的一种索引。当使用CREATE INDEX语句创建索引时,默认创建的索引就是B树索引。B树索引就是一棵二叉树,它由根、分支节点和叶子节点三部分构成。叶子节点包含索引列和指向表中每个匹配行的ROWID值。叶子节点是一个双向链表,因此可以对其进行任何方面的范围扫描。 B树索引中所有叶子节点都具有相同的深度,所以不管查询条件如何,查询速度基本相同。另外,B树索引能够适应各种查询条件,包括精确查询、模糊查询和比较查询。  例子 --创建B树索引,属于单列索引 create index idx_emp_job on emp(job) --创建B树索引,属于复合索引 create index idx_emp_nameorsal on emp(ename,sal) --创建唯一的B树索引,属于单列索引 create unique index idx_emp_ename on emp(ename) --删除索引 drop index idx_emp_job drop index idx_emp_nameorsal drop index idx_emp_ename --如果表已存在大量的数据,需要规划索引段 create index idx_emp_nameorsal on emp(ename,sal) pctfree 30 tablespace system 四、 位图索引 在B树索引中,保存的是经排序过的索引列及其对应的ROWID值。但是对于一些基数很小的列来说,这样做并不能显著提高查询的速度。所谓基数,是指某个列可能拥有的不重复值的个数。比如性别列的基数为2(只有男和女)。 因此,对于象性别、婚姻状况、政治面貌等只具有几个固定值的字段而言,如果要建立索引,应该建立位图索引,而不是默认的B树索引。  例子 --创建位图索引,单列索引 create bitmap index idx_bm_job on emp(job) --创建位图索引,复合索引 create bitmap index idx_bm_jobordeptno on emp(job,deptno) --删除位图索引 drop index idx_bm_job drop index idx_bm_jobordeptno 五、 函数索引 函数索引既可以使用B树索引,也可以使用位图索引,可以根据函数或表达式的结果的基数大小来进行选择,当函数或表达式的结果不确定时采用B树索引,当函数或表达式的结果是固定的几个值时采用位图索引。  例子 --合并索引 alter index idx_emp_ename COALESCE 六、 并和重建索引 表在使用一段时间后,由于用户不断对其进行更新操作,而每次对表的更新必然伴随着索引的改变,因此,在索引中会产生大量的碎片,从而降低索引的使用效率。有两种方法可以清理碎片:合并索引和重建索引。  合并索引就是将B树叶子节点中的存储碎片合并在一起,从而提高存取效率,但这种合并并不会改变索引的物理组织结构。 --创建B树类型的函数索引 create index idx_fun_emp_hiredate on emp(to_char(hiredate,'yyyy-mm-dd')) --创建位图类型的函数索引 create index idx_fun_emp_job on emp(upper(job))  重建索引相当于删除原来的索引,然后再创建一个新的索引,因此,CREAT INDEX语句中的选项同样适用于重建索引。如果在索引列上频繁进行UPDATE和DELETE操作,为了提高空间的利用率,应该定期重建索引。 七、 管理索引的原则 使用索引的目的是为了提高系统的效率,但同时它也会增加系统的负担,进行影响系统的性能,因为系统必须在进行DML操作后维护索引数据。 在新的SQL标准中并不推荐使用索引,而是建议在创建表的时候用主键替代。因此,为了防止使用索引后反而降低系统的性能,应该遵循一些基本的原则: 1. 小表不需要建立索引。 2. 对于大表而言,如果经常查询的记录数目少于表中总记录数目的15%时,可以创建索引。这个比例并不绝对,它与全表扫描速度成反比。 3. 对于大部分列值不重复的列可建立索引。 4. 对于基数大的列,适合建立B树索引,而对于基数小的列适合建立位图索引。 5. 对于列中有许多空值,但经常查询所有的非空值记录的列,应该建立索引。 6. LONG和LONG RAW列不能创建索引。 7. 经常进行连接查询的列上应该创建索引。 8. 
在使用CREATE INDEX语句创建查询时,将最常查询的列放在其他列前面。 9. 维护索引需要开销,特别时对表进行插入和删除操作时,因此要限制表中索引的数量。对于主要用于读的表,则索引多就有好处,但是,一个表如果经常被更改,则索引应少点。 10. 在表中插入数据后创建索引。如果在装载数据之前创建了索引,那么当插入每行时,Oracle都必须更改每个索引。 八、 ROWID和ROWNUM 1. ROWID rowid是一个伪列,是用来确保表中行的唯一性,它并不能指示出行的物理位置,但可以用来定位行。rowid是存储在索引中的一组既定的值(当行确定后)。我们可以像表中普通的列一样将它选出来, 利用rowid是访问表中一行的最快方式。rowid的是基于64位编码的18个字符显示(数据对象编号(6)+文件编号(3) +块编号(6)+行编号(3)=18位) select rowid from emp  ROWID的使用 --快速删除重复的记录 delete from temp t where rowid not in( select max(rowid) from temp where t.id=id and t.name=name and t.sal = sal ) 2. ROWNUM ROWNUM是一个序列,是oracle数据库从数据文件或缓冲区中读取数据的顺序。它取得第一条记录则rownum值为1,第二条为2,依次类推。 select rownum,emp.* from emp  ROWID的使用 --取前3条记录 select * from emp where rownum<=3--方式一 select * from emp where rownum!=4--方式二 --分页 select * from emp where empno not in( select empno from emp where rownum<5--方式一 ) and rownum <4   第八章 PL/SQL编程 一、 介绍 PL/SQL是oracle在标准sql语言上的扩展,PL/SQL不仅允许嵌入sql语言,还可以定义变量和常量,允许使用例外处理各种错误,这样使它的功能变得更加强大。 PL/SQL也是一种语言,叫做过程化sql语言(procedural language/sql),通过此语言可以实现复杂功能或者复杂的计算。  优点 1. 提高应用程序的运行性能 2. 模块化的设计思想 3. 减少网络传输量 4. 提高安全性  缺点 1. 可移植性差 2. 违反MVC设计模式 3. 无法进行面向对象编程 4. 无法做成通用的业务逻辑框架 5. 代码可读性差,相当难维护  分类 二、 PL/SQL基础 1. 编写规范 1) 注释 --单行注释 /*块注释*/ 2) 标识符的命名规范  定义变量:建议用v_作为前缀v_price  定义常量:建议用c_作为前缀c_pi  定义游标:建议用_cursor作为后缀emp_cursor  定义例外:建议用e_作为前缀e_error 2. 块结构 PL/SQL块由三个部分组成:定义部分、执行部分、例外处理部分 Declare /* 定义部分(可选):定义常量、变量、游标、例外,复杂数据类型 */ begin /* 执行部分(必须):要执行的PL/SQL语句和SQL语句 */ exception /*例外部分(可选):处理运行各种错误*/ end 案例一 :只定义执行部分 begin /* dbms_output是oracle提供的包(类似java开发包) 该包包含一些过程,put_line就是其一个过程 */ dbms_output.put_line('HELLO WORLD'); --控制台输出 end; 案例二 :定义声明部分和执行部分 declare --声明变量 v_name varchar2(20); v_sal number(7,2); begin --执行查询 select ename,sal into v_name,v_sal from emp where rownum=1; --控制台输出 dbms_output.put_line('用户名:' || v_name); dbms_output.put_line('工资:' || v_sal); end; 案例三 :定义声明部分、执行部分和例外部分 declare --声明变量 v_name varchar2(20); v_sal number(7,2); begin --执行查询,条件中的&表示从控制接受数据 select ename,sal into v_name,v_sal from emp where empno=&no; --控制台输出 dbms_output.put_line('用户名:' || v_name); dbms_output.put_line('工资:' || v_sal); exception --例外处理(no_data_found) when no_data_found then dbms_output.put_line('执行查询没有结果'); end; 3. 预定义例外 1) case_not_found预定义例外 在开发pl/sql块中编写case语句时,如果在when子句中没有包含必须的条件分支,就会触发case_not_found例外。 2) cursor_already_open预定义例外 当重新打开已经打开的游标时,会隐含的触发cursor_already_open例外。 3) dup_val_on_index预定义例外 在唯一索引所对应的列上插入重复的值时,会隐含的触发例外 4) invalid_cursorn预定义例外 当试图在不合法的游标上执行操作时,会触发该例外 5) invalid_number预定义例外 当输入的数据有误时,会触发该例外 6) no_data_found预定义例外 当执行select into没有返回行,就会触发该例外 7) too_many_rows预定义例外 当执行select into语句时,如果返回超过了一行,则会触发该例外 8) zero_divide预定义例外 当执行2/0语句时,则会触发该例外 9) value_error预定义例外 当在执行赋值操作时,如果变量的长度不足以容纳实际数据,则会触发该例外value_error 10) others 4. 变量类型分类 在编写PL/SQL时,可以定义变量和常量,常用的类型主要有:  标量类型(scalar)  复合类型(composite)  参照类型(reference)  lob(large object) 5. 标量类型:常用类型 declare --定义一个变长字符串 v_name varchar2(20); --定义小数,并赋值 v_sal number(7,2) :=9.8; --定义整数 v_num number(4); --定义日期 v_birthday date; --定义布尔类型,不能为空,初始值为false v_flg boolean not null default false; --使用%type类型 v_job emp.job%type; begin v_flg := true; v_birthday :=sysdate; dbms_output.put_line('当前时间:' || v_birthday); end; 6. 
复合类型:可以存放多个值。主要包括PL/SQL记录、PL/SQL表、嵌入表和varray这四种类型 记录类型:类似于c中的结构体 declare --定义记录类型 type emp_record_type is record( empno emp.empno%type, ename emp.ename%type, sal emp.sal%type ); --定义变量引用记录类型 v_record emp_record_type; begin --使用记录类型 select empno,ename,sal into v_record from emp where rownum=1; --控制台输出 dbms_output.put_line('雇员编号:' || v_record.empno); dbms_output.put_line('雇员姓名:' || v_record.ename); dbms_output.put_line('雇员工资:' || v_record.sal); end; 表类型:类似于java语言中的数组 declare --声明表类型 type emp_table_type is table of varchar2(20) index by PLS_INTEGER;--表示表按整数来排序 v_enames emp_table_type;--定义变量引用表类型 begin select ename into v_enames(0) from emp where rownum=1; select ename into v_enames(1) from emp where empno=7499; select ename into v_enames(2) from emp where empno=7698; --输出 dbms_output.put_line('下标0:' || v_enames(0)); dbms_output.put_line('下标1:' || v_enames(1)); dbms_output.put_line('下标2:' || v_enames(2)); end; varray类型:可变长数组 declare --定义varray类型 type varray_list is varray(20) of number(4); --定义变量引用varray类型 v_list varray_list:=varray_list(7369,7499,7566); begin --for i in v_list.first..v_list.last for i in 1..v_list.count loop dbms_output.put_line(v_list(i)); end loop; end; PL/SQL集合方法 1) exists():用于确定特定集合元素是否存在 2) count:用于返回集合变量的元素总个数 3) limit:用于返回varray变量所允许的最大元素个数 4) first:用于返回集合变量中的一个元素的下标 5) last:用于返回集合变量中最后一个元素的下标 6) prior():返回当前元素前一个元素的下标 7) next():返回当前元素后一个元素的下标 8) extend:为集合变量添加元素,此方法适合用于嵌套表和varray 9) trim:从集合变量尾部删除元素,此方法适用于嵌套表和varray 10) delete:从集合变量中删除特定的元素,此方法适用于嵌套表和index-by表 7. 参照类型:类似c语言中的指针,oracle的游标 三、 PL/SQL控制语句 1. 条件分支语句 1) if—then declare --声明变量 v_empno emp.empno%type; v_sal emp.sal%type; begin --根据雇员编号查询工资 select empno,sal into v_empno,v_sal from emp where empno=&no; --如果工资小于2000就加100 if v_sal<2000 then --工资加100 update emp set sal = sal+100 where empno=v_empno; --提交 commit; end if; end; 2) if—then—else declare --声明变量 v_loginname varchar2(10); v_password varchar2(10); begin --从控制台接收数据 v_loginname := '&ln'; v_password := '&pw'; if v_loginname = 'admin' and v_password = '123456' then dbms_output.put_line('用户登录成功!'); else dbms_output.put_line('用户登录失败!'); end if; end; 3) if—then—elsif—else declare --声明变量 v_empno emp.empno%type; v_job emp.job%type; begin --根据雇员编号查询职位 select empno,job into v_empno,v_job from emp where empno=&no; /*如果雇员所属职位是manager工资加1000 职位是salesman工资加500 其他职位加200 */ if v_job = 'MANAGER' then --MANAGER职位工资加1000 update emp set sal = sal+1000 where empno=v_empno; elsif v_job = 'SALESMAN' then --SALESMAN职位工资加500 update emp set sal = sal+500 where empno=v_empno; else --其他职位工资加200 update emp set sal = sal+200 where empno=v_empno; end if; --提交 commit; end; 4) case declare --声明变量 v_mark number(4); v_outstr varchar2(40); begin --从控制台接收成绩 v_mark := &m; case when v_mark=90 then v_outstr := '优秀'; when v_mark=80 then v_outstr := '良好'; when v_mark=70 then v_outstr := '中等'; when v_mark=60 then v_outstr := '及格'; when v_mark=0 then v_outstr := '不及格'; else v_outstr := '成绩输入有误'; end case; --控制台输出 dbms_output.put_line(v_outstr); end; 2. 循环语句 1) loop LOOP 要执行的语句; EXIT WHEN /*条件满足,退出循环语句*/ END LOOP; 其中:EXIT WHEN 子句是必须的,否则循环将无法停止。 declare v_num number(4):=1; begin --从控制台接收数据并插入到account表中 loop insert into account values(v_num,'&name'); exit when v_num =10; v_num :=v_num+1; end loop; end; 2) while WHILE LOOP要执行的语句;END LOOP; 其中:  循环语句执行的顺序是先判断的真假,如果为真则循环执行,否则退出循环  在WHILE循环语
Contents Overview 1 Lesson 1: Concepts – Locks and Lock Manager 3 Lesson 2: Concepts – Batch and Transaction 31 Lesson 3: Concepts – Locks and Applications 51 Lesson 4: Information Collection and Analysis 63 Lesson 5: Concepts – Formulating and Implementing Resolution 81 Module 4: Troubleshooting Locking and Blocking Overview At the end of this module, you will be able to:  Discuss how lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.  Describe the various transaction types and how transactions differ from batches.  Describe how to troubleshoot blocking and locking issues.  Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.  Formulate hypothesis to resolve locking and blocking issues. Lesson 1: Concepts – Locks and Lock Manager This lesson outlines some of the common causes that contribute to the perception of a slow server. What You Will Learn After completing this lesson, you will be able to:  Describe locking architecture used by SQL Server.  Identify the various lock modes used by SQL Server.  Discuss lock compatibility and concurrent access.  Identify different types of lock resources.  Discuss dynamic locking and lock escalation.  Differentiate locks, latches, and other SQL Server internal “locking” mechanism such as spinlocks and other synchronization objects. Recommended Reading  Chapter 14 “Locking”, Inside SQL Server 2000 by Kalen Delaney  SOX000821700049 – SQL 7.0 How to interpret lock resource Ids  SOX000925700237 – TITLE: Lock escalation in SQL 7.0  SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction Locking Concepts Delivery Tip Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file. Transactions in SQL Server provide the ACID properties: Atomicity A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an “all or nothing” operation. Consistency An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent. Isolation Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert, but it will see only complete changes. Durability After a transaction commits, its effects will persist even if there are system failures. Consistency and isolation are the most important in describing SQL Server’s locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation. Definition of Dependency: A set of transactions can run concurrently if their outputs are disjoint from the union of one another’s input and output sets. For example, if T1 writes some object that is in T2’s input or output set, there is a dependency between T1 and T2. Bad Dependencies These include lost updates, dirty reads, non-repeatable reads, and phantoms. 
ANSI SQL Isolation Levels An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, REPEATABLE READ and SERIALIZABLE isolation levels were synonymous. There was no way to prevent non-repeatable reads while not preventing phantoms. By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example: USE pubs GO SET TRANSACTION ISOLATION LEVEL REPEATABLE READ GO DBCC USEROPTIONS GO Multigranular Locking Multigranular Locking In our example, if one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believe they have exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it has locked exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. Intent lock is the answer to this problem. Intent Lock Intent Lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends on placing shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table. Lock Mode The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table: SELECT * FROM master.dbo.spt_values WHERE type = N'L' However, the req_mode column of master.dbo.syslockinfo has lock mode code that is one less than the code values shown here. For example, value of req_mode = 3 represents the Shared lock mode rather than the Schema Modification lock mode. Lock Compatibility These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and we have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly, they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX. 
Another type of table lock is a schema stability lock (Sch-S) and is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks. Locking Resources Delivery Tip Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides. SQL Server can lock these resources: Item Description DB A database. File A database file Index An entire index of a table. Table An entire table, including all data and indexes. Extent A contiguous group of data pages or index pages. Page An 8-KB data page or index page. Key Row lock within an index. Key-range A key-range. Used to lock ranges between records in a table to prevent phantom insertions or deletions into a set of records. Ensures serializable transactions. RID A Row Identifier. Used to individually lock a single row within a table. Application A lock resource defined by an application. The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that resource is already locked. Some of the resources have “sub-resources.” The followings are sub-resources displayed by the sp_lock output: Database Lock Sub-Resources: Full Database Lock (default) [BULK-OP-DB] – Bulk Operation Lock for Database [BULK-OP-LOG] – Bulk Operation Lock for Log Table Lock Sub-Resources: Full Table Lock (default) [UPD-STATS] – Update statistics Lock [COMPILE] – Compile Lock Index Lock sub-Resources: Full Index Lock (default) [INDEX_ID] – Index ID Lock [INDEX_NAME] – Index Name Lock [BULK_ALLOC] – Bulk Allocation Lock [DEFRAG] – Defragmentation Lock For more information, see also… SOX000821700049 SQL 7.0 How to interpret lock resource Ids Lock Resource Block The resource type has the following resource block format: Resource Type (Code) Content DB (2) Data 1: sub-resource; Data 2: 0; Data 3: 0 File (3) Data 1: File ID; Data 2: 0; Data 3: 0 Index (4) Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID Table (5) Data 1: Object ID; Data 2: sub-resource; Data 3: 0. Page (6) Data 1: Page Number; Data 3: 0. Key (7) Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key Extent (8) Data 1: Extent ID; Data 3: 0. RID (9) Data 1: RID; Data 3: 0. Application (10) Data 1: Application resource name The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode value from this column using the information above, let us assume we have the following value: 0x000705001F83D775010002014F0BEC4E With byte swapping within each field, this can be decoded as: Byte 0: Flag – 0x00 Byte 1: Resource Type – 0x07 (Key) Byte 2-3: DBID – 0x0005 Byte 4-7: ObjectID – 0x 75D7831F (1977058079) Byte 8-9: IndexID – 0x0001 Byte 10-16: Hash Key value – 0x 02014F0BEC4E For more information about how to decode this value, see also… Inside SQL Server 2000, pages 803 and 806. Key Range Locking Key Range Locking To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as WHERE salary BETWEEN 30000 AND 50000 SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level. 
The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap, where columns c1 and c2 where indexed, the hash would contain contributions from c1, c2 and the RID. A key range lock applied to a particular key means that all keys between the value locked and the next value would be locked for all data modification. Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following select was executed in a transaction with isolation level SERIALIZABLE: SELECT * FROM members WHERE first_name between ‘Al’ and ‘Carl’ If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause. Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key Range locking lets SQL Server lock only a much more restrictive area of the table. Impact Key-range locking ensures that these scenarios are SERIALIZABLE:  Range scan query  Singleton fetch of nonexistent row  Delete operation  Insert operation However, the following conditions must be satisfied before key-range locking can occur:  The transaction-isolation level must be set to SERIALIZABLE.  The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data. Key Range Lock Mode Again, the req_mode column of master.dbo.syslockinfo has lock mode code that is one less than the code values shown here. Dynamic Locking When modifying individual rows, SQL Server typically would take row locks to maximize concurrency (for example, OLTP, order-entry application). When scanning larger volumes of data, it would be more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, DSS, data warehouse, reporting). Locking Decision The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a more coarse lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed. Lock Escalation When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction acquired 1250 locks, lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), Lock Manager will attempt escalation again. The Lock Manager looks at the lock memory it is using and if it is more than 40 percent of SQL Server’s allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed. 
It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE=60). Lock escalation may be also done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables. Trace Flag 1211 disables lock escalation. Important Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global and disabling lock escalation could lead to out of memory situations, extremely poor performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, Trace Flag 1200 shows all lock activity so it should not be usable on a production system. For more information, see also… SOX000925700237 “TITLE: SQL 7.0 Lock escalation in SQL 7.0” Lock Timeout Application Lock Timeout An application can set lock timeout for a session with the SET option: SET LOCK_TIMEOUT N where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout. Note After a lock timeout any transaction containing the statement, is rolled back or canceled by SQL Server 2000 (bug#352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222 and if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug#352640 is fixed in hotfix build 8.00.266 whereby a lock timeout will only Internal Lock Timeout At time, internal operations within SQL Server will attempt to acquire locks via lock manager. Typically, these lock requests are issued with “no waiting.” For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock: Timeout events, internal lock timeout typically have a duration value of zero. Lock Duration Lock Mode and Transaction Isolation Level For REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matched the search criteria; if not then the update lock is released, otherwise, we get an exclusive lock and make the modification. 
Consider the following query: use northwind go dbcc traceon(3604, 1200, 1211) -- turn on lock tracing -- and disable escalation go set transaction isolation level repeatable read begin tran update dbo.[order details] set discount = convert (real, discount) where discount = 0.0 exec sp_lock Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that exclusive table lock is not held at the end. Warning Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only. Lock Ownership Most of the locking discussion in this lesson relates to locks owned by “transactions.” In addition to transaction, cursor and session can be owners of locks and they both affect how long locks are held. For every row that is fetched, when SCROLL_LOCKS option is used, regardless of the state of a transaction, a cursor lock is held until the next row is fetched or when the cursor is closed. Locks owned by session are outside the scope of a transaction. The duration of these locks are bounded by the connection and the process will continue to hold these locks until the process disconnects. A typical lock owned by session is the database (DB) lock. Locking – Read Committed Scan Under read committed isolation level, when database pages are scanned, shared locks are held when the page is read and processed. The shared locks are released “behind” the scan and allow other transactions to update rows. It is important to note that the shared lock currently acquired will not be released until shared lock for the next page is successfully acquired (this is commonly know as “crabbing”). If the same pages are scanned again, rows may be modified or deleted by other transactions. Locking – Repeatable Read Scan Under repeatable read isolation level, when database pages are scanned, shared locks are held when the page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions to update rows. If the same pages are scanned again, previously scanned rows will not change but new rows may be added by other transactions. Locking – Serializable Read Scan Under serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on scanned key range. SQL Server continues to hold these shared locks until the end of transaction. Because key range locks are held, not only will this prevent other transactions from modifying the rows, no new rows can be inserted. Prefetch and Isolation Level Prefetch and Locking Behavior The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets. The prefetch feature takes a different approach. 
It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations at the same time. For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetched data pages are then revisited later to avoid waiting for each individual page read to complete in a serial fashion.

This data access path requires that a lock be held between the prefetch request and the row lookup to stabilize the row on the page so that it is not moved by a page split or clustered key update. For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level. With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the entire transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support for releasing the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because the isolation level can be set at the query level via a lock class, but no lower. Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring, you may see a single SPID that holds hundreds of shared KEY or PAG locks even though the connection's isolation level is READ COMMITTED. The isolation level can be determined from DBCC PSS output. For details about this behavior, see "SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction".

Other Locking Mechanisms
The lock manager does not manage latches and spinlocks.

Latches
Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activity via the DBCC SQLPERF('WAITSTATS') command.

Spinlocks
A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multiprocessor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that checks to see if the lock is available and, if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a "sleep" (wait) state for a predetermined amount of time. When it wakes, hopefully, the lock is free and available.
If not, the loop starts again and terminates only when the lock is acquired. The reason for implementing a spinlock is that it is probably less costly to "spin" for a short time than to yield the processor. Yielding the processor forces an expensive context switch where:
• The old thread's state must be saved
• The new thread's state must be reloaded
• The data stored in the L1 and L2 caches becomes useless to the processor
On a single-processor computer, the loop is not useful because no other thread can be running, and thus no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately.

Lesson 2: Concepts – Batch and Transaction
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
• Review batch processing and error checking.
• Review explicit, implicit, and autocommit transactions and the transaction nesting level.
• Discuss how commit and rollback done in a stored procedure or trigger affect the transaction nesting level.
• Discuss the various transaction isolation levels and their impact on locking.
• Discuss the difference between aborting a statement, a transaction, and a batch.
• Describe how @@ERROR, @@TRANCOUNT, and @@ROWCOUNT can be used for error checking and handling.

Recommended Reading
• Chapter 12 "Transactions and Triggers", Inside SQL Server 2000 by Kalen Delaney

Batch Definition

SQL Profiler Statements and Batches
To further your understanding of what a batch is and what a statement is, you can use SQL Profiler to study the definition of batch and statement.

Try This: Using SQL Profiler to Analyze Batches
1. Log on to a server with Query Analyzer.
2. Start SQL Profiler against the same server.
3. Start a trace using the "StandardSQLProfiler" template.
4. Execute the following using Query Analyzer:
SELECT @@VERSION
SELECT @@SPID
The 'SQL:BatchCompleted' event is captured by the trace. It shows both statements as a single batch.
5. Now execute the following using Query Analyzer:
{call sp_who()}
What shows up? The 'RPC:Completed' event with the sp_who information. RPC is simply another entry point into SQL Server for calling stored procedures with native data types; this allows one to avoid parsing. The 'RPC:Completed' event should be considered the same as a batch for the purposes of this discussion.
Stop the current trace and start a new trace using the "SQLProfilerTSQL_SPs" template. Issue the same command as outlined in step 5 above. Looking at the output, not only can you see the batch markers but also each statement as executed within the batch.

Autocommit, Explicit, and Implicit Transactions

Autocommit Transaction Mode (Default)
Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction or implicit transaction mode is set on.
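To make the distinction concrete, the following minimal sketch (it reuses the pubs sample database referenced elsewhere in this lesson; the no-op UPDATE is for illustration only) shows @@TRANCOUNT staying at 0 in autocommit mode and tracking the nesting level once BEGIN TRANSACTION is used:

-- Autocommit mode: the statement commits by itself, @@TRANCOUNT remains 0
SELECT @@TRANCOUNT AS TranCountBefore              -- 0
UPDATE pubs.dbo.authors SET au_lname = au_lname
SELECT @@TRANCOUNT AS TranCountAfterAutocommit     -- still 0

-- Explicit transactions: each BEGIN TRAN increments @@TRANCOUNT,
-- each COMMIT decrements it; work becomes durable only at the outermost COMMIT
BEGIN TRAN
SELECT @@TRANCOUNT AS OuterLevel                   -- 1
BEGIN TRAN
SELECT @@TRANCOUNT AS NestedLevel                  -- 2
COMMIT TRAN                                        -- only decrements the counter
SELECT @@TRANCOUNT AS AfterInnerCommit             -- 1
COMMIT TRAN                                        -- outermost commit makes the work permanent
SELECT @@TRANCOUNT AS AtEnd                        -- 0

A ROLLBACK TRAN issued at any point inside the nested block would return @@TRANCOUNT to 0 immediately, which is why the nesting rules discussed later in this lesson matter for stored procedures and triggers.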
When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode.

Explicit Transaction Mode
An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement.

Implicit Transaction Mode
SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure 'user options' 2. (Actually, the bit mask 0x2 must be turned on for the user option, so you might have to perform an OR operation with the existing user option value.) See SQL Server 2000 Books Online on how to turn on implicit transactions under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm).

Transaction Nesting
Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrement @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed.

Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction.

If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transactions, it rolls back all the nested transactions, including the outermost transaction. The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one. Each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure whether you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more. If @@TRANCOUNT is 0, you are not in a transaction. You can also find the transaction nesting level by checking the sysprocesses.open_tran column. See the SQL Server 2000 Books Online topic "Nesting Transactions" (acdata.chm::/ac_8_md_06_66nq.htm) for more information.

Statement, Transaction, and Batch Abort
One batch can have many statements, and one transaction can have multiple statements as well. One transaction can span multiple batches, and one batch can have multiple transactions.

Statement Abort
The currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure.
Let us look closely at the following trigger:

CREATE TRIGGER TRG8134 ON TBL8134
AFTER INSERT
AS
BEGIN
    SELECT 1/0
    SELECT 'Next command in trigger'
END

To fire the INSERT trigger, the batch could be as simple as 'INSERT INTO TBL8134 VALUES(1)'. However, the trigger contains two statements that must be executed as part of the batch to satisfy the client's insert request. When the 'SELECT 1/0' causes the divide-by-zero error, a statement abort is issued for the 'SELECT 1/0' statement.

Batch and Transaction Abort
On SQL Server 2000 (and SQL Server 7.0), whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transaction abort. Thus, in the example, the statement abort for 'SELECT 1/0' is promoted to an entire batch abort. No further statements in the trigger or batch will be executed, and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort; however, the rest of the statements within the trigger are executed. This trigger could return 'Next command in trigger' as a result set. Once the trigger completes, the batch abort promotion takes effect.

Conversely, submitting a similar set of statements in a standalone batch can result in different behavior:

SELECT 1/0
SELECT 'Next command in batch'

Not considering the SET option possibilities, a divide-by-zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided and the subsequent SELECT statement can execute. The programmer should add an "IF @@ERROR" check immediately after the 'SELECT 1/0' to control the flow of T-SQL execution correctly.

Aborting and SET Options

ARITHABORT
If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation while SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error.

XACT_ABORT
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example:

CREATE TABLE t1 (a int PRIMARY KEY)
CREATE TABLE t2 (a int REFERENCES t1(a))
GO
INSERT INTO t1 VALUES (1)
INSERT INTO t1 VALUES (3)
INSERT INTO t1 VALUES (4)
INSERT INTO t1 VALUES (6)
GO
SET XACT_ABORT OFF
GO
BEGIN TRAN
INSERT INTO t2 VALUES (1)
INSERT INTO t2 VALUES (2) /* Foreign key error */
INSERT INTO t2 VALUES (3)
COMMIT TRAN
SELECT 'Continue running batch 1...'
GO
SET XACT_ABORT ON
GO
BEGIN TRAN
INSERT INTO t2 VALUES (4)
INSERT INTO t2 VALUES (5) /* Foreign key error */
INSERT INTO t2 VALUES (6)
COMMIT TRAN
SELECT 'Continue running batch 2...'
GO
/* The SELECT below shows only keys 1 and 3 added. The key 2 insert failed and was
   rolled back, but XACT_ABORT was OFF and the rest of that transaction succeeded.
   The key 5 insert error with XACT_ABORT ON caused all of the second transaction
   to roll back. Also note that 'Continue running batch 2...'
   is not returned, indicating that the batch was aborted. */
SELECT * FROM t2
GO
DROP TABLE t2
DROP TABLE t1
GO

Compile and Run-time Errors

Compile Errors
Compile errors are encountered during syntax checks, security checks, and other general operations to prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to an immediate abort. The statement is not run and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed.

Optimization Errors
Optimization errors include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. Example: the "too many tables referenced in the query" error is reported because a "work table" was added to the plan.

Runtime Errors
Runtime errors are those that are encountered during the execution of the query. Consider the following batch:

SELECT * FROM pubs.dbo.titles
UPDATE pubs.dbo.authors SET au_lname = au_lname
SELECT * FROM foo
UPDATE pubs.dbo.authors SET au_lname = au_lname

If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution and place a "placeholder" in the query's execution. The object referenced by the placeholder is not resolved until the query is executed. In our example, the execution of the statement "SELECT * FROM foo" will trigger another compile process to resolve the name again. This time, error message 208 is returned:

Error: 208, Level 16, State 1, Line 1
Invalid object name 'foo'.

Message 208 can be encountered as a runtime or compile error depending on whether the Deferred Name Resolution feature is available. In SQL Server 6.5 this would be considered a compile error; on SQL Server 2000 (and SQL Server 7.0) it is a runtime error due to Deferred Name Resolution.

In the following example, if a trigger references authors2, the error is detected as SQL Server attempts to execute the trigger. However, under SQL Server 6.5 the CREATE TRIGGER statement fails because authors2 does not exist at compile time. When errors are encountered in a trigger, generally the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in the pubs database:

create table tblTest(iID int)
go
create trigger trgInsert on tblTest for INSERT
as
begin
    select * from authors
    select * from authors2
    select * from titles
end
go
begin tran
select 'Before'
insert into tblTest values(1)
select 'After'
go
select @@TRANCOUNT
go

When run in a batch, the statement and the batch are aborted but the transaction remains active. The following script illustrates this:

begin tran
select 'Before'
select * from authors2
select 'After'
go
select @@TRANCOUNT
go

One other factor in a compile versus runtime error is implicit data type conversion. If you were to run the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0):

create table tblData(dtData datetime)
go
select 1
insert into tblData values(12/13/99)
go

On SQL Server 6.5, you get an error before execution of the batch begins, so no statements are executed and the batch is aborted.
Error: 206, Level 16, State 2, Line 2
Operand type clash: int is incompatible with datetime

On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000 implicit data type conversion treats this as integer division: the integer division of 12/13/99 is 0, so the default date and time value is inserted and no error is returned. To correct the problem on either version, wrap the date string in quotes. See Bug #56118 (sqlbug_70) for more details about this situation.

Another example of a runtime error is a 605 message:

Error: 605
Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.

A 605 error is always a runtime error. However, the handling of the error can vary depending on the transaction isolation level established by the SPID (for example, when the NOLOCK lock hint is used). Specifically, a 605 error is considered an ACCESS error. Errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine the proper handling based on an informational or fatal error level.

Transaction Error Checking
Not all errors cause transactions to automatically roll back. Although it is difficult to determine exactly which errors will roll back transactions and which will not, the main idea here is that programmers must perform error checking and handle errors appropriately.

Error Handling

Raiserror Details
RAISERROR seems to be a source of confusion but is really rather simple. RAISERROR with a severity level of 20 or higher will terminate the connection. Of course, when the connection is terminated, a full rollback of any open transaction is immediately instantiated by SQL Server (except for a distributed transaction with DTC involved). Severity levels lower than 20 simply result in the error message being returned to the client; they do not affect the transaction scope of the connection. Consider the following batch:

use pubs
begin tran
update authors set au_lname = 'smith'
raiserror ('This is bad', 19, 1) with log
select @@trancount

With severity set at 19, the 'select @@trancount' will be executed after the raiserror statement and will return a value of 1. If severity is changed to 20, then the select statement will not run and the connection is broken.

Important: Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code.

Transactions and Triggers (1 of 2)
The basic behavior assumes the implicit transactions setting is OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, roll back the action, and add an audit table entry. Logically, the insert into the audit table cannot take place before the ROLLBACK action, and you would not want to build the audit table insert into every application error handler that violated the business rule of the trigger. For more information, see also the SQL Server 2000 Books Online topic "Rollbacks in stored procedures and triggers" (acdata.chm::/ac_8_md_06_4qcz.htm).

IMPLICIT_TRANSACTIONS ON Behavior
The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field. Read-only versions of the rows contain the code 'RO' and read/write versions use 'RW'. Whenever someone tries to delete a row with the code 'RO', the trigger issues the rollback and logs an audit table entry.
However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on; only the first trigger would execute and the batch would then terminate. However, a second client with implicit transactions on could issue the same delete, and the secondary trigger would fire. You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should have been allowed, but because the transaction scope was restarted by the implicit transactions setting, they were.

Transactions and Triggers (2 of 2)
It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set @@ERROR is to use a RAISERROR as the last statement executed in the last trigger to execute. If you modify the example, the following RAISERROR statement will set @@ERROR to 50000:

CREATE TRIGGER trgTest on tblTest for INSERT
AS
BEGIN
    ROLLBACK
    INSERT INTO tblAudit VALUES (1)
    RAISERROR('This is bad', 14, 1)
END

However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger, @@ERROR remains 0.

Carrying Forward an Active/Open Transaction
It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN or by setting implicit transactions on and doing an INSERT, UPDATE, or DELETE.

Warning: It is never recommended that a trigger call BEGIN TRANSACTION. By doing this you increment the transaction count. Invalid code logic, such as not calling COMMIT TRANSACTION, can lead to a situation where the transaction count remains elevated upon exit of the trigger.

Transaction Count
The behavior is better explained by understanding how the server works. Whether or not you are in a transaction, when a modification takes place the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies that the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter.

Thus, when we enter a trigger, the transaction count is 1. At the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (the insert). In our example, if the connection was already in a transaction and called the second INSERT, since implicit transactions is ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1. However, the value does not return to 0, so the transaction remains open/active. Subsequent triggers are only fired if the transaction count at the end of the trigger remains greater than or equal to 1. The key to the continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution.
If the trigger that performs a rollback has done an explicit BEGIN TRANSACTION or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute.

Warning: Forcing the transaction count after issuing a rollback is dangerous because you can easily lose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a RETURN statement to maintain consistent behavior between connections with and without the implicit transaction setting. This will force the trigger(s) and batch to terminate immediately. One method of dealing with this issue is to run 'SET IMPLICIT_TRANSACTIONS OFF' as the first statement of any trigger. Other methods may entail checking @@TRANCOUNT at the end of the trigger and continuing to COMMIT the transaction as long as @@TRANCOUNT is greater than 1.

Examples
The following examples are based on this table:

create table tbl50000Insert (iID int NOT NULL)
go

Note: If more than one trigger is used, the sp_settriggerorder command should be used to guarantee the trigger firing sequence. This command is omitted in these examples to keep the statements simple.

First Example
In the first example, the second trigger is never fired and the batch, starting with the insert statement, is aborted. Thus, the print statement is never issued.

print('Trigger issues rollback - cancels batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT'
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select 'In Trigger2'
    select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(1)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Second Example
The next example shows that, since a new transaction is started, the second trigger will be fired and the print statement in the batch will be executed. Note that the insert is rolled back.

print('Trigger issues rollback - increases tran count to continue batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    begin tran
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select 'In Trigger2'
    select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(2)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Third Example
In the third example, the raiserror statement is used to set the @@ERROR value, and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run.
print('Trigger issues rollback - uses raiserror to set @@ERROR')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    begin tran -- Increase @@trancount to allow
               -- batch to continue
    select @@trancount as 'Trancount'
    raiserror('This is from the trigger', 14, 1)
end
go
insert into tbl50000Insert values(3)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
go
delete from tbl50000Insert

Fourth Example
For the fourth example, a second trigger is added to illustrate the fact that the @@ERROR value set in the first trigger will not be seen in the second trigger, nor will it show up in the batch after the second trigger is fired.

print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback
    begin tran -- Increase @@trancount to
               -- allow batch to continue
    select @@TRANCOUNT as 'Trancount'
    raiserror('This is from the trigger', 14, 1)
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
end
go
insert into tbl50000Insert values(4)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Lesson 3: Concepts – Locks and Applications
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
• Explain how lock hints are used and their impact.
• Discuss the effect on locking when an application uses Microsoft Transaction Server.
• Identify the different kinds of deadlocks, including distributed deadlocks.

Recommended Reading
• Chapter 14 "Locking", Inside SQL Server 2000 by Kalen Delaney
• Chapter 16 "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
• Q239753 – Deadlock Situation Not Detected by SQL Server
• Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as Victim

Locking Hints

UPDLOCK
If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.

READPAST
READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values 1, 2, 3, 4, and 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields the values 1, 2, 4, 5.

Tip: READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks. This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being thrown into a table and they should be serviced in approximate insertion order, but they do not have to be strictly FIFO. If you have four worker threads consuming work items from the queue, each could pick up a record using READPAST locking, then delete the entry from the queue and commit when it is done. If it fails, it can roll back, leaving the entry on the queue for the next worker thread to pick up, as the sketch below illustrates.
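The following is a minimal sketch of that work-queue pattern. The WorkQueue table and its columns are hypothetical, error handling is omitted, and an UPDLOCK hint is combined with READPAST (a common variant of the pattern) so that two workers cannot claim the same row before it is deleted:

-- Hypothetical queue table
create table WorkQueue (WorkID int identity primary key, Payload varchar(100))
go
-- Each worker thread runs something like this to claim and consume one item,
-- skipping any rows currently locked by other workers.
begin tran
declare @WorkID int
select top 1 @WorkID = WorkID
from WorkQueue with (updlock, readpast)
order by WorkID
if @WorkID is not null
begin
    -- ... process the work item here ...
    delete from WorkQueue where WorkID = @WorkID
end
commit tran -- rolling back instead would leave the row for the next worker

Because the workers run at READ COMMITTED, READPAST simply skips any row another worker has already locked with UPDLOCK, and a ROLLBACK instead of the COMMIT leaves the row in place for the next worker.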
Caution: The READPAST hint is not compatible with HOLDLOCK.

Try This: Using Locking Hints
1. Open a Query Window and connect to the pubs database.
2. Execute the following statements (the -- Conn 1 comment is optional, to help you keep track of each connection):
BEGIN TRANSACTION -- Conn 1
UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032'
3. Open a second connection and execute the following statements:
SELECT @@lock_timeout -- Conn 2
GO
SELECT * FROM titles
SELECT * FROM authors
4. Open a third connection and execute the following statements:
SET LOCK_TIMEOUT 0 -- Conn 3
SELECT * FROM titles
SELECT * FROM authors
5. Open a fourth connection and execute the following statements:
SELECT * FROM titles (READPAST) -- Conn 4
WHERE title_ID < 'C'
SELECT * FROM authors
How many records were returned? 3
6. Open a fifth connection and execute the following statement:
SELECT * FROM titles (NOLOCK) -- Conn 5
WHERE title_ID …

… 0, the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in those 20 seconds, the lock manager no longer checks for deadlocks at each block and detection again only happens every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process.

Note: Please note the distinction between application locks and other locks with respect to deadlock detection. For an application lock, we do not roll back the transaction of the deadlock victim but simply return -3 to sp_getapplock, which the application needs to handle itself.

Deadlock Resolution
How is a deadlock resolved? SQL Server picks one of the connections as a deadlock victim. The victim is chosen based either on which transaction is the least expensive to roll back (calculated using the number and size of the log records) or on which process has "SET DEADLOCK_PRIORITY LOW" specified. The victim's transaction is rolled back, held locks are released, and SQL Server sends error 1205 to the victim's client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue.

Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction.

Symptoms of Deadlocking
Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlocks occurring are:
• Clients complain of mysteriously canceled queries when using certain features of an application.
• The problem may be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur.

Triggers and Deadlocks
Triggers promote the deadlock priority of the SPID for the life of the trigger execution when DEADLOCK_PRIORITY is not set to LOW. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim.

Warning: Bug 235794 is filed against SQL Server 2000, whereby a blocked SPID that is not a participant in a deadlock may incorrectly be chosen as the deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging.
See KB article Q288752, "Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as Victim", for more information.

Distributed Deadlock – Scenario 1

Distributed Deadlocks
The term distributed deadlock is ambiguous; there are many types of distributed deadlocks.

Scenario 1
A client application opens connection A, begins a transaction, acquires some locks, and opens connection B. Connection B gets blocked by A, but the application is designed not to commit A's transaction until B completes.

Note: SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed in scenario #4 in "Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems".

Distributed Deadlock – Scenario 2

Scenario 2
A distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid, 62, is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes its work, but spid 61 is blocked by 62, which is blocked by 60. This scenario is described in article "Q239753 - Deadlock Situation Not Detected by SQL Server."

Note: SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock. The diagram in the slide illustrates this situation: resources locked by a spid are shown below that spid (in a box); arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires; a circle represents a transaction, and spids in the same transaction are shown in the same circle.

Distributed Deadlock – Scenario 3

Scenario 3
A distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via a linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by spid 60.

Note: No version of SQL Server is currently designed to detect this distributed deadlock.

Lesson 4: Information Collection and Analysis
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
• Identify specific information needed for troubleshooting issues.
• Locate and collect information needed for troubleshooting issues.
• Analyze output of the DBCC INPUTBUFFER, DBCC PSS, and DBCC PAGE commands.
• Review information collected from the master.dbo.sysprocesses table.
• Review information collected from the master.dbo.syslockinfo table.
• Review the output of sp_who, sp_who2, and sp_lock.
• Analyze a Profiler log for query usage patterns.
• Review the output of trace flags to help troubleshoot deadlocks.

Recommended Reading
• Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
• Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0
• Q271509 - INF: How to Monitor SQL Server 2000 Blocking
• Q251004 - How to Monitor SQL Server 7.0 Blocking
• Q224453 - Understanding and Resolving SQL Server 7.0 Blocking Problems
• Q282749 - BUG: Deadlock information reported with SQL Server 2000 Profiler

Locking and Blocking

Try This: Examine Blocked Processes
1. Open a Query Window and connect to the pubs database.
Execute the following statements:
BEGIN TRAN -- connection 1
UPDATE titles SET price = price + 1
2. Open another connection and execute the following statement:
SELECT * FROM titles -- connection 2
3. Open a third connection and execute sp_who; note the process ID (spid) of the blocked process. (Connection 3)
4. In the same connection, execute the following:
SELECT spid, cmd, waittype FROM master..sysprocesses WHERE waittype <> 0 -- connection 3
5. Do not close any of the connections! What was the wait type of the blocked process?

Try This: Look at Locks Held
This assumes all your connections are still open from the previous exercise.
• Execute sp_lock -- Connection 3
What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction.

Collecting Information
See Module 2 for more about how to gather this information using various tools.

Recognizing Blocking Problems

How to Recognize Blocking Problems
• Users complain about poor performance at a certain time of day, or after a certain number of users connect.
• SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column.
• More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
• Possibl
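As a starting point for the kind of checks described above, here is a minimal sketch of a query against the SQL Server 2000 sysprocesses table that lists blocked sessions and what they are waiting on (the column list can be adjusted as needed):

-- List blocked sessions, the spid blocking them, and what they are waiting on
SELECT spid, blocked, waittime, lastwaittype, waitresource, cmd
FROM master..sysprocesses
WHERE blocked <> 0

The spid values returned in the blocked column can then be fed to sp_lock or DBCC INPUTBUFFER to see which locks the blocking process holds and what it is currently executing.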
