LogMiner is available by default in Oracle 11g, so it normally does not need to be installed or switched on separately.
1. Prepare the LogMiner environment
Check whether archive logging is enabled:
sql> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     99
Next log sequence to archive   101
Current log sequence           101
sql> # If archive logging is not enabled, turn it on with:
sql> alter database archivelog;
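Note that alter database archivelog only succeeds while the database is mounted but not open, so enabling it involves a short outage. A minimal sketch of the usual sequence (adapt it to your own environment and downtime window):

sql> shutdown immediate;
sql> startup mount;
sql> alter database archivelog;
sql> alter database open;
sql> archive log list;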
If LogMiner is not installed, run the following scripts to install it:
sql> #(1) Create the dbms_logmnr package, which is used to analyze the archived logs
sql> @$ORACLE_HOME/rdbms/admin/dbmslm.sql;
Package created.
Grant succeeded.
Synonym created.
sql> #(2) Create the DBMS_LOGMNR_D package, which is used to build the data dictionary file
sql> @$ORACLE_HOME/rdbms/admin/dbmslmd.sql;
Package created.
Synonym created.
sql>
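If you are not sure whether these packages are already installed, a quick check against dba_objects (a sketch, assuming a DBA-privileged session) will tell you before running the scripts:

sql> select object_name, object_type, status from dba_objects
     where object_name in ('DBMS_LOGMNR','DBMS_LOGMNR_D') and object_type = 'PACKAGE';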
2. Create the LogMiner account
Create a tablespace:
create tablespace logminer logging datafile '/home/oradata/powerdes/logminer01.dbf' size 50m autoextend on next 50m extent management local;

Create the user:
CREATE USER logminer PROFILE "DEFAULT" IDENTIFIED BY "logminer0418" DEFAULT TABLESPACE "LOGMINER" ACCOUNT UNLOCK;

Grant privileges:
grant create session to logminer;
grant connect,resource to logminer;
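The grants above only allow the account to log in. To actually run LogMiner and query v$logmnr_contents, the account normally also needs execute rights on DBMS_LOGMNR and the SELECT ANY TRANSACTION privilege; a sketch, to be adjusted to your own security policy (run as a privileged user such as SYS):

grant execute on dbms_logmnr to logminer;
grant select any transaction to logminer;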
3. Confirm that supplemental logging is enabled in the database
a. Check whether supplemental logging is enabled:
sql> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
--------
YES
sql>
b. If it is not enabled, turn it on with: alter database add supplemental log data;
sql> alter database add supplemental log data;
Database altered.
sql>
PS: If supplemental logging is not enabled, the analyzed archived logs will not show the machine, os_name, user_name and so on for each operation, which makes troubleshooting based on the logs much harder. About supplemental logging: normally the redo log records only the information required for recovery, but that information is not enough for other uses of the redo log. For example, redo records identify a row by rowid rather than by primary key; if you analyze the logs on a different database and want to re-execute some of the DML, that becomes a problem, because the same rowid means different things in different databases. In such cases extra information (columns) has to be added to the redo log, and that is exactly what supplemental logging provides.
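Following on from the rowid point above: if the goal is to replay DML on another database, it may also be worth enabling identification-key (primary key) supplemental logging, so that redo records carry the primary-key columns rather than only rowids. A sketch (standard 11g syntax; weigh the extra redo volume before enabling it):

sql> alter database add supplemental log data (primary key) columns;
sql> select supplemental_log_data_min, supplemental_log_data_pk from v$database;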
4. Analyze the archived logs
(1) Add the archived logs, online redo logs, etc. that you want to analyze to the analysis queue
exec dbms_logmnr.add_logfile('/data2/archivelog/archivelog1_26441_906253421.dbf');
exec dbms_logmnr.add_logfile('/data2/logs/redo01.log')
(2) If a log was added by mistake, it can be removed from the queue again:
exec dbms_logmnr.remove_logfile('/log/log1.log')
(3) Analyze the logs that were added to the queue
exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);
(4) Query v$logmnr_contents for the roll-forward SQL (sql_redo) and the reconstructed rollback SQL (sql_undo)
select sql_redo,sql_undo from v$logmnr_contents;
(5) Store the data in a staging table
On a production database a single day usually produces a large number of archived logs, so querying v$logmnr_contents directly is resource-heavy and slow: every query feeds rows into v$logmnr_contents by walking through the queued archived logs one by one.
For example, at a traditional company whose OA system I once analyzed, a full day's archived logs produced 16,410,242 rows in v$logmnr_contents, and the staging table created with create table logminer.Z1 as select * from v$logmnr_contents; took up 54 GB of disk space. So, to make checking easier, filter on some search column, for example on sql_redo:
create table logminer.Z_20170319 as select * from v$logmnr_contents t where t.sql_redo like '%UC_USER%' or t.sql_redo like '%uc_user%';
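Another way to keep the volume down is to narrow the mining window when starting LogMiner, instead of (or in addition to) filtering afterwards; dbms_logmnr.start_logmnr also accepts startTime/endTime parameters. A sketch (the time range here is made up for illustration):

begin
  dbms_logmnr.start_logmnr(
    starttime => to_date('2017-03-19 09:00:00','yyyy-mm-dd hh24:mi:ss'),
    endtime   => to_date('2017-03-19 12:00:00','yyyy-mm-dd hh24:mi:ss'),
    options   => dbms_logmnr.dict_from_online_catalog);
end;
/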
Executing select * from v$logmnr_contents; is very slow, because the data has to be mined out of the archived logs before it can be displayed. While it runs, the alert log shows the archived logs being mined one by one, like this:
LOGMINER: Begin mining logfile for session -2147483391 thread 1 sequence 26203, /data2/archlogs_tmp/1_26203_906253421.dbf
LOGMINER: End mining logfile for session -2147483391 thread 1 sequence 26203, /data2/archlogs_tmp/1_26203_906253421.dbf
LOGMINER: Begin mining logfile for session -2147483391 thread 1 sequence 26204, /data2/archlogs_tmp/1_26204_906253421.dbf
LOGMINER: End mining logfile for session -2147483391 thread 1 sequence 26204, /data2/archlogs_tmp/1_26204_906253421.dbf
LOGMINER: Begin mining logfile for session -2147483391 thread 1 sequence 26205, /data2/archlogs_tmp/1_26205_906253421.dbf
LOGMINER: End mining logfile for session -2147483391 thread 1 sequence 26205, /data2/archlogs_tmp/1_26205_906253421.dbf
...... (many more archived log files are traversed in the same way)
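One housekeeping step the walkthrough above leaves out: the mined rows are only visible to the session that started LogMiner, so once the staging table has been created it is worth releasing those resources explicitly. A minimal sketch:

sql> # copy out what you need first, e.g. create table logminer.Z1 as select * from v$logmnr_contents;
sql> exec dbms_logmnr.end_logmnr;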
5. Analyze the online redo log files
Check the location of the current online redo files: select t1.GROUP#,t1.STATUS,t2.MEMBER,t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
sql> #(1) Check the online redo log currently in use, then create a table and run some DML against it
sql> select t1.GROUP#,t1.STATUS,t2.MEMBER,t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';

    GROUP# STATUS           MEMBER                                                                             TYPE
---------- ---------------- -------------------------------------------------------------------------------- -------
         1 CURRENT          /home/oradata/powerdes/redo01.log                                                  ONLINE
sql>
sql> conn logminer/logminer0418;
Connected.
sql> create table z1(id number);
Table created.
sql> insert into z1(id) values(1);
1 row created.
sql> insert into z1(id) values(2);
1 row created.
sql> insert into z1(id) values(3);
1 row created.
sql> commit;
Commit complete.
sql> #(2) Now analyze the online redo log
sql> exec dbms_logmnr.add_logfile('/home/oradata/powerdes/redo01.log');
PL/SQL procedure successfully completed.
sql> exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);
PL/SQL procedure successfully completed.
sql> create table l1_Z1 as select * from v$logmnr_contents;
Table created.
sql> # Now look at the analyzed online redo log; the earlier operations show up in sql_redo.
sql> select t.timestamp,t.sql_redo from l1_Z1 t where seg_owner='LOGMINER';

TIMESTAMP   SQL_REDO
----------- --------------------------------------------------------------------------------
2017/3/29 1 create table z1(id number);
2017/3/29 1 insert into "LOGMINER"."Z1"("ID") values ('1');
2017/3/29 1 insert into "LOGMINER"."Z1"("ID") values ('2');
2017/3/29 1 insert into "LOGMINER"."Z1"("ID") values ('3');
sql>
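Since l1_Z1 is a straight copy of v$logmnr_contents, it also carries the sql_undo column described in step (4) of the previous section, which can be used to pull out rollback statements for the test inserts; a sketch using the standard v$logmnr_contents column names:

sql> select t.sql_undo from l1_Z1 t
     where t.seg_owner = 'LOGMINER' and t.seg_name = 'Z1' and t.operation = 'INSERT';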