Oracle TimesTen Concepts (In-Memory Database)

TimesTen

      An application connects to a TimesTen data store, and on disk the data store is backed by checkpoint files and transaction log files. If a checkpoint is taken every hour, roughly one hour of transaction log is kept between checkpoints, spread across multiple log files. When a checkpoint file is too small, a new file is created; checkpoint files follow the same file structure as transaction log files.
      If the TimesTen instance fails, on restart TimesTen opens the checkpoint file and applies the transaction log from the log files to recover to the point in time of the failure.

TimesTen connects through a DSN; the DSN is an ODBC definition that points to the data store.
User DSNs: .odbc.ini

System DSNs: sys.odbc.ini

The odbc.ini files hold the data store information, the DSN attributes, and the TimesTen home.
On Windows, DSNs are not installed by TimesTen; user and system data source names are configured through the ODBC administrator.
On UNIX, the DSN files are installed by the TimesTen installer.
Shared memory used by TimesTen = permanent segment + temporary segment + log buffer + overhead
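A minimal sketch of a sys.odbc.ini entry for the DSN used throughout these notes; the driver path, data store path, and sizes are illustrative assumptions (in TimesTen 7.0 PermSize/TempSize are in MB, LogBuffSize in KB, LogFileSize in MB; check your release):

[rundata_tt70]
Driver=/opt/TimesTen/tt70/lib/libtten.so
DataStore=/var/TimesTen/rundata_tt70/rundata
PermSize=64
TempSize=32
LogBuffSize=65536
LogFileSize=128

With these values, LogFileSize follows the LogFileSize = 2 * LogBuffSize rule of thumb noted in the transaction logging section below.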


ttisql = CLI utility
start/stop the main daemon
ttdaemonadmin -start

ttdaemonadmin -stop

connect
 - ttisql -connStr "dsn=rundata_tt70"
 - ttisql rundata_tt70
 - Command> connect rundata_tt70;

disconnect
Command> disconnect;

Show connection-independent attribute values
            Command> show all;

Status
ttstatus

# shows the main daemon and its subdaemons

Daemon process
            ttdaemonadmin -start

            ttdaemonadmin -stop

# the main daemon spawns child subdaemons
RAM policy
            ttadmin -rampolicy always rundata_tt70

            # with a manual policy, load and unload the data store by hand

Display RAM information
            ttadmin -query rundata_tt70


Access control (instance administrator)
-          create and drop instance users (users who log in to the TT instance)
-          grant and revoke user privileges (admin, connect, create datastore, ddl, write, select)
define users
-          create, drop, grant, and revoke use syntax similar to Oracle DB, as sketched below
connect with a user ID and password (user=timesten, password=fast, dsn=rundata_tt70)
-          ttisql "dsn=rundata_tt70;uid=timesten;pwd=fast"
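A minimal sketch of defining a user, assuming Access Control is enabled on the instance; the user name, password, and privilege list are illustrative:

            Command> create user timesten identified by 'fast';
            Command> grant connect, write to timesten;
            Command> revoke write from timesten;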

Destroying a data store removes all of its files and updates internal system information: checkpoint files, transaction log files, reserved space, daemon files, catalog files.
-          ttdestroy rundata_tt70

# never remove TimesTen data store files manually

SQL and DDL (create table, alter table, views, indexes, sequences, etc.) use syntax similar to Oracle.
            ttisql <dsn>

            Command> tables;                          (show all tables)
            Command> indexes;                         (show all indexes and PKs/FKs)
            Command> tables administrator.%;          (show all tables owned by administrator)
            Command> indexes administrator.%;         (show all indexes and PKs/FKs owned by administrator)
Estimate the memory a table consumes:
ttsize -tbl <table> -rows 10000 rundata_tt70

Monitor system data store sizes with the SYS.MONITOR table (PERM and TEMP size columns), or:
dssize;   (show permanent and temporary sizes)
ttisql <dsn>

            Command> monitor;   (show SYS.MONITOR statistics)
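A sketch of querying SYS.MONITOR directly; the column names below are the usual TimesTen ones but should be verified against your release:

            Command> select perm_allocated_size, perm_in_use_size, temp_allocated_size, temp_in_use_size from sys.monitor;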
show schema
ttschema rundata_tt70
Datastore management

ttbulkcp imports and exports data between a table and flat files:
ttbulkcp -i rundata_tt70 table file.dat
ttbackup/ttrestore back up and restore a data store (full, incremental, or stream); ttrestore will not overwrite an existing data store:
ttbackup -dir /home/backup rundata_tt70
ttrestore -dir /home/backup rundata_tt70
ttmigrate upgrades a data store (e.g., version 6 to 7):
ttmigrate -c rundata_tt60 store.dmp

ttmigrate -r rundata_tt70 store.dmp

ttschema displays the DDL of a data store:
ttschema rundata_tt70 schema_name
ttadoptstores moves a data store from one TimesTen instance to another:
ttadoptstores -dspath /home/stores/source

Checkpoint operation

            A checkpoint is a snapshot of memory written to disk. Checkpoints are taken at start, stop, and recovery, and periodically in the background; the interval and other settings are configured with ttCkptConfig.
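A small illustration: the ttCkpt built-in procedure forces a foreground checkpoint immediately (the exact parameter list of ttCkptConfig varies by release, so it is not shown here):

            Command> call ttCkpt;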

SNMP traps
            TimesTen sends SNMP traps for critical events such as: daemon process died, file write errors, low or no data store space available. Trap settings live in snmp.ini.

Transaction logging
-          one transaction log per data store
-          the transaction log is used for redo (recovery) and undo (rollback)
-          log records are written to the log buffer, then flushed to the latest log file (LogBuffSize and LogFileSize come from the DSN configuration)  # rule of thumb: LogFileSize = 2 * LogBuffSize
Non-durable commit: the commit record goes only to the in-memory log buffer.
Durable commit: the log buffer is flushed to the log file on disk at commit; flushing also happens when the log buffer fills.

Transaction durability

-          provided by durable commits and checkpoints; see the sketch below
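A sketch of forcing durability for a single transaction when the DurableCommits connection attribute is 0 (non-durable commits by default); ttDurableCommit is a built-in procedure:

            Command> call ttDurableCommit;
            Command> commit;   -- this commit is flushed to the log file on disk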
Locking and lock granularity
            Locks are held by transactions.
            LockLevel (lock granularity):
-          row
-          data store
Lock timeouts and deadlocks are detected in the data store.
Transaction isolation levels
-          read committed or serializable; the default is set by the Isolation DSN attribute
-          with either isolation level, uncommitted data cannot be read (no read inconsistency)
example (serializable)
T1 selects 1 row for update
T2 select * from the table (blocked by T1)
T3 updates another row (blocked by T2's serializable read locks)

ttxactadmin is used to monitor transaction locks:
ttxactadmin rundata_tt70

-- or --

SYS.MONITOR (columns such as DEADLOCKS, LOCK_TIMEOUTS, XACT_BEGINS, XACT_COMMITS, ...)



Query optimizer column and table statistics

            Column statistics are kept in the SYS.COL_STATS table: number of non-null values, min/max values.
            Table statistics are kept in the SYS.TBL_STATS table: number of rows, timestamp.
            Exact statistics:
                        - the ttisql statsupdate command, or
- the ttOptUpdateStats built-in procedure
            Statistical sampling:
                        - the ttisql statsestimate command, or

- the ttOptEstimateStats built-in procedure
            Example statistics commands (built-in sketches follow below):
                        Command> statsestimate jobs 500 rows;
Command> statsestimate jobs 10 percent;
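Built-in procedure equivalents, as a sketch; the argument lists (table name, invalidate flag, sampling spec) are assumed from the TimesTen reference and should be verified:

                        Command> call ttOptUpdateStats('jobs', 1);
                        Command> call ttOptEstimateStats('jobs', 1, '10 percent');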
            Viewing execution plans
                        The ttisql explain command shows the execution plan.
Example:
                        Command> explain select ... from ... where ...;
                        Level = depth of the table reference in the plan
            Query optimizer hints
                        Enable or disable optimizer features with:
                        - the ttOptSetFlag built-in procedure
                        - ttisql optimizer hints
Example optimizer hints:
            Command> call ttOptSetFlag('Hash', 0);
Command> call ttOptSetFlag('NestedLoop', 0);
Show statement execution timing:
            Command> set timing 1;


Client/server connections to TimesTen

            A separate TimesTen IMDB per application gives more performance than sharing a single TT instance.
            The server machine runs the TT IMDB and exposes a server DSN for connections:
                        - install the TimesTen server and define and configure the server DSN
            Clients connect through a client DSN (which references a server DSN) or a logical server name (server hostname and port):
- install the TimesTen client and define and configure the logical server names and the client DSN
- each logical server name must be configured with a listen port, in
InstallDir/info/sys.ttconnect.ini
Local client/server access on the same machine can use shared-memory IPC or UNIX domain sockets.
Example client connection configuration:
System DSN -->
Client DSN (DSN) = ttcore_client
Server DSN (TTC_SERVER_DSN) = ttcore
User ID = administrator, password = <>
Logical server (TTC_SERVER) = demo_server
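The same configuration written as the actual files, as a sketch; the host name and port are illustrative, while TTC_SERVER and TTC_SERVER_DSN are the standard client DSN attributes:

In the client's sys.odbc.ini:
[ttcore_client]
TTC_SERVER=demo_server
TTC_SERVER_DSN=ttcore

In InstallDir/info/sys.ttconnect.ini:
[demo_server]
Network_Address=server1.example.com
TCP_PORT=17002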

Example local client/server connection on the same machine:
System DSN -->
Client DSN (DSN) = ttcore_client_ipc
Server DSN (TTC_SERVER_DSN) = ttcore
User ID = administrator, password = <>
Logical server (TTC_SERVER) = test_ipc
Network address = ttShmHost
ttShmHost is enabled in InstallDir/srv/info/ttendaemon.options:

-serverShmIpc
-serverShmSize 64


ttstatus output (one client connected via TCP/IP, one via shared-memory IPC):

TimesTen status report as of Thu Apr 26 14:46:23 2007
Daemon pid 3040 port 17000 instance tt70_32
TimesTen server pid 1060 started on port 17002
TimesTen webserver pid 3668 started on port 17004

------------------------------------------------------------------------

Data store c:\progra~1\timesten\demoda~1\TT_tt70_32

There are no connections to the data store
Replication policy  : Manual
Cache agent policy : Manual
------------------------------------------------------------------------

Data store c:\program files\timesten\demodatastores\ttcore

There are 8 connections to the data store
Data store is in shared mode
Shared Memory KEY Global\DBI46302969.1.SHM.37 HANDLE 0x2c4
Type            PID     Context     Connection Name              ConnID
Server          3720    0x00bbe008  ttcore_client                     2
    (Client Information: pid: 3904; IPC: TCP/IP;
        Node: Lek (172.16.1.9))
Server          4044    0x00bbe008  ttcore_client_ipc                 1
    (Client Information: pid: 2752; IPC: shared memory;
        Node: Lek (127.0.0.1))
Subdaemon       3136    0x0066c398  Worker                         2042
Subdaemon       3136    0x00715fa8  Flusher                        2043
Subdaemon       3136    0x1fd42208  Checkpoint                     2044
Subdaemon       3136    0x1fd71690  Aging                          2045
Subdaemon       3136    0x1fda0b18  HistGC                         2046
Subdaemon       3136    0x1fdcffa0  Monitor                        2047
Replication policy  : Manual
Cache agent policy : Manual
------------------------------------------------------------------------
End of report

Cache Connect to Oracle (product option)

            The cache agent connects the Oracle database and the TT IMDB:
-          cache Oracle tables in TT
-          cache individual tables or sets of related tables
-          read-only or updatable TT caches
-          automatic data synchronization:
Oracle to TT (async or sync)
TT to Oracle (async or sync)
            Caching data from the Oracle DB:

example: cache customers who have more than 500 orders
                                    create cache group premierusers from customer (name varchar(100) not null) where customer.numorders > 500;
Read-only cache group

            TT cache tables cannot be updated directly.
            Updates go through Oracle, and Oracle updates are automatically refreshed into TT.

Updatable cache group

                        Synchronous writethrough: committed in Oracle as part of the TT commit.
                        Asynchronous writethrough: transactions are batched and committed in Oracle afterwards.
                        ## updates made directly in Oracle must be manually loaded into TT
Multiple updatable cache groups

                        Example: 2 TT instances both writing through to 1 Oracle DB.
On-demand caching

                        Data is loaded on demand from Oracle when it is not found in TT (load on select).
                        Updates to the TT cache are propagated to the Oracle DB.
                        Data ages out of TT.
   Configuring a cache group (TT running Cache Connect to Oracle)

                        A cache group must have exactly one root table.

                        Data sync TT to Oracle: automatic propagation or manual flush (flush the TT cache to Oracle).
                        Data sync Oracle to TT: incremental autorefresh, full autorefresh, manual load, or manual refresh (example: autorefresh every 5 minutes).
                        Cache Connect to Oracle setup:

                                    - install Oracle and set the environment, e.g. ORACLE_HOME, PATH, shared library path
                                    - define the DSN that will contain the cache tables
                        Oracle users:

                                    - create a user/password for the superuser cacheadmin in Oracle
                                    - create a user/password for the Oracle user oratt
                                    - grant the required privileges
TT user:

                                    - create a TT user for the Oracle user oratt, whose TT password may differ from the Oracle password
                        TT data store requirements:

                                    - define the superuser cacheadmin
                                    - start the cache agent
                        Cache admin user:

                                    - set the cache admin user and password with ttCacheUidPwdSet, as sketched below
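A sketch of setting the cache administration credentials with the built-in procedure named above (the password value is illustrative):

                                    Command> call ttCacheUidPwdSet('cacheadmin', 'cachepwd');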
                        Cache agent:

                                    - required for read-only cache groups, async writethrough, autorefresh, and load cache group; managed with ttadmin:
                                                - ttadmin -cachestart cachedsn

                                                - ttadmin -cachestop cachedsn

                                                - ttadmin -cachepolicy always cachedsn

                                                - ttadmin -cacheuidget cachedsn

   Creating cache groups in TT

                        Read-only cache group
                                    - cannot be updated by the application
Autorefresh cache group
                                    - set an interval at which TT is refreshed from Oracle table updates
                                    - incremental or full
Statement passthrough from TT to Oracle
                                    - statements that cannot be processed in TT are redirected to Oracle
                                    - controlled by the passthrough setting, as sketched below
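A sketch of enabling passthrough on the connection; PassThrough is the standard connection attribute, and the level semantics (e.g. level 1 redirects statements TT cannot handle) should be checked for your release:

                                    ttisql "dsn=cachedsn;uid=oratt;pwd=<...>;PassThrough=1"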
Synchronous writethrough cache group
                                    - cache table updates are propagated to Oracle
                                    - a new transaction cannot be issued until both Oracle and TT have committed
                        Example (sync):
                                    The app sends (insert, update, commit) to TT; TT sends the changes to Oracle; then TT commits to its log buffer and log files.
Asynchronous writethrough cache group
                                    - cache table updates are committed in TT and then propagated to Oracle
                                    - new transactions can be issued on the cache tables without waiting for the Oracle commit
                                    - call ttRepStart (the replication agent applies the updates)
                                    - the flush command is used to flush changes to Oracle
Example (async):
                                    The app sends (insert, update, commit) to TT; TT commits, then sends the changes to Oracle to be committed.
Load cache group

                                    - copies newly committed rows in the cache group from Oracle into the cache tables
Unload cache group

                                    - deletes rows from the cache tables without affecting Oracle
Refresh cache group

                                    - equivalent to an unload followed by a load cache group; see the sketches below
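Sketches of the three statements, borrowing the PRODUCTCACHE group from the definition later in these notes; the COMMIT EVERY clause batches the copied rows:

                                    Command> load cache group productcache commit every 256 rows;
                                    Command> unload cache group productcache;
                                    Command> refresh cache group productcache commit every 256 rows;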
Automatic data aging

                                    - removes data that is no longer needed, using LRU
                                    - or a time-based aging policy driven by a time value
Automatic load on select

                                    - loads data from Oracle when a select statement does not find it in TT
User managed cache group

                                    - full control over whether cache table updates are propagated to Oracle
                                    - full control over whether Oracle updates are refreshed into TT
                                    - flush cache group flushes the TT cache tables to the Oracle tables
                                    - the propagate cache table attribute enables synchronization to the Oracle tables
                                    - the readonly cache table attribute forbids direct updates by the application
                                    - Oracle partitioned tables can be cached in any cache group type

Cache Connect to Oracle web console

http://<ip>:17004/cache

Example Cache group definition "PRODUCTCACHE"

CREATE READONLY CACHE GROUP "PRODUCTCACHE"
AUTOREFRESH MODE INCREMENTAL INTERVAL 15 SECONDS
STATE PAUSED
FROM
  "OE"."PRODUCT_INFORMATION" (
    "PRODUCT_ID" NUMBER(6) NOT NULL,
    "PRODUCT_NAME" VARCHAR2(50 BYTE),
    "PRODUCT_STATUS" VARCHAR2(20 BYTE),
    "LIST_PRICE" NUMBER(8,2),
    PRIMARY KEY("PRODUCT_ID")
  ),
  "OE"."PRODUCT_DESCRIPTIONS" (
    "PRODUCT_ID" NUMBER(6) NOT NULL,
    "LANGUAGE_ID" VARCHAR2(3 BYTE) NOT NULL,
    "TRANSLATED_NAME" NVARCHAR2(50) NOT NULL,
    "TRANSLATED_DESCRIPTION" NVARCHAR2(2000) NOT NULL,
    PRIMARY KEY("PRODUCT_ID","LANGUAGE_ID"),
    FOREIGN KEY("PRODUCT_ID")
      REFERENCES "OE"."PRODUCT_INFORMATION"("PRODUCT_ID")
  )
  WHERE (oe.product_descriptions.product_id between 2001 and 3000);





TT-to-TT replication

-          master to subscriber (unidirectional), master to master (bidirectional), N-way
-          load balancing can be performed across the replicas
-          online upgrade: with a 3-node setup, upgrades need no downtime
-          async or sync (two-phase commit)
-          replication is transparent to the app (no code changes)
-          N-way replication: each master replicates to N-1 peers, which adds network load
-          replication propagator: one master sends to multiple propagator nodes, and each propagator sends on to subscriber data stores
Steps for TT-to-TT replication

The replicated data element is a data store or a table.
Configure the master:

-          create the data store
-          create the replication scheme
-          start the replication agent
Configure the subscriber (commands follow the scheme examples below):

-          run ttrepadmin -duplicate to create the subscriber store from the master store
-          start the replication agent
Unidirectional replication statement

CREATE REPLICATION UniReplSchema
ELEMENT e TABLE reptable
MASTER ttmaster ON "host1"
SUBSCRIBER ttsubscriber ON "host2";
call ttrepstart;

call ttrepstop;
drop replication UniReplSchema;

Bidirectional async replication statement

CREATE REPLICATION BiReplSchema
ELEMENT e TABLE reptable
MASTER ttmaster ON "host1"
SUBSCRIBER ttsubscriber ON "host2"
ELEMENT e2 TABLE reptable
MASTER ttsubscriber ON "host2"
SUBSCRIBER ttmaster ON "host1";
            Replication agent

                        Master (and both stores in a bidirectional scheme):

                        ttadmin -repstart mastdsn

ttadmin -repstop mastdsn

ttadmin -reppolicy always mastdsn
ttadmin -query mastdsn
ttstatus
Subscriber:

ttrepadmin -duplicate -from mastdsn -host "host1" subdsn
ttadmin -repstart subdsn
            Monitor:

                        ttrepadmin -receiver -state

ttrepadmin -showstatus ttmaster
ttrepadmin -showconfig ttmaster
                        -- or --
                        call ttRepSubscriberStateSet (sets the subscriber state)

                    states: start/pause/stop/failed

Return receipt sync replication statement (not two-phase commit, but the subscriber acknowledges receipt quickly)

CREATE REPLICATION ReceiptReplSchema
    ELEMENT e TABLE reptable
MASTER ttmaster ON "host1"
SUBSCRIBER ttsubscriber ON "host2"
RETURN RECEIPT
   STORE ttmaster ON "host1"
RETURN SERVICES ON WHEN REPLICATION STOPPED;


Return twosafe sync replication statement (two-phase commit)

CREATE REPLICATION TwoReplSchema
    ELEMENT e TABLE reptable
MASTER ttmaster ON "host1"
SUBSCRIBER ttsubscriber ON "host2"
RETURN TWOSAFE
EXCLUDE TABLE jobs
   STORE ttmaster ON "host1"
...;
Altering a replication scheme

            The syntax is similar to Oracle's: first call ttRepStop, then issue the ALTER statement, then call ttRepStart, as sketched below.
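A sketch against the UniReplSchema defined earlier; the added element name and table are illustrative:

            Command> call ttRepStop;
            Command> alter replication UniReplSchema
                     add element e2 table jobs
                     master ttmaster on "host1"
                     subscriber ttsubscriber on "host2";
            Command> call ttRepStart;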
Replicating cache tables in a cache group

            Read-only cache group:
Oracle autorefreshes the master store; if the master store is lost, autorefresh switches to the standby master store.
AWT cache group:
The standby master store propagates to Oracle; if the standby is lost, the master store takes over propagation.
                                    Steps to configure replication on cache tables:
                                                Master:
                                                - create the cache group / start the cache agent
                                                - create the replication scheme / start the replication agent
Subscriber:
                                                - duplicate the replicated store / start the replication agent
                                                - cache group / start the cache agent
                                                Replication TRANSMIT NONDURABLE and COMPRESS TRAFFIC options:
                                                            TRANSMIT NONDURABLE: the master sends updates without waiting for a durable log flush, so data is not guaranteed if the master fails after writing but before replicating
COMPRESS TRAFFIC: compresses the replication stream
                                                Multiple NICs:
                                                            Aliases in the hosts file:
                                                            X.X.X.X         hostA
X.X.X.Y         hostA_rep
# hostA_rep is the alias used for replication
            TimesTen replication monitoring

Timestamp-based update conflict resolution (a binary column is defined on the table to detect conflicts).
A transaction log file threshold (FAILTHRESHOLD) lets the master mark a store as failed if the replication link stays down too long.
Replication uses the TTREP user.
Use ttdestroy and ttrepadmin -duplicate to re-create a failed store.
With replication there is no need for a hardware cluster-manager layer.
If a subscriber data store fails in replication mode, recreate it from the master.
If the master store fails in propagator mode, the subscriber becomes the propagator and the propagator becomes the subscriber.
If a subscriber data store fails in propagator mode, recreate it from the master.
ttrepadmin -bookmark monitors the log buffer/files.
ttrepadmin -showconfig monitors the subscribers and replication elements.
ttrepadmin -showstatus monitors the status of the replication agent and its peers.
SYS.MONITOR columns LOG_FS_READS and LOG_BUFFER_WAITS track the log buffer and log files; if they keep increasing, the log is a bottleneck (see the sketch below).
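A sketch of checking those two counters; the column names are the standard SYS.MONITOR ones but should be verified against your release:

Command> select log_fs_reads, log_buffer_waits from sys.monitor;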
