Tmax 5 SP1
1. Added features
Key new features added to the Tmax 5 SP1 include:
1.1. HMS
HMS (Hybrid Messaging System) is a Java-based messaging system that follows the JMS (Java Message Service) standard developed by Sun Microsystems. It reflects the concepts and operations of JMS and serves as a communication medium that enables loose coupling between senders and receivers.
Since HMS operates on the Tmax system, which is a TP-Monitor, users can flexibly combine the functions of the TP-Monitor and the functions of the messaging system.
1.2. Tuxedo Async Gateway
Tuxedo Async Gateway is a gateway for asynchronous incoming/outgoing communication with Tuxedo’s domain gateway.
1.3. WebTJCA
This is a library package that acts as a resource adapter in the J2EE JCA (J2EE Connector Architecture) specification. Users can communicate with Tmax through this library.
1.4. Async WebT
This is a Java library for asynchronous in/out communication with Tmax’s Async Java gateway and channels.
1.5. TDL feature expansion
The overall TDL (Tmax Dynamic Library) functionality has been improved, and the following TDL functions have been expanded:
- Dynamic Library Linking
  By using the dynamic library linking technique provided by the OS, frequently changing business modules can be updated without service interruption.
- Introduction of indexing techniques
  Indexing improves the search performance of the module local cache and minimizes lookups of the shared memory hash table.
1.6. Providing outbound functionality for the web service gateway
Previously, the web service gateway provided only an inbound function, which exposed Tmax services as web services without any specific changes. Starting with Tmax 5 SP1, an outbound function is also provided, allowing Tmax to call external web services.
1.8. Engine
1.8.1. IRT transaction support features
Intelligent Routing (IRT) was added in 4.0 SP1, but it only worked for non-transactional server groups and gateways. 5.0 SP1 adds IRT support for transactions, providing more robust routing capabilities.
- IRT on multi-node
The previous approach to load balancing in a multi-node environment (prior to 4.0 SP1) was as follows:
- If the local node's server is down, CLH detects this and reschedules the request to the remote node's server. This ensures that all client requests are processed without error.
- If the server on the remote node is down, the local node's CLH does not know its status, so it still sends the request to the remote node, and the client receives a TPENOREADY error.
To solve this problem, the multi-node IRT function manages the status of the remote servers used for static load distribution and reschedules requests to servers that can handle them. Therefore, even if a remote node's server process terminates abnormally and cannot process the service, the request is rescheduled to a server that can process it.
- IRT in a multi-domain environment
In a multi-domain environment, the TPENOREADY error can occur not only when the gateway server process terminates, but also depending on the connection status of the gateway.
IRT in multi-domain checks the connection status of each domain gateway and, if a gateway is disconnected, reschedules the request to a gateway whose connection is alive.
- IRT Transaction
Previously, IRT only worked for non-transactional server groups and gateways, but in 5.0 SP1, it was made available for transactions as well.
1.8.3. Added ability to change MAX server count while running
A feature has been added to allow the MAX value in the SERVER section to be increased while the system is running; previous versions only allowed decreasing it. This feature is not available for all servers: to change the MAX value, the new server types STD_DYN and UCS_DYN have been added, and only servers whose SVRTYPE is set to one of these can change their MAX value. STD_DYN behaves as TCS and UCS_DYN as UCS; all other servers operate as before. Because the settings are separated, existing sites are unaffected, and only sites that use the new settings see the new behavior.
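As a sketch (the server and group names below are hypothetical), a server that should allow its MAX value to be changed at runtime is given the new SVRTYPE, while an ordinary server keeps the previous behavior:

```
*SERVER
svr_dyn   SVGNAME = svg1, SVRTYPE = STD_DYN, MIN = 2, MAX = 10
svr_old   SVGNAME = svg1, MIN = 2, MAX = 10
```

Here only svr_dyn can have its MAX value changed while running; svr_old operates exactly as in previous versions.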
Caution
- Changes in settings
  - SVRTYPE values STD_DYN and UCS_DYN were added to the SERVER section.
- Changes in spri
  - When viewing spri in tmadmin, spri values are not contiguous after the MAX value has been adjusted.
  - After adjusting the MAX value, an spri that has already been used once by one svr can be used by other svrs as well.
- Limits
  - This feature is not available in version 16384.
  - When decreasing the MAX value, it cannot be reduced below the MIN value.
  - When decreasing the MAX value, every spr whose index comes after the index corresponding to the new value must be in the down state.
    Example) If the initially assigned spri values are 8193, 8194, 8195, 8192, in that order, 8195 and 8192 must be down to reduce MAX to 2.
1.8.3. Tibero transaction support
XA is a standard specified by X/Open for distributed transaction processing through 2PC (Two-Phase Commit). XA is provided by each DBMS vendor, and transactions between heterogeneous systems are guaranteed through this standard protocol.
Tmax currently provides distributed transaction functionality through integration with Oracle, Informix, DB2, and SYBASE, and in 5.0 SP1, it also provides distributed transaction functionality through integration with Tibero.
Configuration
The following shows how to set up Tibero for distributed transactions with a configuration example.
How to set up
*SVRGROUP
SVG_TIBERO  [DBNAME = TIBERO,]
            [OPENINFO = "TIBERO_OPEN_INFORMATION",]
            [CLOSEINFO = "TIBERO_CLOSE_INFORMATION",]
            [TMSNAME = "TIBEROTMS_PROCESS_NAME"]
Example
*SVRGROUP
SVG_TIBERO  DBNAME = TIBERO,
            OPENINFO = "TIBERO_XA:user=tibero,pwd=tmax,sestm=60,db=tibero4,LogDir=/data1/tmax/log/xalog"
New XA STUB library
An XA STUB library has been added for Tibero XA Switch. The file name is libtbs. This library must be linked when creating tms and XA servers.
Here is an example TMS Makefile:
# TMS Makefile for AIX 64bit
TBLIBDIR = $(TB_HOME)/client/lib
TBLIB    = -ltbxa -ltbertl -ltbcli -lm -lpthread
TARGET   = tms_tbr
APOBJ    = dumy.o
APPDIR   = $(TMAXDIR)/appbin
TMAXLIBD = $(TMAXDIR)/lib64
TMAXLIBS = -ltms -ltbs
TMAXINC  = -I$(TMAXDIR)
CFLAGS   = -q64
LDFLAGS  = -brtl

all : $(TARGET)

$(TARGET): $(APOBJ)
	$(CC) $(CFLAGS) $(LDFLAGS) -o $(TARGET) $(TMAXINC) -L$(TMAXLIBD) $(TMAXLIBS) $(APOBJ) -L$(TBLIBDIR) $(TBLIB) $(SYSLIBS)
	mv $(TARGET) $(APPDIR)/.

$(APOBJ):
	$(CC) $(CFLAGS) $(LDFLAGS) -c dumy.c

clean:
	-rm -f *.o core $(APPDIR)/$(TARGET)
1.8.4. Rolling Down feature
Previously, if the Tmax system that was processing client requests was shut down, it delivered responses only for the requests currently being processed and returned a TPECLOSE error for the requests still waiting in the queue.
In Tmax 5 SP1, a function was added that delivers normal responses to all requesting clients before the Tmax engine shuts down.
How to use
$ tmdown [-R][-n]
This option prevents the loss of client request messages. When the Tmax system is shut down with this option, CLL blocks the client listening port and responds to the requests already in progress before shutting down.
Detailed description
Assuming that NODEA and NODEB are configured as multi-nodes (or multi-domains) and a total of 100 clients are connected to NODEA, the processing process is as follows:
- Terminating NODEA's Tmax system
  tmdown -R -n NODEA
  - NODEA's CLL blocks the listening port from clients.
  - Requests that were already being processed in NODEA are completed, and the results are delivered to the clients.
  - NODEA's Tmax system shuts down after delivering normal responses to all Tmax clients connected to NODEA.
  - Requests waiting in the queue are sent to NODEB, which is set as TMAX_BACKUP_ADDR.
  - After processing the requests received from NODEA, NODEB delivers the results directly to the clients that originally made them.
  - All clients connected to NODEA receive a normal response (all 100 clients must receive one).
- Terminating NODEB's Tmax system
  tmdown -R -n NODEB
  - The 100 client requests are distributed roughly 50:50 between NODEA and NODEB by NODEA's CLH. Shut down NODEB's Tmax system with tmdown -R -n NODEB.
  - NODEB's CLL blocks the listening port from clients.
  - Requests that were already being processed in NODEB are completed.
  - Because the clients are connected to NODEA, NODEB sends the processing results to NODEA's CLH, and NODEA's CLH delivers them to the clients.
  - Requests queued in NODEB are sent to NODEA, which is set as TMAX_BACKUP_ADDR.
  - NODEB's Tmax system shuts down.
  - After processing the requests received from NODEB, NODEA delivers the results to the clients that originally made them.
  - All clients connected to NODEA receive a normal response (all 100 clients must receive one).
Caution
In order for NODEB to process NODEA requests, the TMAX_BACKUP_ADDR and TMAX_BACKUP_PORT of the client connected to NODEA must be set to NODEB. Otherwise, when NODEA’s Tmax system shuts down, it will send a TPESYSTEM error to any client requests that have not yet been processed.
Configuration
To use this feature, the COUSIN environment configuration between nodes must be set.
Multi-node
*SVRGROUP
svg1    NODENAME = NODE1, COUSIN = "svg2"
svg2    NODENAME = NODE2
Multi-domain
*SVRGROUP
svg1    NODENAME = NODE1, COUSIN = "GW1"
*GATEWAY
GW1     RGWADDR = "192.168.1.48"    // IPADDR of NODEB
Tmax client
TMAXDIR=/EMC01/starbj81/tmax
TMAX_HOST_ADDR=192.168.1.43    // NODEA
TMAX_HOST_PORT=8350
TMAX_BACKUP_ADDR=192.168.1.48  // NODEB
TMAX_BACKUP_PORT=8350
WebT configuration
To use the Rolling Down feature with WebT, all three of the following must be set:
- USE_ROLLING_DOWN setting
  <run.sh>
  java -classpath ./webt50.jar:. -DUSE_ROLLING_DOWN=true WebtClient
  <Set from source>
  System.setProperty("USE_ROLLING_DOWN", "true");
- WebT message header settings
  <webt.properties>
  // Using a connection pool
  connectionPool.group1.header.type=extendedV4
  // Using a single connection
  headerType=extendedV4
  <When set in JeusMain.xml>
  <header-type>extendedV4</header-type>
  <When set in source>
  connection.setHeaderType("extendedV4");
- Backup settings
  <webt.properties>
  connectionPool.anylink.hostBackupAddr=192.168.1.48    // NODEB
  connectionPool.anylink.hostBackupPort=8350
  <JeusMain.xml>
  <backup-host-name>192.168.1.48</backup-host-name>
  <backup-port>8350</backup-port>
1.8.5. Added Rolling Down error codes
Existing Rolling Down feature
If the Tmax system that was processing the client’s request went down, versions prior to 4.0 SP3 Fix#7 would only process and deliver responses to the requests currently being processed, and then deliver a TPECLOSE error to the requests that were queued.
With the addition of the Rolling Down feature in version 4.0 SP3 Fix#7, using this feature (tmdown -R) allows normal responses to be delivered to all requesting clients before the Tmax engine shuts down.
Improved Rolling Down feature
This version adds the TPERDOWN / TPERDCLOSE error codes, allowing the Rolling Down function to be used more efficiently than in previous versions.
Error code | Description
---|---
TPERDOWN | When Rolling Down occurs, tpgetrply calls for the requests remaining in the server queue return a TPERDOWN error, and the data sent by tpacall is placed back in tpgetrply's receive buffer.
TPERDCLOSE | If tpgetrply is called when no more requests are pending in the server queue, the TPERDCLOSE error is set.
User Guide
Users can use the improved Rolling Down feature in the following ways.
When shutting down NODEA's Tmax system (tmdown -R -n NODEA):
- Set the [MAIN] and [BACKUP] sections in the client configuration file (tentative name: tmax.env).
  [MAIN]
  TMAXDIR=/user/tmaxqam/tmax
  TMAX_HOST_ADDR=192.168.1.44
  TMAX_HOST_PORT=8155
  [BACKUP]
  TMAXDIR=/user/tmaxqas/tmax
  TMAX_HOST_ADDR=192.168.1.44
  TMAX_HOST_PORT=8255
- Connect to [MAIN]. Create a Send thread responsible for tpacall and a Recv thread responsible for tpgetrply; the two threads share the [MAIN] context.
- Each thread continuously issues tpacall / tpgetrply requests simultaneously (each service runs for more than 1 second).
- Requests accumulate in the server queue.
- Terminate the Tmax system set as [MAIN] with the tmdown -R option.
- After Rolling Down is performed, the Send thread receives a TPERDOWN error and, on receiving it, stops sending tpacall requests for a while.
- tpgetrply calls made after Rolling Down also receive a TPERDOWN error, and the request data is stored in the receive buffer (rcvbuf).
- The user treats the data returned by tpgetrply as data to be re-requested after connecting to the system designated as [BACKUP], and stores all of it.
- tpgetrply returns a TPERDCLOSE error when no more request data remains in the server queue.
- When that error occurs, connect to the Tmax system specified as [BACKUP] (tmaxreadenv, tpstart).
- The saved data is used to re-request the transactions.
1.8.6. Loss Service Call Function
When a client or server that made a service call terminates or restarts, the response message from the service called through CLH is discarded. A new feature allows a user-specified service (Loss Service) to be called (tpacall with TPNOREPLY | TPNOTRAN) whenever a response message is discarded. The Loss Service is also called for messages that would be discarded because the server or client response queue is removed while messages are still piled up in it.
How to use
Use the -L option in the CLHOPT entry. Add '-L service_name', where service_name is the Loss Service to be called when a response message is discarded.
*NODE
Tmax01    TMAXDIR = …, CLHOPT = "-L LOSS_SVC"
*SERVICE
SVC01     SVRNAME = T4036_001_01
LOSS_SVC  SVRNAME = T4036_001_01
Services designated as Loss Services receive additional information via the cltid field in TPSVCINFO.
The values of cltid are as follows:
- cltid.clientdata[1]: tperrno of the discarded response.
- cltid.clientdata[2]: tpurcode of the discarded response.
- cltid.clientdata[3]: service index value of the discarded response; used as an argument to the tpgetsvcname() function.
Loss Service Call Conditions
CLH forwards a discarded message to the specified Loss Service only when the following conditions are met:
- The message must be a response to tpcall or tpacall.
- The message must contain data, whether the response is normal or an error.
- If CLH terminates abnormally, the message may not be delivered.
1.8.7. Dynamic addition of nodes
In addition to the existing ability to dynamically add services, servers, and server groups using tmadmin’s cfgadd command, this version now allows for the dynamic addition of nodes. Nodes belonging to the COUSIN server group can also be dynamically added.
The following shows how to dynamically add a node:
- Create a configuration file
  <tmconfig.m>
*DOMAIN
tmax1       SHMKEY =@SHMEMKY@, MINCLH=1, MAXCLH=3, TPORTNO=@TPORTNO@,
            BLOCKTIME=300, MAXCPC =100, RACPORT=@TRACPORT@
*NODE
@HOSTNAME@  TMAXDIR = "@TMAXDIR@", APPDIR = "@TMAXDIR@/appbin",
@RMTNAME@   TMAXDIR = "@RMTDIR@", APPDIR = "@RMTDIR@/appbin",
*SVRGROUP
svg1        NODENAME = "@HOSTNAME@", COUSIN="svg2", LOAD=2
svg2        NODENAME = "@RMTNAME@", LOAD=1
*SERVER
svr2        SVGNAME = svg1
*SERVICE
TOUPPER     SVRNAME = svr2
TOLOWER     SVRNAME = svr2
<tmconfig_add.m>
*NODE
@RMTNAME2@  TMAXDIR = "@RMTDIR2@", APPDIR = "@RMTDIR2@/appbin",
*SVRGROUP
svg1        NODENAME = "@HOSTNAME@", COUSIN="svg2,svg3", LOAD=2
svg3        NODENAME = "@RMTNAME2@", LOAD=1
- Start racd
  Start racd on the newly added node.
node3>$ racd -k
- Compile the configuration file
  Compile the configuration file tmconfig_add.m with the new node added.
  When compiling, use the -o option to create a binary configuration file with a different name.
node1>$ cfl -i tmconfig_add.m -a tmconfig_add.m -o tmchg
CFL is done successfully for node(node1)
CFL: rcfl start for rnode (node2)
CFL is done successfully for node(node2)
CFL: rcfl start for rnode (node3)
CFL is done successfully for node(node3)
- Compile the server
  Compile the server for the newly added node.
  node3>$ gst
  node3>$ compile c svr2
- Add the node dynamically
  Dynamically add the node using tmadmin's cfgadd command. Note that the command must be run on each node using tmadmin -l.
  Below is an example of adding node3 where node1 and node2 already exist.
# node1
node1>$ tmadmin -l -m
--- Welcome to Tmax Admin (Type "quit" to leave) ---
$$2 node1 (tmadm): cfgadd -i tmchg
(I) TMM0211 General Infomation : CFGADD started [TMM0902]
(I) TMM0211 General Infomation : CFGADD completed [TMM0907]
config is successfully added

# node2
node2>$ tmadmin -l -m
--- Welcome to Tmax Admin (Type "quit" to leave) ---
$$2 node2 (tmadm): cfgadd -i tmchg
(I) TMM0211 General Infomation : CFGADD started [TMM0902]
(I) TMM0211 General Infomation : CFGADD completed [TMM0907]
config is successfully added
- Start the newly added node
  Start the newly added node (node3).
  node3>$ tmboot -n tmaxh4 -f tmchg
  TMBOOT for node(tmaxh4) is starting:
  Welcome to Tmax demo system: it will expire 2008/11/23
  Today: 2008/9/24
  TMBOOT: TMM is starting: Wed Sep 24 11:11:59 2008
  TMBOOT: CLL is starting: Wed Sep 24 11:11:59 2008
  (I) TMM0211 General Infomation : node register (nodeno = 0(0)) success [TMM0404]
  (I) TMM0211 General Infomation : node register (nodeno = 1(1)) success [TMM0404]
  TMBOOT: CLH is starting: Wed Sep 24 11:11:59 2008
  (I) CLH9991 Current Tmax Configuration:
  Number of client handler(MINCLH) = 1
  Supported maximum user per node = 7966
  Supported maximum user per handler = 7966 [CLH0125]
  TMBOOT: TLM(tlm) is starting: Wed Sep 24 11:11:59 2008
  TMBOOT: SVR(svr2) is starting: Wed Sep 24 11:11:59 2008
- Check the newly added node
  Check whether the node has been added successfully.
node1>$ tmadmin
TMADMIN for rnode (node2): starting to connect to RAC
TMADMIN for rnode (node3): starting to connect to RAC
--- Welcome to Tmax Admin (Type "quit" to leave) ---
$$1 node1 (tmadm): ti
Tmax System Info:
DEMO version 4.0 SP #3 Fix #8: expiration date = 2008/11/22
maxuser = UNLIMITED, domaincount = 1, nodecount = 3, svgrpcount = 3, svrcount = 9, svccount = 6
rout_groupcount = 0, rout_elemcount = 0
cousin_groupcount = 1, cousin_elemcount = 3
backup_groupcount = 0, backup_elemcount = 0
Tmax All Node Info: nodecount = 3:
----------------------------------------------------------------
 no  name   portno  racport  shmkey  shmsize  minclh  maxclh
----------------------------------------------------------------
  0  node1  8350    3155     88350   225760   1       3
  1  node2  8350    3155     88350   225760   1       3
  2  node3  8350    3155     88350   225760   1       3
$$2 (tmadm): st -s
CLH 0:
--------------------------------------------------------------
svc_name  svr_name  count  cq_cnt  aq_cnt  q_avg  avg    status
--------------------------------------------------------------
TOLOWER   svr2      0      0       0       0.000  0.000  RDY
TOUPPER   svr2      0      0       0       0.000  0.000  RDY
Msg from rnode(node2):
CLH 0:
--------------------------------------------------------------
svc_name  svr_name  count  cq_cnt  aq_cnt  q_avg  avg    status
--------------------------------------------------------------
TOLOWER   svr2      0      0       0       0.000  0.000  RDY
TOUPPER   svr2      0      0       0       0.000  0.000  RDY
Msg from rnode(node3):
CLH 0:
--------------------------------------------------------------
svc_name  svr_name  count  cq_cnt  aq_cnt  q_avg  avg    status
--------------------------------------------------------------
TOLOWER   svr2      0      0       0       0.000  0.000  RDY
TOUPPER   svr2      0      0       0       0.000  0.000  RDY
1.8.8. Older AUTOTRAN version compatibility
AUTOTRAN is an option that determines whether, when an implicit transaction is used, the transaction is automatically committed when the service call succeeds and rolled back when it fails.
How the old version worked
Prior to 3.8.9, if an XA server was called without tx_begin, a transaction was started automatically (the default was AUTOTRAN=Y) and treated as a kind of local transaction. For external calls this meant the transactions were not bundled together, giving the same behavior as calling with TPNOTRAN in the current version.
How the current version works
However, in version 3.8.9, if AUTOTRAN=Y, a global transaction is automatically started, so external calls are automatically bound to a transaction. Furthermore, since the default value has been changed to N, this value must be explicitly set when upgrading.
Upgrade example
The program below is an example of upgrading from version 3.8.5 to version 3.8.9 or later.
<SVC1>
SVC1
{
    EXEC SQL INSERT into EMP;
    tpcall("SVC2");
    tpreturn(TPFAIL);
}
<SVC2>
SVC2
{
    EXEC SQL INSERT into EMP2;
    tpreturn(TPSUCCESS);
}
In the call structure client tpcall(SVC1) -> SVC1 tpcall(SVC2) -> SVC2, versions prior to 3.8.9 handled the success of the SVC2 call and the success of the SVC1 call separately as local transactions. From version 3.8.9 onward they are tied into one global transaction, so if the call to SVC2 fails, the call to SVC1 is also treated as a failure.
Therefore, in versions after 3.8.9, both the EMP table of SVC1 and the EMP2 table of SVC2 are rolled back, but in versions before 3.8.9, the EMP table of SVC1 is rolled back and the EMP2 table of SVC2 is committed.
The difference between AUTOTRAN versions prior to 3.8.9 and later can be summarized as "change from local transactions to global transactions".
Therefore, to keep the previous behavior in an upgraded environment, application-level modification is unavoidable. To operate in the same way as the previous version, modify SVC1 as follows:
SVC1
{
    EXEC SQL INSERT into EMP;
    tpcall("SVC2", ..., TPNOTRAN);
    tpreturn(TPFAIL);
}
1.8.9. Added SysMaster support
SysMaster event handler
Library name
libsvrevt.a, libsvrevt.so
User function
The _tmax_event_handler() function is a callback that is invoked whenever an SLOG entry is written, on servers whose SVRTYPE is EVT_SVR.
- Prototype
int _tmax_event_handler(char *progname, int pid, int tid, char *msg, int flags);
- Parameters

  Parameter | Description
  ---|---
  progname | Name of the program that generated the event.
  pid | Process ID.
  tid | Thread ID (reserved for future use).
  msg | Event message.
  flags | Reserved for future use.
- Return value
The return value is currently unused and is reserved for future use.
Environment variable
-
NODE section
Set the event handler log level via the –h option in TMMOPT.
TMMOPT = "-h i | w | e | f" (default: e) - i : fatal, error, warn, info - w : fatal, error, warn - e : fatal, error - f : fatal
- SERVER section
  Set SVRTYPE to EVT_SVR. MIN and MAX must both be 1, and only one EVT_SVR can be configured per node.
Example
<Configuration file>
*DOMAIN
tmax1       SHMKEY =@SHMEMKY@, MINCLH=1, MAXCLH=3, TPORTNO=@TPORTNO@, BLOCKTIME=30
*NODE
@HOSTNAME@  TMAXDIR = "@TMAXDIR@", APPDIR = "@TMAXDIR@/appbin",
            PATHDIR = "@TMAXDIR@/path", TMMOPT = "-h i",
            SMSUPPORT = Y, SMTBLSIZE = 1000
*SVRGROUP
svg1        NODENAME = "@HOSTNAME@"
*SERVER
evtsvr      SVGNAME = svg1, SVRTYPE = EVT_SVR
<Server Program>
#include <stdio.h>
#include <stdlib.h>
#include <usrinc/atmi.h>
#include <time.h>

int tpsvrinit(int argc, char *argv[])
{
    printf("[EVTHND] started\n");
    return 1;
}

int svrdone()
{
    printf("[EVTHND] stopped\n");
    return 1;
}

int _tmax_event_handler(char *program, int pid, int tid, char *msg, int flags)
{
    time_t t1;
    struct tm *tm;

    time(&t1);
    tm = localtime(&t1);
    printf("[EVTHND] %s.%d.%02d%02d%02d:%s\n", program, pid,
           tm->tm_hour, tm->tm_min, tm->tm_sec, msg);
    return 0;
}
<Makefile.evt>
# Server makefile
TARGET  = $(COMP_TARGET)
APOBJS  = $(TARGET).o
NSDLOBJ = $(TMAXDIR)/lib64/sdl.o
LIBS    = -lsvrevt -lnodb
OBJS    = $(APOBJS) $(SVCTOBJ)
SVCTOBJ = $(TARGET)_svctab.o
CFLAGS  = -Ae +DA2.0W +DD64 +DS2.0 -O -I$(TMAXDIR)
APPDIR  = $(TMAXDIR)/appbin
SVCTDIR = $(TMAXDIR)/svct
LIBDIR  = $(TMAXDIR)/lib64

.SUFFIXES : .c
.c.o:
	$(CC) $(CFLAGS) -c $<

#
# server compile
#
$(TARGET): $(OBJS)
	$(CC) $(CFLAGS) -L$(LIBDIR) -o $(TARGET) $(OBJS) $(LIBS) $(NSDLOBJ)
	mv $(TARGET) $(APPDIR)/.
	rm -f $(OBJS)

$(APOBJS): $(TARGET).c
	$(CC) $(CFLAGS) -c $(TARGET).c

$(SVCTOBJ):
	cp -f $(SVCTDIR)/$(TARGET)_svctab.c .
	touch ./$(TARGET)_svctab.c
	$(CC) $(CFLAGS) -c ./$(TARGET)_svctab.c

clean:
	-rm -f *.o core $(APPDIR)/$(TARGET)
Sysmaster Trace
GID Structure (12 bytes)
- GID0 (4 bytes)
  A unique number for each client within the product (the cli id in the case of WebtoB). It distinguishes clients accessing the product using the domain id, node id, hth #, slot id, etc.
- GID1 (4 bytes)
  Divided into two parts: the upper 3 bytes are the seq # and the lower 1 byte is the product's unique ID.
- SEQNO (4 bytes)
  The upper 2 bytes are used as the branch # of asynchronous calls, and the lower 2 bytes as the seq # of all calls.
Add configuration file
*NODE
nodename    SMSUPPORT = Y | (N)
            SMTBLSIZE = num
- SMSUPPORT = Y | N
  Selects whether to support the SysMaster Trace function. If Y, tracing is supported; if N, it is not.
- SMTBLSIZE = num
  The maximum number of SysMaster traces to store per CLH. (Default: 50000)
Add Tmadmin features (smtrc)
- How to use
  $$ node0 (tmadm) : smtrc [-a] GID0 GID1
  Item | Description
  ---|---
  -a | Displays all information, including server process index (spri), user CPU usage, system CPU usage, and return information, in addition to the existing information.
  GID0 | The upper 4 bytes of SysMaster's GID, in hexadecimal.
  GID1 | The lower 4 bytes of SysMaster's GID, in hexadecimal.
When you run 'st -p -x', the GID of each running service is printed.
- Example

$$1 tmaxh4 (tmadm): st -p -x
CLH 0:
--------------------------------------------------------------
svr_name  svgname  spr_no  status  count  avg    svc          PID    fail_cnt  err_cnt  min_time  max_time  SysMaster_GID
--------------------------------------------------------------
evtsvr    svg1     36      RDY     0      0.000  -1           17980  0         0        0.000     0.000     00000000-00000000-00000000
svr1      svg1     37      RUN     0      0.000  SDLTOUPPER   17981  0         0        0.000     0.000     00000000-00000101-00000000
svr2      svg1     38      RUN     0      0.000  SDLTOUPPER2  17982  0         0        0.000     0.000     00000000-00080101-00000000
svr3      svg1     39      RUN     0      0.000  SDLTOUPPER3  17983  0         0        0.000     0.000     00000000-00100101-00000000
svr_sys   svg1     40      RDY     3      0.000  -1           17984  0         0        0.000     0.000     00000000-00000000-00000000
----------------------------------------------------------------
TOTAL COUNT = 3    TOTAL SVCFAIL COUNT = 0    TOTAL ERROR COUNT = 0
TOTAL AVG = 0.000  TOTAL RUNNING COUNT = 3

$$1 tmaxh4 (tmadm): smtrc 0 0101
CLH 0:
-------------------------------------------------------------
sysmaster_global_id         status       svc_name
-------------------------------------------------------------
00000000:00000101:00000000  SVC_RUNNING  SDLTOUPPER2

$$1 tmaxi4 (tmadm): smtrc -a 0 0101
CLH 0:
----------------------------------------------------------------
sysmaster_global_id         status       svc_name     ctime         svctime  spri  ucpu   scpu
----------------------------------------------------------------
00000000:00000101:00000000  SVC_RUNNING  SDLTOUPPER2  14:05:32:159  0.000    38    0.000  0.000
00000000:00000101:00010000  SVC_RUNNING  SMTRACE      14:05:32:159  0.000    40    0.000  0.000
00000000:00000101:00010000  SVC_DONE     SMTRACE      14:05:32:159  0.000    40    0.000  0.000
Add tmadmin API (server library)
#include <usrinc/tmadmin.h>

/* TMADM_SMTRC return structures */
struct tmadm_smtrc_body {
    int  seqno;
    int  clhno;
    char status[TMAX_NAME_SIZE];
    char name[TMAX_NAME_SIZE];
};

struct tmadm_smtrc {
    /* fixed header */
    struct tmadm_header header;
    /* fixed body */
    struct tmadm_smtrc_body trc[1];
};

/* TMADM_SMTRC with TMADM_AFLAG return structures */
struct tmadm_smtrcall_body {
    int  seqno;
    int  clhno;
    char status[TMAX_NAME_SIZE];
    char name[TMAX_NAME_SIZE];
    int  spri;
    int  reserved;
    struct timeval curtime;
    struct timeval svctime;
    struct timeval ucputime;
    struct timeval scputime;
};

struct tmadm_smtrcall {
    /* fixed header */
    struct tmadm_header header;
    /* fixed body */
    struct tmadm_smtrcall_body trc[1];
};

typedef struct {
    int gid1;
    int gid2;
    int seqno;
} tmax_smgid_t;

int tmadmin(int cmd, void *arg, int opt, long flags);
- TMADM_SMTRC cmd added
  - If the service is running, prints its GID.
  - Does not support retrieving partial results (offset not supported).
  - Must be called after allocating enough space to hold the results.
- flags
  - TPNOFLAGS
    Uses the tmadm_smtrc structure.
  - TMADM_AFLAG
    Uses the tmadm_smtrcall structure. In addition to the existing information, it provides spri, reserved, curtime, svctime, ucputime, and scputime.
  - opt is not used.
- tmgetsmgid
The tmgetsmgid() function retrieves the current gid.
- Prototype
  #include <usrinc/tmadmin.h>
  int tmgetsmgid(tmax_smgid_t *gid);
- Example
  <svr02.c : Using TPNOFLAGS>
#include <stdio.h>
#include <stdlib.h>
#include <usrinc/atmi.h>
#include <usrinc/tmadmin.h>
#include "../sdl/demo.s"

GETGID(TPSVCINFO *msg)
{
    tmax_smgid_t smgid;
    int ret;
    char *buf;

    buf = (char *)tpalloc("STRING", NULL, 0);
    if (buf == NULL)
        tpreturn(TPFAIL, -1, NULL, 0, 0);

    ret = tmgetsmgid(&smgid);
    memcpy(buf, (char *)&smgid.gid1, 4);
    memcpy(buf + 4, (char *)&smgid.gid2, 4);
    memcpy(buf + 8, (char *)&smgid.seqno, 4);

    tpreturn(TPSUCCESS, 0, (char *)buf, strlen(buf), 0);
}

SMTRACE(TPSVCINFO *msg)
{
    struct tmadm_smtrc *smtrc;
    int max = 10, size;
    int gid1, gid2, n, i;
    struct smtrace *ptr;
    char *buf;

    buf = (char *)tpalloc("CARRAY", NULL, 0);
    if (buf == NULL)
        tpreturn(TPFAIL, -1, NULL, 0, 0);

    ptr = (struct smtrace *)msg->data;
    gid1 = ptr->gid1;
    gid2 = ptr->gid2;

    size = sizeof(struct tmadm_smtrc) + (max - 1) * sizeof(struct tmadm_smtrc_body);
    smtrc = (struct tmadm_smtrc *)malloc(size);
    if (smtrc == NULL) {
        printf("smtrc is null\n");
        tpreturn(TPFAIL, -1, NULL, 0, 0);
    }
    memset(smtrc, 0x00, size);

    smtrc->header.version = _TMADMIN_VERSION;
    smtrc->header.size = size;
    smtrc->header.reserve_int[0] = gid1;
    smtrc->header.reserve_int[1] = gid2;

    n = tmadmin(TMADM_SMTRC, smtrc, TPNOFLAGS, TPNOFLAGS);
    if (n < 0) {
        free(smtrc);
        tpreturn(TPFAIL, -1, NULL, 0, 0);
    }

    for (i = 0; i < smtrc->header.num_entry; i++) {
        sprintf(buf, "SMTRACE[%d] : gid[%x-%x-%x] seqno[%x] clhno[%x] status[%s] name[%s]\n",
                i, gid1, gid2, ptr->seqno, smtrc->trc[i].seqno, smtrc->trc[i].clhno,
                smtrc->trc[i].status, smtrc->trc[i].name);
    }

    free(smtrc);
    tpreturn(TPSUCCESS, 0, (char *)buf, strlen(buf), 0);
}
<svr02_a.c : Use TMADM_AFLAG>
#include <stdio.h>
#include <stdlib.h>
#include <usrinc/atmi.h>
#include <usrinc/tmadmin.h>
#include "../sdl/demo.s"

GETGID_A(TPSVCINFO *msg)
{
    tmax_smgid_t smgid;
    int ret;
    char *buf;

    buf = (char *)tpalloc("STRING", NULL, 0);
    if (buf == NULL)
        tpreturn(TPFAIL, -1, NULL, 0, 0);

    ret = tmgetsmgid(&smgid);
    memcpy(buf, (char *)&smgid.gid1, 4);
    memcpy(buf + 4, (char *)&smgid.gid2, 4);
    memcpy(buf + 8, (char *)&smgid.seqno, 4);

    tpreturn(TPSUCCESS, 0, (char *)buf, strlen(buf), 0);
}

SMTRACE_A(TPSVCINFO *msg)
{
    struct tmadm_smtrcall *smtrcall;
    int max = 10, size;
    int gid1, gid2, n, i;
    struct smtrace *ptr;
    char *buf;

    buf = (char *)tpalloc("CARRAY", NULL, 0);
    if (buf == NULL)
        tpreturn(TPFAIL, -1, NULL, 0, 0);

    ptr = (struct smtrace *)msg->data;
    gid1 = ptr->gid1;
    gid2 = ptr->gid2;

    size = sizeof(struct tmadm_smtrcall) + (max - 1) * sizeof(struct tmadm_smtrcall_body);
    smtrcall = (struct tmadm_smtrcall *)malloc(size);
    if (smtrcall == NULL) {
        printf("smtrcall is null\n");
        tpreturn(TPFAIL, -1, NULL, 0, 0);
    }
    memset(smtrcall, 0x00, size);

    smtrcall->header.version = _TMADMIN_VERSION;
    smtrcall->header.size = size;
    smtrcall->header.reserve_int[0] = gid1;
    smtrcall->header.reserve_int[1] = gid2;

    n = tmadmin(TMADM_SMTRC, smtrcall, TMADM_AFLAG, TMADM_AFLAG);
    if (n < 0) {
        free(smtrcall);
        tpreturn(TPFAIL, -1, NULL, 0, 0);
    }

    printf("smtrcall->header.num_entry = %d\n", smtrcall->header.num_entry);
    printf("smtrcall->header.num_left = %d\n", smtrcall->header.num_left);

    n = 0;
    for (i = 0; i < smtrcall->header.num_entry + smtrcall->header.num_left; i++) {
        sprintf(buf + n, "SMTRACE[%d] : gid[%x-%x-%x] seqno[%x] clhno[%x] status[%s] name[%s] spri[%d] curtime[%ld], svctime[%ld], ucputime[%ld], scputime[%ld]\n",
                i, gid1, gid2, ptr->seqno, smtrcall->trc[i].seqno, smtrcall->trc[i].clhno,
                smtrcall->trc[i].status, smtrcall->trc[i].name, smtrcall->trc[i].spri,
                smtrcall->trc[i].curtime.tv_sec, smtrcall->trc[i].svctime.tv_sec,
                smtrcall->trc[i].ucputime.tv_sec, smtrcall->trc[i].scputime.tv_sec);
        n += strlen(buf + n);
    }

    free(smtrcall);
    tpreturn(TPSUCCESS, 0, (char *)buf, strlen(buf), 0);
}
<svr01.c>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <usrinc/atmi.h>
#include <usrinc/tmadmin.h>
#include "../sdl/demo_sdl.h"

SDLTOUPPER(TPSVCINFO *msg)
{
    int i, ret, cd;
    struct smtrace *stdata;
    tmax_smgid_t smgid;
    char *buf;
    long rcvlen;

    buf = (char *)tpalloc("CARRAY", NULL, 0);
    if (buf == NULL)
        tpreturn(TPFAIL, -1, NULL, 0, 0);

    ret = tmgetsmgid(&smgid);
    if (ret < 0)
        tpreturn(TPFAIL, -1, NULL, 0, 0);

    stdata = (struct smtrace *)msg->data;
    stdata->gid1 = smgid.gid1;
    stdata->gid2 = smgid.gid2;
    stdata->seqno = smgid.seqno;

    cd = tpacall("SMTRACE", msg->data, 0, 0);
    /* When using TMADM_AFLAG */
    /* cd = tpacall("SMTRACE_A", msg->data, 0, 0); */

    ret = tpgetrply(&cd, (char **)&buf, (long *)&rcvlen, 0);
    if (ret < 0)
        tpreturn(TPFAIL, -1, NULL, 0, 0);

    sleep(20);
    tpreturn(TPSUCCESS, 0, (char *)buf, strlen(buf), 0);
}
<client.c>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <usrinc/atmi.h>
#include <usrinc/tmadmin.h>
#include "../sdl/demo.s"

main(int argc, char *argv[])
{
    struct smtrace *sndbuf, *rcvbuf;
    long rcvlen, sndlen;
    int ret;

    if (tpstart((TPSTART_T *)NULL) == -1) {
        printf("tpstart failed\n");
        exit(1);
    }
    ...
    if (tpcall("SDLTOUPPER", (char *)sndbuf, 0, (char **)&rcvbuf, &rcvlen, 0) == -1) {
        printf("Can't send request to service SDLTOUPPER =>\n");
        tpfree((char *)sndbuf);
        tpfree((char *)rcvbuf);
        tpend();
        exit(1);
    }
    printf("rcvbuf = %s\n", rcvbuf);

    tpfree((char *)sndbuf);
    tpfree((char *)rcvbuf);
    tpend();
}
Trace logging feature
If you specify SMLOGSVC and SMLOGINT in the node section, you can log trace information by periodically calling the specified service.
Configuration
node-name [SMLOGSVC = service-name] [SMLOGINT = interval-time-value]
-
SMLOGSVC
Specifies the name of the service responsible for logging. If not specified, the trace logging function does not work.
-
SMLOGINT
Specifies the logging interval, in seconds. If not specified, the default is 30 seconds.
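A minimal NODE-section sketch combining both entries (the node name, path, and values below are illustrative):

```
*NODE
nodeA    TMAXDIR = "/home/tmax",
         SMLOGSVC = "SMLOGSERVICE",
         SMLOGINT = 10
```

With this setting, the service SMLOGSERVICE is called every 10 seconds to log trace information.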
Logging information structure
<usrinc/tmadmin.h>
/* SysMaster Trace Log structure */
typedef struct {
    tmax_smgid_t   gid;
    int            clhno;
    char           status[TMAX_NAME_SIZE];
    char           name[TMAX_NAME_SIZE];
    int            spri;
    int            reserved;
    struct timeval curtime;
    struct timeval svctime;
    struct timeval ucputime;
    struct timeval scputime;
} tmax_smtrclog_t;
tmget_smtrclog_count
The tmget_smtrclog_count() function returns the number of entries currently logged.
-
Prototype
#include <usrinc/tmadmin.h>
int tmget_smtrclog_count(void *handle)
-
Parameters
Parameter | Description |
---|---|
handle | A pointer to the data received from the logging service (msg->data). |
-
Return value
Return value | Description |
---|---|
Number of logged entries | The function call succeeded. |
-1 | The function call failed. (tperrno is set to a value corresponding to the error situation.) |
tmget_smtrclog
The tmget_smtrclog() function retrieves the logged data into a structure buffer.
-
Prototype
#include <usrinc/tmadmin.h>
int tmget_smtrclog(void *handle, tmax_smtrclog_t *buf, int *count)
-
Parameters
Parameter | Description |
---|---|
handle | A pointer to the data received from the logging service (msg->data). |
buf | A buffer for retrieving the logging data. |
count | On input, the maximum number of entries the buffer can hold; on output, the number of entries actually stored. |
-
Return value
Return value | Description |
---|---|
1 | The function call succeeded. |
-1 | The function call failed. (tperrno is set to a value corresponding to the error situation.) |
-
Example
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <usrinc/atmi.h>
#include <usrinc/tmadmin.h>

SMLOGSERVICE(TPSVCINFO *msg)
{
    tmax_smtrclog_t *smtrclog;
    int ret, count = 0, i;
    char *buf;

    smtrclog = (tmax_smtrclog_t *)tpalloc("CARRAY", NULL, 1024);
    if (smtrclog == NULL) {
        printf("smtrclog tpalloc fail [%s]\n", tpstrerror(tperrno));
        tpreturn(TPFAIL, -1, NULL, 0, 0);
    }
    buf = (char *)tpalloc("STRING", NULL, 1024);
    if (buf == NULL)
        tpreturn(TPFAIL, -1, NULL, 0, 0);
    memset(buf, 0x00, 1024);
    memset(smtrclog, 0x00, 1024);

    count = tmget_smtrclog_count((char *)msg->data);
    printf("\n###########################\n\n");
    printf("tmget_smtrclog_count = %d\n", count);

    count = 100;    /* maximum number of entries to retrieve into the buffer */
    ret = tmget_smtrclog(msg->data, smtrclog, &count);
    printf("count = %d\n", count);

    for (i = 0; i < count; i++) {
        printf("smtrclog[%d].gid = %d-%d-%d\n", i,
               smtrclog[i].gid.gid1, smtrclog[i].gid.gid2, smtrclog[i].gid.seqno);
        printf("smtrclog[%d].clhno = %d\n", i, smtrclog[i].clhno);
        printf("smtrclog[%d].status = %s\n", i, smtrclog[i].status);
        printf("smtrclog[%d].name = %s\n", i, smtrclog[i].name);
        printf("smtrclog[%d].spri = %d\n", i, smtrclog[i].spri);
        printf("\n");
    }
    printf("###########################\n\n");

    strcpy(buf, "success\n");
    tpreturn(TPSUCCESS, 0, (char *)buf, 0, 0);
}
1.8.10. UCS Writable FD Scheduling
tpissetfd_w
This function checks the FDSET of a UCS-style server process to determine whether there is data to send to the socket FD given as a parameter. While the tpissetfd() function checks for a readable FDSET, this function checks for a writable FDSET. It is used for scheduling external sockets for UCS-style processes.
-
Prototype
#include <ucs.h>
int tpissetfd_w(int fd)
-
Parameters
Parameter | Description |
---|---|
fd | The socket FD value to check in the writable FDSET. |
-
Return value
Return value | Description |
---|---|
1 | The function call succeeded. |
-1 | The function call failed. (tperrno is set to a value corresponding to the error situation.) |
-
Error

Error code | Description |
---|---|
[TPESYSTEM] | A Tmax system error occurred. Detailed information is recorded in the log file. |
[TPEOS] | An OS error occurred. |
tpsetfd_w
This function registers the socket FD given as a parameter in the FDSET of a UCS-style server process. While the tpsetfd() function registers in a Readable FDSET, this function registers in a Writable FDSET. It is used for scheduling external sockets in a UCS-style process.
-
Prototype
#include <ucs.h>
int tpsetfd_w(int fd)
-
Parameters
Parameter | Description |
---|---|
fd | The socket FD value to register in the writable FDSET. |
-
Return value
Return value | Description |
---|---|
1 | The function call succeeded. |
-1 | The function call failed. (tperrno is set to a value corresponding to the error situation.) |
-
Error
Error code | Description |
---|---|
[TPESYSTEM] | A Tmax system error occurred. Detailed information is recorded in the log file. |
[TPEOS] | An OS error occurred. |
tpclrfd_w
This function removes a socket FD given as a parameter from the FDSET of a UCS-mode server process. While the tpclrfd() function removes it from a readable FDSET, this function removes it from a writable FDSET. It is used for scheduling external sockets in a UCS-mode process.
-
Prototype
#include <ucs.h>
int tpclrfd_w(int fd)
-
Parameters
Parameter | Description |
---|---|
fd | The socket FD value to remove from the writable FDSET. |
-
Return value
Return value | Description |
---|---|
1 | The function call succeeded. |
-1 | The function call failed. (tperrno is set to a value corresponding to the error situation.) |
-
Error
Error code | Description |
---|---|
[TPESYSTEM] | A Tmax system error occurred. Detailed information is recorded in the log file. |
[TPEOS] | An OS error occurred. |
1.8.11. IP address-based access restriction feature
Starting with Tmax 4.0 SP3 Fix #8, you can control which clients are allowed or denied a connection based on their IP address.
Set static limits
Create the configuration files tmax.allow and tmax.deny listing the clients whose connections are to be allowed and denied, respectively. Based on these files, CLL decides whether to allow or deny access to TCP clients.
Configuring clients to be allowed
Create a file called tmax.allow in the $TMAXDIR/path directory and list the IP addresses of the clients allowed to connect.
192.168.1.43
192.168.1.48
Configuring clients to be denied
Create an access-denial file, tmax.deny, in the $TMAXDIR/path directory and list the IP addresses of the clients to be denied.
192.168.1.35
192.168.1.45
Applied rules
The applied rules of ACL (Access Control List) are as follows.
-
The tmax.allow file is searched first; if a matching ACL entry exists, access is allowed. The tmax.deny file is then searched; if a matching ACL entry exists, the connection is denied. If a client matching a deny entry attempts to connect (TPSTART), a TPECLOSE error occurs. If neither tmax.allow nor tmax.deny exists, access is allowed.
Syntax
-
If the first character is ‘#’, it is treated as a comment.
-
Only IP address or NETWORK/NETMASK method is allowed.
Example) 192.168.1.1 or 192.168.1.0/24
-
Only one ACL per line is allowed, and no spaces or tabs are allowed.
-
ALL is a reserved word meaning all IP addresses.
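For example, the following pair of files allows the 192.168.1.x subnet while denying all other addresses (the addresses shown are illustrative):

```
# tmax.allow : one entry per line, no spaces or tabs
192.168.1.0/24

# tmax.deny
ALL
```

Because tmax.allow is searched first, hosts matching it can connect even though tmax.deny contains ALL.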
Set dynamic limits
To dynamically set client IP addresses to allow/deny access, use the tmax_add_acl() function below.
This function can be used when you want to add IP addresses that are allowed/denied access during operation in addition to the list of client IP addresses set in the tmax.allow and tmax.deny files.
-
Prototype
#include <usrinc/tmaxapi.h>
int tmax_add_acl(int nodeno, char *ip, int mask, int mode, int flags)
-
Parameters
Parameter | Description |
---|---|
nodeno | The number of the node to which the ACL (Access Control List) is added. -1 means the ACL is set on the local node. |
ip | The IP address of the client to allow/deny, given as a string. |
mask | NETMASK length (1-32). If no netmask is used, set TMAX_ACL_IPADDR instead. |
mode | Set to TMAX_ACL_ALLOW or TMAX_ACL_DENY. TMAX_ACL_ALLOW adds the address to the allow list; TMAX_ACL_DENY adds it to the deny list. |
flags | Currently not used. |
-
Return value
Return value | Description |
---|---|
Greater than 0 | The function call succeeded. |
-1 | The function call failed. |
-
Example
<deny.m>
*DOMAIN
tmax1       SHMKEY = @SHMEMKY@, TPORTNO = @TPORTNO@, RACPORT = @TRACPORT@

*NODE
@HOSTNAME@  TMAXDIR = "@TMAXDIR@", APPDIR = "@TMAXDIR@/appbin"
@RMTNAME@   TMAXDIR = "@RMTDIR@", APPDIR = "@RMTDIR@/appbin"

*SVRGROUP
svg1        NODENAME = "@HOSTNAME@"
svg2        NODENAME = "@HOSTNAME@", COUSIN = "svg3"
svg3        NODENAME = "@RMTNAME@"

*SERVER
svr_addlist SVGNAME = svg1
svr2        SVGNAME = svg2

*SERVICE
TOUPPER     SVRNAME = svr2
ALLOW_IP    SVRNAME = svr_addlist
DENY_IP     SVRNAME = svr_addlist
<svr_addlist.c>
#include <stdio.h>
#include <arpa/inet.h>
#include <usrinc/atmi.h>
#include <usrinc/tmaxapi.h>

ALLOW_IP(TPSVCINFO *msg)
{
    struct in_addr in;
    int ip, port, nodeno;
    int ret;

    printf("\nALLOW_IP Service is started.\n");
    printcliinfo();
    getcliinfo(&ip, &port, &nodeno);
    in.s_addr = ip;

    /* Configure ACL (IP address to be allowed to connect) on node #1 */
    ret = tmax_add_acl(1, msg->data, TMAX_ACL_IPADDR, TMAX_ACL_ALLOW, 0);
    if (ret < 0) {
        printf("tmax_add_acl is failed.\n");
        tpreturn(TPFAIL, 0, (char *)msg->data, 0, 0);
    }
    tpreturn(TPSUCCESS, 0, (char *)msg->data, 0, 0);
}

DENY_IP(TPSVCINFO *msg)
{
    struct in_addr in;
    int ip, port, nodeno;
    int ret;

    printf("\nDENY_IP Service is started.\n");
    printcliinfo();
    getcliinfo(&ip, &port, &nodeno);
    in.s_addr = ip;

    /* Configure ACL (IP address to be denied to connect) on node #1 */
    ret = tmax_add_acl(1, msg->data, TMAX_ACL_IPADDR, TMAX_ACL_DENY, 0);
    if (ret < 0) {
        printf("tmax_add_acl is failed.\n");
        tpreturn(TPFAIL, 0, (char *)msg->data, 0, 0);
    }
    tpreturn(TPSUCCESS, 0, (char *)msg->data, 0, 0);
}
<cli_acladd.c>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <usrinc/atmi.h>

main(int argc, char *argv[])
{
    char *sndbuf, *rcvbuf;
    long rcvlen, sndlen;
    int ret, cd;

    if (argc != 2) {
        printf("Usage: [%s] IP_ADDR\n", argv[0]);
        exit(1);
    }
    if ((ret = tmaxreadenv("tmax.env", "TMAX")) == -1) {
        printf("tmax read env failed\n");
        exit(1);
    }
    if (tpstart((TPSTART_T *)NULL) == -1) {
        printf("tpstart failed. [%s]\n", tpstrerror(tperrno));
        exit(1);
    }
    if ((sndbuf = (char *)tpalloc("STRING", NULL, 0)) == NULL) {
        printf("sendbuf alloc failed !\n");
        tpend();
        exit(1);
    }
    if ((rcvbuf = (char *)tpalloc("STRING", NULL, 0)) == NULL) {
        printf("recvbuf alloc failed !\n");
        tpfree((char *)sndbuf);
        tpend();
        exit(1);
    }

    /* Configure the IP address that will be denied to connect. */
    strcpy(sndbuf, argv[1]);
    printf("client is calling DENY_IP.\n");
    cd = tpcall("DENY_IP", sndbuf, 0, rcvbuf, &rcvlen, 0);
    if (cd < 0) {
        printf("tpcall is failed[%s]\n", tpstrerror(tperrno));
        tpfree((char *)sndbuf);
        tpfree((char *)rcvbuf);
        tpend();
        exit(1);
    }

    tpfree((char *)sndbuf);
    tpfree((char *)rcvbuf);
    tpend();
}
-
Result
client > $ cli_acladd 192.168.1.43
Any connection attempt (TPSTART) to node2 from IP 192.168.1.43 results in a TPECLOSE error.
1.8.12. Forced disconnection of abnormally connected clients
If an abnormally connected client stays connected to CLH indefinitely, other clients may be unable to connect. Therefore, in the current version, CLH automatically terminates the connection if a client does not send a TPSTART connection message within 60 seconds of establishing the socket connection.
The following log is output when the connection is terminated:
(I) CLH0209 internal error : disconnect client because client didn't send tpstart msg for 60 sec.(192.168.1.43) [CLH0058]
1.8.13. Added TMS Recovery related logs
When using transaction recovery in TMS, an INFO log is output when transaction recovery starts or completes. (Service codes: TMS0221, TMS0222, TMS0223)
The output log is as follows.
(I) TMS0211 General Infomation : transaction recovery will be started [TMS0221]
(I) TMS0211 General Infomation : transaction recovery was completed [TMS0222]
(I) TMS0211 General Infomation : transaction recovery was completed [TMS0223]
1.9. Tuxedo Gateway
1.9.1. Added Tuxedo Async Gateway
Tuxedo Gateway Overview
Tuxedo Gateway is a gateway for two-way communication between Tmax and Tuxedo, and communicates according to Tuxedo’s domain gateway communication method.
-
Request from Tmax to Tuxedo
When a request message arrives, Tmax’s Tuxedo gateway converts the xid received from Tmax’s CLH to Tuxedo’s xid and then sends it to Tuxedo’s domain gateway along with the request message.
Afterwards, when an xa_prepare or xa_rollback/xa_commit message comes from CLH, it sends the message to Tuxedo’s domain gateway by referencing the previously converted xid.
-
Request from Tuxedo to Tmax
When a request message comes from Tuxedo’s domain gateway, Tmax’s Tuxedo gateway converts the xid received from Tuxedo to match Tmax’s xid and then sends the message to CLH.
[Figure: message flow between a Tmax client, the Tmax system (SVC), Tmax's Tuxedo gateway, and the Tuxedo system's domain gateway and SVC, in XA or non-XA mode]

After that, when an xa_prepare or xa_rollback/xa_commit request message comes from Tuxedo, the message is sent to CLH by referencing the previously converted xid.
Tmax settings
<GATEWAY section>
*GATEWAY
GW-name    NODENAME = "nodename",
           GWTYPE = TUXEDO | TUXEDO_ASYNC,
           PORTNO = port-number,
           RGWADDR = "Tuxedo-domaingw-ipaddr",
           RGWPORTNO = Tuxedo-domaingw-portno,
           CLOPT = "string",
           BACKUP_RGWADDR = "Backup-Tuxedo-ipaddr",
           BACKUP_RGWPORTNO = Backup-Tuxedo-domaingw-portno,
           BACKUP_RGWADDR2 = "Backup-Tuxedo-ipaddr2",
           BACKUP_RGWPORTNO2 = Backup-Tuxedo-domaingw-portno2,
           BACKUP_RGWADDR3 = "Backup-Tuxedo-ipaddr3",
           BACKUP_RGWPORTNO3 = Backup-Tuxedo-domaingw-portno3
-
GWTYPE = string
To communicate with Tuxedo’s domain gateway, you must specify TUXEDO or TUXEDO_ASYNC.
-
TUXEDO: This is a TYPE that existed in the previous version and communicates with Tuxedo through a synchronous channel.
-
TUXEDO_ASYNC: A new type added in 4.0 SP3 Fix#8 that communicates through an asynchronous channel.
-
-
PORTNO = numeric
This is the Listen Port used by the local Tuxedo gateway process.
-
RGWADDR = literal
Registers the IP address or node name of the node on which the target Tuxedo domain gateway process, to which Tmax's Tuxedo gateway connects, is running.
-
RGWPORTNO = numeric
Registers the port number on which the target Tuxedo domain gateway process is listening.
-
BACKUP_RGWADDR = literal
Specifies the IP address of Tuxedo to which the local Tuxedo gateway will connect for backup purposes. If the Tuxedo specified by RGWADDR fails, it will attempt to connect to the node specified in that entry.
Up to three backup Tuxedo gateways can be specified, using BACKUP_RGWADDR2/BACKUP_RGWPORTNO2 and BACKUP_RGWADDR3/BACKUP_RGWPORTNO3.
-
BACKUP_RGWPORTNO = numeric
Specifies the port number of Tuxedo’s domain gateway to which the local Tuxedo gateway will connect for backup. If the Tuxedo specified by RGWADDR-RGWPORTNO fails, it will attempt to connect to the node specified in the corresponding entry (BACKUP_RGWADDR-BACKUP_RGWPORTNO).
-
CLOPT = literal
-
-a value
Sets the domain ID value transmitted when connecting to Tuxedo. This must be a name defined in Tuxedo’s configuration file.
-
-r value
Sets the Tuxedo domain ID value transmitted when connecting from Tuxedo to Tmax. This option is used to verify and authenticate Tuxedo's domain ID. If not specified, Tuxedo's domain ID is not authenticated. If a gateway whose domain ID does not match this value attempts to connect, the following error occurs and the connection is denied.
(I) GATEWAY0046 socket connect error : incorrent remote domain name [TUXGW0416]
-
Example
<Tmax configuration file>
*DOMAIN
dom1    SHMKEY = 78350, MINCLH = 1, MAXCLH = 1, TPORTNO = 8350, BLOCKTIME = 60

*NODE
tmaxh4  TMAXDIR = "/data1/tmaxqam/tmax",
        APPDIR = "@TMAXDIR@/appbin"

*SVRGROUP
svg1X   NODENAME = "tmaxh4", DBNAME = ORACLE,
        OPENINFO = "Oracle_XA+Acc=P/scott/tiger+SesTm=60+DbgFl=0x01+LogDir=@TMAXDIR@/log/xalog",
        TMSNAME = tms_ora

*SERVER
txsvr   SVGNAME = svg1X
txsvr2  SVGNAME = svg1X

*SERVICE
TMAX_INSERT SVRNAME = txsvr
TUX_INSERT  SVRNAME = TUXGW

*GATEWAY
TUXGW   GWTYPE = TUXEDO_ASYNC,
        PORTNO = 9521,
        RGWADDR = "192.168.1.35", RGWPORTNO = 9311,
        BACKUP_RGWADDR = "192.168.1.35", BACKUP_RGWPORTNO = 9011,
        BACKUP_RGWADDR2 = "192.168.1.35", BACKUP_RGWPORTNO2 = 9711,
        NODENAME = @HOSTNAME@,
        CLOPT = "-a TMXDOM -S AA -rTUXDOM",
        TIMEOUT = 30, CPC = 1
<Tuxedo configuration file>
*DM_RESOURCES
VERSION=U22

*DM_LOCAL_DOMAINS
TUXDOM GWGRP=DOM_GW1
       TYPE=TDOMAIN
       #Matches CLOPT="-rTUXDOM" specified in the Tmax GATEWAY section
       DOMAINID="TUXDOM"
       BLOCKTIME=3000
       MAXDATALEN=56
       MAXRDOM=89
       SECURITY=NONE
       BLOB_SHM_SIZE=100000
       DMTLOGDEV="/data1/tmaxqas/bea/tuxedo9.1/samples/atmi/dgw/DMTLOG"
       AUDITLOG="/data1/tmaxqas/bea/tuxedo9.1/samples/atmi/dgw/AUDITLOG"
       DMTLOGNAME="DMTLOG"
       CONNECTION_POLICY=ON_STARTUP

*DM_REMOTE_DOMAINS
TMXDOM TYPE=TDOMAIN
       DOMAINID="TMXDOM"

*DM_TDOMAIN
#Tuxedo's domain IP address/port: matches RGWADDR, RGWPORTNO of the Tmax GATEWAY section
TUXDOM NWADDR="//192.168.1.35:9311"
       CONNECTION_POLICY=ON_STARTUP
#Tmax's Tuxedo gateway IP and port
TMXDOM NWADDR="//192.168.1.42:9521"
       CONNECTION_POLICY=ON_STARTUP

*DM_LOCAL_SERVICES
TUX_INSERT

*DM_REMOTE_SERVICES
TMAX_INSERT2
1.10. Java Gateway
1.10.1. Async Java Gateway
-
Async Java Gateway
Tmax provides an Async Java gateway to enable clients or servers to call Java application services without CPC restrictions.
-
WebTASync library
Provides the WebTASync library, which allows external applications to make asynchronous service requests to Tmax.
For more information about the WebTASync library, refer to Tmax WebTAsync User Guide.
1.10.2. Added -m option to JAVAGW’s CLOPT section
When WebT issues a tpacall or tpcall to JEUSGW or JEUSGWA, by default up to 500 request messages can be sent from CLH to the gateway before responses are received.
A feature has been added that lets users adjust this default of 500. For example, adding "-m 1000" to the CLOPT section raises the limit to 1000.
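As a sketch, the gateway entry would look like the following (the entry name, GWTYPE value, and other parameters are illustrative; only the "-m" option is the point of this example):

```
*GATEWAY
jeusgw    GWTYPE = JEUS,
          PORTNO = 9999,
          NODENAME = "node1",
          CLOPT = "-m 1000"
```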
1.11. JTC (Jeus-Tuxedo Connector)
1.11.1. Added JTC interactive communication features
Added API
The API’s usage is identical to WebT’s interactive communication. For more details, see WebT Interactive Communication.
-
tpconnect
TuxCallDescripter tpconnect(boolean recvNext)
TuxCallDescripter tpconnect(TuxBuffer sndbuf, boolean recvNext)
TuxCallDescripter tpconnect(WebtAttribute attr, boolean recvNext)
TuxCallDescripter tpconnect(TuxBuffer sndbuf, WebtAttribute attr, boolean recvNext)
-
tpsend
void tpsend(TuxCallDescripter cd, TuxBuffer tx, boolean recvNext)
void tpsend(TuxCallDescripter cd, TuxBuffer tx, WebtAttribute attr, boolean recvNext)
-
tprecv
TuxBuffer tprecv(TuxCallDescripter cd) throws WebtIOException, WebtServiceException, WebtDialogueException
TuxBuffer tprecv(TuxCallDescripter cd, WebtAttribute attr)
boolean isSendNext()
boolean isRecvNext()
void tpdiscon(TuxCallDescripter cd) throws WebtIOException, WebtServiceException
webt.properties settings
log.level=debug
log.dir=D:\\tmax
log.file=jtc.log
defaultCharset=euc-kr
tux.remote.name.list=TUXGW1
tux.local.name=TUXGW2
tux.buffer.size=4096
tux.default.timeout=1000
tux.default.txtimeout=30
tux.default.readtimeout=0
tux.TUXGW1.addr=192.168.1.43
tux.TUXGW1.port=9111
tux.TUXGW1.interval=1
tux.TUXGW1.svc=TOUPPER_CONV
Example
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import tmax.jtc.*;
import tmax.jtc.io.*;
import tmax.webt.*;

public class JtcTest2 extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head>");
        String title = "Request Information APIs";
        out.println("<title>" + title + "</title>");
        out.println("</head>");
        out.println("<body bgcolor=\"white\">");

        TuxService service = new TuxService("TMXDOM", "TOUPPER_CONVN");
        TuxBuffer sndbuf = service.createStringBuffer();
        sndbuf.setString("conversation service");
        try {
            TuxCallDescripter cd = service.tpconnect(false);
            if (cd == null) {
                out.println("tpconnect failed!");
            } else {
                for (int i = 0; i < 10; i++)
                    service.tpsend(cd, sndbuf, false);
                service.tpsend(cd, sndbuf, true);

                WebtAttribute attr = new WebtAttribute();
                attr.setTPNOTIME(true);
                while (service.isRecvNext()) {
                    TuxBuffer rcvbuf = service.tprecv(cd, attr);
                    out.println(" : " + rcvbuf.getString());
                }
                /* Iterate */
                if (service.isSendNext())
                    service.tpsend(cd, sndbuf, true);
                if (service.isRecvNext()) {
                    TuxBuffer rcvbuf = service.tprecv(cd, attr);
                    out.println(" : " + rcvbuf.getString());
                }
            }
        } catch (WebtException wie) {
            wie.printStackTrace(System.out);
            Throwable t = wie.getRootCause();
            if (t != null)
                t.printStackTrace(System.out);
        }
    }
}
1.11.2. Added JTC Async Listener structure feature
Added the ability to receive responses to tpacall, xa_prepare, and xa_rollback in methods of the callback interface.
Added classes
-
class TuxAsyncService (the constructor is the same as TuxService)

public TuxCallDescripter tpacall(TuxBuffer input, WebtAttribute attr, TuxAsyncMsgListener listener) throws WebtException
TuxAsyncXAResource getXAResource()
-
class TuxAsyncXAResource

int prepare(Xid xid, TuxAsyncMsgListener listener) throws XAException
void commit(Xid xid, TuxAsyncMsgListener listener)
void rollback(Xid xid, TuxAsyncMsgListener listener)
Added interface
-
interface TuxAsyncMsgListener
void handleEvent(TuxBuffer rcvBuffer)
void handleError(Exception e)
Configuration
<webt.properties>
log.level=debug
log.dir=D:\\tmax
log.file=jtc.log
defaultCharset=euc-kr
tux.remote.name.list=TUXGW1
tux.local.name=TUXGW2
tux.buffer.size=4096
tux.default.timeout=1000
tux.default.txtimeout=30
tux.default.readtimeout=0
tux.TUXGW1.addr=192.168.1.43
tux.TUXGW1.port=9111
tux.TUXGW1.interval=1
tux.TUXGW1.svc=TOUPPER_CONV
<Tuxedo Settings>
*RESOURCES
IPCKEY       75350
DOMAINID     simpapp2
MASTER       simple
MAXACLGROUPS 50
MAXACCESSERS 1000
MAXSERVERS   20
MAXGROUPS    100
MAXSERVICES  40
MODEL        SHM

*MACHINES
DEFAULT:
    TUXDIR="/openframe/phk6254/bea/tuxedo9.1"
    APPDIR="/openframe/phk6254/bea/tuxedo9.1/samples/atmi/tmax50"
    TUXCONFIG="/openframe/phk6254/bea/tuxedo9.1/samples/atmi/tmax50/tuxconfig"
    ULOGPFX="/openframe/phk6254/bea/tuxedo9.1/samples/atmi/tmax50/ulog"
    TLOGDEVICE="/openframe/phk6254/bea/tuxedo9.1/log/tlog/TLOG"
    TLOGNAME=TLOG
tmaxi4 LMID=simple

*GROUPS
GROUP1 LMID=simple GRPNO=1 OPENINFO=NONE
GROUP2 LMID=simple GRPNO=2 OPENINFO=NONE
DOM_XA LMID=simple GRPNO=3 TMSNAME=tms_ora
       OPENINFO="Oracle_XA:Oracle_XA+Acc=P/scott/tiger+SesTm=60+DbgFl=0xff+LogDir=/openframe/phk6254/bea/tuxedo9.1/samples/atmi/"

*SERVERS
DEFAULT: CLOPT="-A -t -r -o svr.out -e svr.out"
DMADM     SRVGRP=GROUP1 SRVID=1
GWADM     SRVGRP=GROUP2 SRVID=1
GWTDOMAIN SRVGRP=GROUP2 SRVID=2 CLOPT="-A -t -o svr.out"
txsvr     SRVGRP=DOM_XA SRVID=1

*SERVICES
TUX_INSERT
<Tuxedo domain settings>
*DM_LOCAL_DOMAINS
TUXGW1 GWGRP=GROUP2
       TYPE=TDOMAIN
       DOMAINID="TUXGW1"
       BLOCKTIME=20
       CONNECTION_POLICY=INCOMING_ONLY
       #CONNECTION_POLICY=ON_STARTUP
       DMTLOGDEV="/openframe/phk6254/bea/tuxedo9.1/samples/atmi/dgw/DMTLOG"
       AUDITLOG="/openframe/phk6254/bea/tuxedo9.1/samples/atmi/dgw/AUDITLOG"
       DMTLOGNAME="DMTLOG_TUXGW1"

*DM_REMOTE_DOMAINS
TUXGW2 TYPE=TDOMAIN
       DOMAINID="TUXGW2"

*DM_TDOMAIN
TUXGW1 NWADDR="//192.168.1.35:9111"
TUXGW2 NWADDR="//192.168.15.241:9888"

*DM_LOCAL_SERVICES
TUX_INSERT

*DM_REMOTE_SERVICES
Example
Tuxedo server
<txsvr.pc>
#include <stdio.h>
#include <atmi.h>      /* TUXEDO Header File */
#include <userlog.h>   /* TUXEDO Header File */
#include <tx.h>
#include <fml32.h>
#include "Jtcfld.fld.h"

#define succ_str "Insert Success"
#define fail_str "Insert Failure"

EXEC SQL include SQLCA.H;

EXEC SQL begin declare section;
int  h_empno;
char h_ename[11];
char h_job[10];
EXEC SQL end declare section;

void TUX_INSERT(rqst)
TPSVCINFO *rqst;
{
    FBFR32 *sndbuf;
    char msgbuf[30];
    FLDLEN32 flen;

    sndbuf = (FBFR32 *)rqst->data;
    h_empno = 0;
    memset(h_ename, 0x00, sizeof(h_ename));
    memset(h_job, 0x00, sizeof(h_job));

    Fprint32(sndbuf);
    h_empno = 9999;
    Fget32(sndbuf, ENAME, 0, (char *)h_ename, &flen);
    Fget32(sndbuf, JOB, 0, (char *)h_job, &flen);
    printf("%s", h_ename);

    EXEC SQL INSERT INTO emp( empno, ename, job )
             VALUES ( :h_empno, :h_ename, :h_job );
    if (sqlca.sqlcode != 0) {
        printf("insert failed sqlcode = %d\n", sqlca.sqlcode);
        strcpy(msgbuf, fail_str);
        Fchg32(sndbuf, OUTPUT_STR, 0, msgbuf, 0);
        tpreturn(TPFAIL, -1, (char *)sndbuf, 0, 0);
    }
    strcpy(msgbuf, succ_str);
    printf("insert success\n");
    Fchg32(sndbuf, OUTPUT_STR, 0, msgbuf, 0);
    Fprint32(sndbuf);
    tpreturn(TPSUCCESS, 0, rqst->data, strlen(rqst->data), 0);
}
JTC client
<TuxAsyncXA.java>
package com.tmax.tuxedo.async;

import javax.transaction.xa.XAException;
import javax.transaction.xa.Xid;
import tmax.jtc.TuxAsyncMsgListener;
import tmax.jtc.TuxAsyncService;
import tmax.jtc.TuxAsyncXAResource;
import tmax.jtc.external.TuxBootstrapper;
import tmax.jtc.io.TuxFieldBuffer;
import com.tmax.handler.TuxedoMsgListener;
import com.tmax.util.Jtcfld;
import com.tmax.util.XAUtil;

public class TuxAsyncXA {
    public static void main(String[] args) {
        TuxBootstrapper boot = new TuxBootstrapper();
        boot.init("resource\\webt.properties");

        Xid xid = XAUtil.getUniqueXid();
        TuxAsyncService service = new TuxAsyncService("TUXGW1", "TUX_INSERT");
        try {
            service.getXAResource().start(xid, TuxAsyncXAResource.TMSUSPEND);
        } catch (XAException e) {
            e.printStackTrace();
        }
        try {
            TuxFieldBuffer sndbuf = new TuxFieldBuffer(true);
            int empno = 8080;
            String ename = "phk6254";
            String job = "tmaxqmc";
            sndbuf.createField(Jtcfld.EMPNO).add(empno);
            sndbuf.createField(Jtcfld.ENAME).add(ename);
            sndbuf.createField(Jtcfld.JOB).add(job);

            TuxAsyncMsgListener listener = new TuxedoMsgListener(service, xid);
            service.tpacall(sndbuf, null, listener);
            Thread.sleep(2000);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
<TuxPrepareMsgListener.java>
package com.tmax.handler;

import javax.transaction.xa.XAException;
import javax.transaction.xa.Xid;
import tmax.jtc.TuxAsyncMsgListener;
import tmax.jtc.TuxAsyncService;
import tmax.jtc.io.TuxBuffer;
import tmax.webt.WebtException;

public class TuxPrepareMsgListener implements TuxAsyncMsgListener {
    private TuxAsyncService service;
    private Xid xid;

    public TuxPrepareMsgListener(TuxAsyncService service, Xid xid) {
        this.service = service;
        this.xid = xid;
    }

    public void handleError(Exception e) {
        System.out.println("prepare failed > " + xid);
    }

    public void handleEvent(TuxBuffer rcvBuffer) {
        System.out.println("prepare success > " + xid);
        try {
            service.getXAResource().commit(xid, new TuxCommitMsgListener(service, xid));
        } catch (XAException e) {
            e.printStackTrace();
        }
    }
}
<TuxCommitMsgListener.java>
package com.tmax.handler;

import javax.transaction.xa.Xid;
import tmax.jtc.TuxAsyncMsgListener;
import tmax.jtc.TuxAsyncService;
import tmax.jtc.io.TuxBuffer;

public class TuxCommitMsgListener implements TuxAsyncMsgListener {
    private TuxAsyncService service;
    private Xid xid;

    public TuxCommitMsgListener(TuxAsyncService service, Xid xid) {
        this.service = service;
        this.xid = xid;
    }

    public void handleError(Exception e) {
        System.out.println("fail commit > " + xid);
    }

    public void handleEvent(TuxBuffer rcvBuffer) {
        System.out.println("commit success > " + xid);
        System.out.println(Thread.currentThread().getName() + " ] rcv " + rcvBuffer.toString());
    }
}
1.12. Web Services Gateway
For information on the web service gateway, refer to Tmax Gateway Guide(WebService).
1.13. TCP/IP gateway
1.13.1. Callback function
set_error_msg
This function is automatically called when an error such as a timeout or network disconnection occurs while exchanging messages with a remote node through the TCP/IP gateway. It allows users to modify the user header or user data.
-
Prototype
#include "custom.h"
int set_error_msg(msg_header_t *hp, int err, char *data, int len)
-
Return value
Return value | Description |
---|---|
Positive | Data of the returned length is transmitted. However, the CLOPT="-I" option must be set for this to apply. |
0 | Only the user header part is transmitted. |
Negative | The message is not sent to CLH. |
-
Example
int set_error_msg(msg_header_t *hp, int err, char *data, int len)
{
    msg_body_t *body;

    body = (msg_body_t *)data;
    strcpy(body->data, "changed hello data");
    /* Since error messages contain no data, use the -I option to send data.
       Without the -I option, only the user header is sent to the CLH. */
    /* If there is no user header, *hp and *data may have the same value. */
    strcpy(hp->retsvcname, "RECVSVC_CHANGE");
    return len;
}
The errors that cause the set_error_msg() function to be called are as follows:
Error code | Description |
---|---|
[TPECLOSE] | Disconnected before receiving a response after sending to the remote node. |
[TPETIME] | Timeout occurred after sending to the remote node. |
[TPEPROTO] | The tpforward API was used in a case other than ASYNC mode. |
[TPENOENT] | Client mode: the number of OUT channels is insufficient to establish a connection. |
[TPENOREADY] | Client mode: the remote node behaves abnormally when establishing a connection. |
[TPEOS] | An OS error such as a memory allocation failure. |
[TPESYSTEM] | An internal error. |
[TPESVCERR] | put_msg_info returned 0 or a negative number. |
1.14. Utility
1.14.1. CFL
Option to disable shared memory priority check feature
When performing CFL, if the shared memory key set in the SHMKEY entry of the DOMAIN section of the configuration file is already in use, the owner UID of that segment is compared with the current user's UID, and if they differ, the following error occurs.
(E) CFL0096 shared memory : different owner 210 [COM3402]: File exists
If you do not want to use this check, add the -I option when running CFL as follows:
$ cfl -i sample.m -I
With the -I option, CFL checks neither whether the value set in SHMKEY is in use nor whether the owner UID matches.
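The ownership check that CFL performs can be illustrated with plain System V IPC calls. The following is a hypothetical sketch, not the actual CFL implementation: it inspects a shared memory segment and compares its owner UID with the current effective UID, which is conceptually what CFL does before the -I option skips the check.

```c
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

/* Returns 1 if the shared memory segment is owned by the calling user,
 * 0 if it is owned by someone else, -1 if the segment cannot be inspected.
 * This mirrors the kind of UID comparison CFL performs on SHMKEY; the real
 * CFL logic is internal to Tmax and may differ. */
int shm_owned_by_me(int shmid)
{
    struct shmid_ds ds;

    if (shmctl(shmid, IPC_STAT, &ds) == -1)
        return -1;                       /* cannot stat the segment */

    return (ds.shm_perm.uid == geteuid()) ? 1 : 0;
}
```

A segment created by the current user passes the check; a segment created by another account would return 0, which corresponds to the CFL0096 error above.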
Option to check available FD values
During the CFL execution phase, the maximum number of FDs available on the current system (the value shown by ulimit -n) is checked in advance and reported to the user, and the maximum number of FDs that each CLH may open is pre-calculated and verified. If the number of FDs the Tmax configuration requires exceeds the number of FDs available on the system, the following error occurs.
(E) CFL9990 Current Tmax configuration contains more servers or nodes than current system can support[CFL5056]
To use this feature, add the -r option when running CFL as follows:
$ cfl -i sample.m -r
With the -r option, the available FDs are pre-checked at the CFL stage, and an error occurs if the required number exceeds them.
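The -r pre-check can be approximated with getrlimit(): compare the per-process descriptor limit (what ulimit -n reports) against the number of descriptors the configuration would require. A hypothetical sketch of the idea, not CFL's actual code:

```c
#include <sys/resource.h>

/* Returns 1 if the current RLIMIT_NOFILE soft limit can cover `required`
 * descriptors, 0 if not, -1 on error. CFL's -r option performs an
 * analogous per-CLH check at configuration compile time. */
int fds_available(long required)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
        return -1;

    return ((rlim_t)required <= rl.rlim_cur) ? 1 : 0;
}
```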
1.14.2. Supplementing and adding options to tmboot
In the current version, the existing tmboot -w option has been supplemented to start server processes one by one more efficiently.
Conditions
-
Conditions for using LOCK when a server process connects to TMM
(LOCK | NOLOCK)
-
WAIT condition when starting the server process
(NO-WAIT | FINITE-WAIT)
-d option
-
-d val < 0 : LOCK, |VAL| FINITE-WAIT
-
-d val = 0 : NO-LOCK, NO-WAIT
-
-d val > 0 : NO-LOCK, |VAL| FINITE-WAIT
-
If the val of the -d option is nonzero, its absolute value (|VAL|) is used; the unit is usec (microseconds).
-
When FINITE-WAIT, |VAL| is the maximum WAIT time for each process (not the total WAIT time for all processes).
-
WAIT is released when the server process sends a signal. Even if the |VAL| time has not elapsed, if a signal is received, an attempt is made to start the next process.
-
If val is negative, use LOCK.
-
If this option is used, the -w option is ignored.
-
-w option
-
Has the same effect as -d -1000000 (1 sec).
-
This option is effective only when the -d option is not used; if the -d option is used, the -w option is ignored.
-D option
-
Almost the same as the -d option, except that with FINITE-WAIT it waits unconditionally for the full |VAL| even if a signal arrives.
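The FINITE-WAIT behavior above is essentially "wait up to |VAL| microseconds, but return early on notification." A self-contained way to model that pattern (hypothetical, not tmboot's code) is select() on a notification pipe, where a byte on the pipe plays the role of the server process's ready signal; the -D variant would simply ignore the early notification and always wait the full time.

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Waits up to `usec` microseconds for a notification byte on `fd`.
 * Returns 1 if notified early (analogous to the server's ready signal
 * releasing the WAIT), 0 on timeout (the full FINITE-WAIT elapsed),
 * -1 on error. */
int finite_wait(int fd, long usec)
{
    fd_set rfds;
    struct timeval tv;
    char c;
    int rc;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = usec / 1000000;
    tv.tv_usec = usec % 1000000;

    rc = select(fd + 1, &rfds, 0, 0, &tv);
    if (rc < 0)
        return -1;
    if (rc == 0)
        return 0;                /* timed out: waited the full |VAL| */

    read(fd, &c, 1);             /* consume the notification */
    return 1;
}
```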
1.14.3. Changed the configuration file reference path for the tmboot –S option
Before update
When starting Tmax with the tmboot command, the binary configuration file $TMAXDIR/config/tmconfig is copied to $TMAXDIR/path/tmconfig, and then the configuration file in the $TMAXDIR/path directory is used.
However, if you want to start only a specific server using tmboot -S or -s while the Tmax engine is already running, the existing version refers to $TMAXDIR/config/tmconfig to start the server.
The following example illustrates the problem:
-
Configuration file
<Existing operating configuration file>
*SVRGROUP
svg1      NODENAME = "tmaxh4"
*SERVER
svr1      SVGNAME = svg1
svr2      SVGNAME = svg1
*SERVICE
TOUPPER1  SVRNAME = svr1
TOUPPER2  SVRNAME = svr2
<Configuration file changed during operation>
*SVRGROUP
svg1      NODENAME = "tmaxh4"
*SERVER
svr1      SVGNAME = svg1
svr3      SVGNAME = svg1
svr2      SVGNAME = svg1
*SERVICE
TOUPPER1  SVRNAME = svr1
TOUPPER3  SVRNAME = svr3
TOUPPER2  SVRNAME = svr2
-
Recompile
Recompile the configuration file changed during operation with CFL.
$ cfl -i node1.m
-
Start up the added server
Start the newly added server as follows:
$ tmboot -S svr3
When tmboot -S is run after changing the configuration file in the running environment, the following error occurs.
(E) BOOT3007 maxsvr (1) is over for svr(svr3:svr2): nodeno = 0, svri = 5, cur = 1, ksvr = 1 [BOOT0015]
When CFL is performed in the operating environment, the changes are applied to $TMAXDIR/config/tmconfig, but the shared memory is still configured from the pre-change $TMAXDIR/path/tmconfig. Starting a newly added server process with tmboot -S therefore actually attempts to start another instance of a server process that is already running. CFL is not allowed while the Tmax engine is running, but when this mistake is made, the resulting error is difficult to debug.
After update
When starting an individual server with options such as tmboot -S, -s, -g, -q, -t, and -A while the Tmax engine is running, the tmconfig path referenced has been changed from $TMAXDIR/config/tmconfig to $TMAXDIR/path/tmconfig. (The existing method is maintained when starting the engine itself.)
Caution
If you use the tmboot -f option to specify a specific binary configuration file, the server starts by referencing $TMAXDIR/config/tmconfig as before. This exception exists because the new behavior causes problems when adding servers dynamically.
When adding a server dynamically, be sure to specify the changed binary configuration file in $TMAXDIR/config/ with the -f option.
1.14.4. Added umask option to racd
Processes started through racd previously ran with a umask of 0, so files could be created with unwanted permissions. The added option solves this problem by letting users set the umask themselves.
How to set up
Set the desired umask by specifying the '-P umask' option to racd.
$ racd -P umask
The '-P umask' option sets the umask for processes started via racd, so that they create files with the desired permissions.
Example
<Configuration file>
*DOMAIN
tmax1       SHMKEY = @SHMEMKY@, MINCLH = 1, MAXCLH = 3, TPORTNO = @TPORTNO@,
            BLOCKTIME = 30, RACPORT = 3255
*NODE
@HOSTNAME@  TMAXDIR = "@TMAXDIR@",
            APPDIR = "@TMAXDIR@/appbin",
            PATHDIR = "@TMAXDIR@/path",
@RMTNAME@   TMAXDIR = "@RMTDIR@",
            APPDIR = "@RMTDIR@/appbin",
            PATHDIR = "@RMTDIR@/path",
*SVRGROUP
svg1        NODENAME = "@HOSTNAME@", COUSIN = "svg2"
svg2        NODENAME = "@RMTNAME@"
*SERVER
svr2        SVGNAME = svg1, CLOPT = "-o $(SVR).out -e $(SVR).err"
*SERVICE
TOUPPER     SVRNAME = svr2
Start racd on the RMT node.
$ export TMAX_RAC_PORT=3255
$ racd -k -P 055
Start the entire Tmax (tmboot) on the HOST node and check the file permissions of svr2.out in ULOGDIR of the RMT node.
1.14.5. Library version information query function
tmaxlibver
If you’re using a UNIX library, you can’t determine the Tmax version based on that library alone. With 5.0 SP1, you can check the Tmax library version using the tmaxlibver utility.
-
How to use
$ tmaxlibver [-l filename] [-d | -s] [-6] [-L directory] [-o arg] [-h]
Item | Description |
---|---|
-l filename | Specifies the name of the library to query. |
-d, -s | Sets whether the library is dynamic (-d) or static (-s). |
-6 | Used if the library is 64-bit. If not set, it operates as 32-bit. If -6 is set and the -L option is not set, the library located in the $TMAXDIR/lib64 directory is automatically referenced. |
-L directory | The absolute or relative path to the library to query. If not specified, the default path varies depending on the -6 option. |
-h | Displays command help. |
-
Support Library
Version information query functions for libcli, libclithr, libsvr, and libsvrucs are available.
-
Example
$ tmaxlibver -l libsvr.a -s -6
libsvr.a for TMAX Version 5.0 SP #1 64bit binary for AIX 5L
1.15. Client/Server
1.15.1. tpgetsvcname
This function gets the service name from a service index. In the Loss Service, which is called when a response message is discarded, the service index is delivered in cltid.clientdata[3] of TPSVCINFO and can be passed to this function as the parameter.
The tpgetsvcname() function can only be used on Tmax servers, not on Tmax clients. Furthermore, the returned buffer is an internal static buffer, so it is recommended to copy its contents to a separate buffer before use rather than modifying it directly.
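Because the returned pointer aliases one internal static buffer, a later call overwrites the earlier result. The usual defensive pattern is shown below with a stand-in function (get_name is hypothetical; it only imitates tpgetsvcname()'s static-buffer contract):

```c
#include <stdio.h>
#include <string.h>

/* Stand-in with the same contract as tpgetsvcname(): the returned
 * pointer aliases one internal static buffer, so each call clobbers
 * the previous result. */
static char *get_name(int idx)
{
    static char buf[64];
    snprintf(buf, sizeof buf, "SVC%02d", idx);
    return buf;
}

/* Copy the result into caller-owned storage before the next call. */
void copy_name(int idx, char *out, size_t outsz)
{
    char *p = get_name(idx);
    strncpy(out, p, outsz - 1);
    out[outsz - 1] = '\0';
}
```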
-
Prototype
#include <tmaxapi.h>
char *tpgetsvcname(int svc_index)
-
Parameters
Parameter | Description |
---|---|
svc_index | The index value of the service. |
-
Return value
Return value | Description |
---|---|
Buffer address | If successful, the address of the buffer where the service name is stored is returned. |
NULL | If failed, NULL is returned. (tperrno is set to a value corresponding to the error situation.) |
-
Example
#include <stdio.h>
#include <usrinc/atmi.h>
#include <usrinc/tmaxapi.h>

SVC01(TPSVCINFO *msg)
{
    int i;

    printf("\nSVC01 service is started!\n");
    printf("INPUT : data=%s\n", msg->data);

    for (i = 0; i < msg->len; i++) {
        msg->data[i] = toupper(msg->data[i]);
        printf("OUTPUT: data=%s\n", msg->data);
    }

    sleep(10);
    tpreturn(TPSUCCESS, 0, (char *)msg->data, 0, 0);
}

LOSS_SVC(TPSVCINFO *msg)
{
    long svcindex;
    char *svcname;

    printf("\nLOSS_SVC service is started!\n");
    printf("INPUT : data = %s\n", msg->data);
    printf("TPERROR : %d\n", msg->cltid.clientdata[1]);
    printf("TPURCODE : %d\n", msg->cltid.clientdata[2]);

    svcindex = msg->cltid.clientdata[3];
    printf("SVC INDEX Of Discarded Message : %ld\n", svcindex);

    svcname = tpgetsvcname((int)svcindex);
    if (NULL == svcname) {
        printf("tpgetsvcname is failed!!\n");
    } else {
        printf("SVC Name Of Discarded Message : %s\n", svcname);
    }

    tpreturn(TPSUCCESS, 0, (char *)msg->data, 0, 0);
}
1.15.2. tptsleep
This function waits for a server process termination event from TMM. If the tpprechk() callback function needs to wait, it should call tptsleep() periodically so that a normal tmdown is possible. The timeout is applied in the same way as in the select() system call.
-
Prototype
#include <usrinc/tmaxapi.h>
int tptsleep(struct timeval *timeout)
-
Example
int tpprechk(void)
{
    struct timeval timeout;
    int ret;

    timeout.tv_sec = 5;
    timeout.tv_usec = 0;

    while (1) {
        ret = tptsleep(&timeout);
        ...
    }

    return 0;
}
1.15.3. tpmcallx
This function extends the existing tpmcall() function. Unlike tpmcall(), it waits until all services in the COUSIN server group have responded. In addition, the r_list field, which records the result of each response, has been added to the svglist structure (giving the svglistx structure), and several new flags are supported.
-
Prototype
#include <usrinc/tmaxapi.h>
struct svglistx *tpmcallx(char *svc, char *data, long len, long flags)
-
Parameters
The available values for flags are:
Flag | Description |
---|---|
TPNOREPLY | Only transmission is performed. Setting this flag makes it behave like the existing tpmcall. |
TPBLOCK | Verifies that the transmission was successfully delivered to CLH. |
TPNOTIME | Ignores the block time value and waits for a response indefinitely. |
-
Return value
Return value | Description |
---|---|
Server group list | If successful, the list of server groups for which the service call succeeded is returned in the svglistx structure. |
NULL | If failed, NULL is returned. (tperrno is set to a value corresponding to the error situation.) |
The svglistx structure is as follows:
struct svglistx {
    int ns_entry;   /* number of entries of s_list */
    int nf_entry;   /* number of entries of f_list */
    int nr_entry;   /* number of entries of r_list */
    int *s_list;    /* list of server group numbers */
    int *f_list;    /* list of tperrno of each server group */
    int *r_list;    /* list of tpurcode of each server group */
};
Variable | Description |
---|---|
ns_entry | The number of server groups for which tpmcall() succeeded. |
nf_entry | The number of server groups for which tpmcall() failed. |
nr_entry | The number of server groups for which r_list entries are set. |
*s_list | An array of the serial numbers of the server groups for which tpmcall() succeeded. |
*f_list | An array of the serial numbers of the server groups for which tpmcall() failed. |
*r_list | An array of the serial numbers of the server groups for which tpurcode is set. |
-
Error
See tpmcall().
-
Example
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <usrinc/atmi.h>
#include <usrinc/tmaxapi.h>

main(int argc, char *argv[])
{
    char *sndbuf, *rcvbuf;
    long rcvlen, sndlen;
    struct svglistx svglist;
    struct svglistx *psvglist;
    int ret, i;

    psvglist = &svglist;

    if (argc != 2) {
        printf("Usage: toupper string\n");
        exit(1);
    }

    if ((ret = tmaxreadenv("tmax.env", "TMAX")) == -1) {
        printf("tmax read env failed\n");
        exit(1);
    }

    if (tpstart((TPSTART_T *)NULL) == -1) {
        printf("tpstart failed[%s]\n", tpstrerror(tperrno));
        exit(1);
    }

    if ((sndbuf = (char *)tpalloc("STRING", NULL, 0)) == NULL) {
        printf("sendbuf alloc failed !\n");
        tpend();
        exit(1);
    }

    if ((rcvbuf = (char *)tpalloc("STRING", NULL, 0)) == NULL) {
        printf("recvbuf alloc failed !\n");
        tpfree((char *)sndbuf);
        tpend();
        exit(1);
    }

    strcpy(sndbuf, argv[1]);

    psvglist = tpmcallx("TOUPPER", sndbuf, 0, TPBLOCK);
    if (psvglist == NULL) {
        printf("tpmcall is failed[%s]\n", tpstrerror(tperrno));
        tpfree((char *)sndbuf);
        tpfree((char *)rcvbuf);
        tpend();
        exit(1);
    }

    printf("send data: %s\n", sndbuf);
    printf("ns_entry = %d\n", psvglist->ns_entry);
    printf("nf_entry = %d\n", psvglist->nf_entry);
    printf("nr_entry = %d\n", psvglist->nr_entry);

    for (i = 0; i < psvglist->ns_entry; i++)
        printf("psvglist->s_list[%d] = %d\n", i, psvglist->s_list[i]);
    for (i = 0; i < psvglist->nf_entry; i++)
        printf("psvglist->f_list[%d] = %d\n", i, psvglist->f_list[i]);
    for (i = 0; i < psvglist->nr_entry; i++)
        printf("psvglist->r_list[%d] = %d\n", i, psvglist->r_list[i]);

    tpfree((char *)sndbuf);
    tpfree((char *)rcvbuf);
    tpend();
}
1.15.4. Added -B option to server CLOPT entry
The following features were added in Tmax 4.0 SP#3 Fix#2.
-
The delay in processing caused by the CLH queue timeout not being applied when requests are scheduled simultaneously to a single server process in a multi-CLH environment has been improved.
-
Batch jobs require an exception: if the -B option is applied to the CLOPT entry in the SERVER section, the queue timeout is exceptionally ignored and the job is performed when it is scheduled to the server process.
Configuration
Apply the -B option to the CLOPT entry in the SERVER section as follows:
*SERVER SVRNAME [CLOPT = "-B"]
The -B option causes requests scheduled to a single server process to be processed without applying the CLH queue timeout.
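The decision the -B option controls can be sketched as a simple predicate: a queued request is normally discarded once it has waited past the CLH queue timeout, but a server flagged with -B skips that check. A hypothetical model, not the engine's actual code:

```c
/* Returns 1 if a queued request should still be processed, 0 if it
 * should be discarded for exceeding the queue timeout. `bypass_timeout`
 * models the -B CLOPT flag: batch servers process the request anyway. */
int should_process(long enqueued_at, long now, long qtimeout, int bypass_timeout)
{
    if (bypass_timeout)
        return 1;                     /* -B: ignore CLH queue timeout */
    return (now - enqueued_at) <= qtimeout;
}
```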
1.15.5. Added –X option to server CLOPT entry
When a query is performed on a transaction that has already been rolled back in Oracle, an ORA-24761 error occurs at first, but if the user ignores it and performs the next query, it is processed as a local transaction, which may cause consistency problems. This is a problem in the user code, but to guard against it, the -X option can be used to restart the server process whenever xa_end() does not return XA_OK, which resolves the consistency issue.
Configuration
Apply the -X option to the CLOPT entry in the SERVER section as follows:
*SERVER SVRNAME [CLOPT = "-X"]
For XA servers, if xa_end() fails within tpreturn(), a reset of the XA channel is performed by default. If the -X option is applied and xa_end() fails, a Fatal error message (service code: CSC5608) is printed and the server process is terminated.
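The -X behavior can be expressed as a small decision function around the XA return code: by default a failed xa_end() resets the XA channel, while with -X the server process terminates (and is restarted), preventing later work from silently running as a local transaction. This is an illustration under stated assumptions, not Tmax source; XA_OK is 0 per the X/Open XA specification.

```c
#define XA_OK 0          /* success return code from xa_*() (X/Open XA) */

enum xa_end_action { PROCEED, RESET_CHANNEL, RESTART_PROCESS };

/* Models the server's reaction to xa_end()'s return code inside
 * tpreturn(). `opt_x` models the -X CLOPT flag. */
enum xa_end_action on_xa_end(int rc, int opt_x)
{
    if (rc == XA_OK)
        return PROCEED;
    return opt_x ? RESTART_PROCESS : RESET_CHANNEL;
}
```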
1.15.6. Added option to close standard output file with date changes
Fixed an issue where the standard output file produced by the server's user log was not closed when the date changed while the server was running, so executing a delete command did not actually remove it. With the -c option, the file is closed when a date change is detected, and deletion now works properly.
Configuration
Apply the -c option as follows:
*SVRGROUP
svg1          NODENAME = "tmaxi4"
*SERVER
DEFAULT:      MIN = 10, MAX = 30, ASQCOUNT = 1, MAXQCOUNT = 1000,
              MAXRSTART = 10000, LIFESPAN = IDLE_1800,
              CLOPT = "-o $(SVR)_$(CDATE).out -e $(SVR)_$(CDATE).out -c"
svr2userlog1  SVGNAME = svg1
1.15.7. Added unlimited (-1) functionality to -q option of RDP server.
When using tpflush(), an option [-q -1] has been added to unconditionally move data from the TPSENDTOCLI queue to the WRITE queue even if there is untransmitted data in the WRITE queue.
realmt SVGNAME = svg1, SVRTYPE = REALSVR_MT, CPC = 13, CLOPT="-o /home/tmax/realmt.log -q -1"
This option applies only to RDPMT. Because it removes any limit on memory growth, it is not recommended and should be used for testing purposes only.
1.15.8. Logging client error log files
Set the log file path in an environment variable or in the client configuration file (tmax.env). The client's PID (Process ID) is automatically appended to the file name (filename.pid).
Configuration
<When set in .profile>
TMAX_DEBUG=directory/filename
<When set in tmax.env>
TMAX_DEBUG=directory/filename
Example
<tmax.env>
TMAX_DEBUG=/data1/tmaxqa/tmax/work/client/cli00
If the Tmax client fails to connect to the main node when executing tpstart, a file named /data1/tmaxqa/tmax/work/client/cli00.18176 is created and the following message is stored in the file.
(E) CLI3003 unable to connect to main server : 192.168.1.43 [CLI0106][Connection refused] (E) CLI3003 unable to connect to main server : 192.168.1.48 [CLI0106][Connection refused]
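The pid-suffixed file name shown above can be reproduced with getpid() and snprintf(). A sketch of the naming rule only (the logging itself is inside the client library, and debug_log_path is a hypothetical helper):

```c
#include <stdio.h>
#include <unistd.h>

/* Builds "<base>.<pid>", the naming rule the client library applies
 * to the TMAX_DEBUG path, e.g. ".../cli00" -> ".../cli00.18176".
 * Returns the number of characters written, or a negative value on error. */
int debug_log_path(char *out, size_t outsz, const char *base)
{
    return snprintf(out, outsz, "%s.%ld", base, (long)getpid());
}
```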
1.15.9. NULL-terminating character protection in STRING buffers
When a STRING type buffer allocated with the tpalloc/tprealloc API is filled by the user up to the last byte (leaving no space for the terminating NULL) and a service is requested, previous versions raised TPEINVAL (client) and TPESVCERR (server) errors. In the current version, an additional 1-byte buffer is allocated in this situation and the NULL character is automatically set in that space.
Configuration
<When set in .profile>
export TMAX_STRING_NULL=Y
Caution
-
This feature is supported only for buffers allocated with tpalloc/tprealloc.
-
The server and client behave identically.
-
The TMAX_STRING_NULL environment variable described above must be set to Y.
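The protection amounts to growing the buffer by one byte and writing the terminator when the user data fills the allocation exactly. A plain-C sketch of the idea (the real fix lives inside the tpalloc/tprealloc handling, not in user code):

```c
#include <stdlib.h>
#include <string.h>

/* Given a STRING buffer whose `len` bytes are completely filled with
 * user data (no room for '\0'), returns a buffer one byte larger with
 * the terminator set -- the behavior TMAX_STRING_NULL=Y enables.
 * Returns NULL if reallocation fails. */
char *ensure_null_terminated(char *buf, size_t len)
{
    char *p = realloc(buf, len + 1);
    if (p == NULL)
        return NULL;
    p[len] = '\0';
    return p;
}
```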
1.16. Settings
1.16.1. Added DUMMY server group and server settings
When setting COUSIN on a domain gateway, the DUMMY server group and servers are used. This has been improved from the previous method of temporarily using the server group name "__dummy". (Servers in the existing "__dummy" server group will not start when tmboot is executed.)
How to set up
*SVRGROUP
    DUMMY = N | Y    (Default: N)
*SERVER
    DUMMY = N | Y    (Default: N)
Caution
The DUMMY setting in the SVRGROUP section is not inherited by the SERVER section, so it must be specified separately in the SERVER section.
Example
*DOMAIN
tmax1    SHMKEY = 89255, MINCLH = 1, MAXCLH = 3, TPORTNO = 9255,
         BLOCKTIME = 10000, MAXCPC = 100, MAXGW = 10, RACPORT = 3155
*NODE
tmaxh4   TMAXDIR = "@TMAXDIR@",
         APPDIR = "@TMAXDIR@/appbin",
*SVRGROUP
### tms for Oracle ###
dgw_svg  NODENAME = "tmaxh4", COUSIN = "gw1, gw2", LOAD = -2, DUMMY = Y
*SERVER
dummy    SVGNAME = dgw_svg, DUMMY = Y
*SERVICE
TOUPPER  SVRNAME = dummy
*GATEWAY
gw1      GWTYPE = TMAXNONTX, PORTNO = 7500, RGWADDR = "192.168.1.43",
         RGWPORTNO = 6500, NODENAME = tmaxh4, CPC = 1, LOAD = -2
gw2      GWTYPE = TMAXNONTX, PORTNO = 7510, RGWADDR = "192.168.1.48",
         RGWPORTNO = 6510, NODENAME = tmaxh4, CPC = 1, LOAD = -2
1.17. Management Tools
1.17.1. Added function to query status information of COUSIN service (st -s -X)
Before update
When using the st -s command of tmadmin, the COUSIN service was always displayed as RDY regardless of its actual status.
After update
For the COUSIN service, you can separately check the local status in the form of Status 1 (Status 2).
For example, it is displayed as follows:
RDY(NRDY)
Caution
When using the st -s command, you must use the -X option.
1.18. WebT
1.18.1. Modified behavior of WebtField.get() to match TMAX’s fbget()
Data retrieved through get() in WebtField and WebtFieldSet did not appear when fbprint() was performed on the corresponding field buffer in Tmax. Since get() data is not reused, its absence is not necessarily a malfunction, but the behavior was inconsistent with Tmax's fbget(). This has been corrected: when fields are printed with fbprint() in the Tmax service, data retrieved by get() is printed with an ( r ) prefix.
1.18.2. Added AutoClose feature in Webt.properties style
When jeus.servlet.webt.autoClose.enable=true is set in webt.properties, connections that remain unclosed after the service has ended are now automatically closed.
1.18.3. Changes to the internal structure of WebT
With the addition of the Rolling Down feature, the internal structure was changed to use select() when reading. Since most users do not use Rolling Down, it was modified to use select() only while Rolling Down is in progress and to read immediately in normal use.
How to set up
<Set when running WebT applications>
-DUSE_ROLLING_DOWN=true
<Set within the WebT client source>
System.setProperty("USE_ROLLING_DOWN", "true");
1.18.4. xid output at non-debugging level during XA processing in WebT
Even when the log level is set to info in webt.properties or JEUSMain.xml, a feature has been added to record xa_start, xa_prepare, xa_commit, and xa_rollback in the WebT log when processing a transaction by adding the tmax.webt.xid.log=true option. (This was previously recorded only at the debug level.)
2. Changed features
2.1. Engine
2.1.1. Changed ASQCOUNT behavior when shutting down server with tmdown -S
Before update
When tmdown -S terminates all processes of a specific server while the server is running, it waits until the server processes finish their current work before terminating them. If ASQCOUNT is set and a client request arrives during this wait, additional server processes are started automatically by ASQCOUNT, and they remain even after the tmdown -S command completes successfully. With the previous behavior, the result of tmdown -S was therefore not what the user intended.
After update
In the current version, if you terminate all processes of a specific server with tmdown -S, no additional server processes will be started by ASQCOUNT even if there are client service requests to the server.
Caution
While the server is waiting to shut down with tmdown -S, client service requests are queued in the server queue; the user must adjust the timeout settings appropriately so that these queued requests can be cleared automatically.
2.1.2. Improved CLH log messages
The CLH2052 and CLH2053 error messages have been improved so that the service name is printed after them, as in the following example:
CLH2052 msg discarded due to closed client(clientId) connection : svc = svcname
CLH2053 msg discarded due to closed server connection : svc = svcname
2.1.3. Improvements related to domain socket file permissions
In the HP-UX environment, the domain socket file was created with permission 0777 (srwxrwxrwx) regardless of the IPCPERM setting of the NODE section; this has been modified so that IPCPERM is honored as on other operating systems.
Configuration
*DOMAIN
tmax1    SHMKEY = 89255, MINCLH = 1, MAXCLH = 3, TPORTNO = 9255,
         BLOCKTIME = 5, MAXSVC = 10, IPCPERM = 0777
Caution
This setting is affected by both UMASK and IPCPERM.
2.1.4. CLH Dead Lock Detection Feature Enhancement
-
Calling one's own service with tpcall/tpacall fails, but tpacall(TPNOREPLY) succeeds.
-
When handling errors, TPESYSTEM / TPEPROTO was unified as TPEPROTO.
(Previously, TPEPROTO occurred when there was only one server process, and TPESYSTEM occurred when there was more than one.)
2.1.5. Improved call function for discarded response message service
-
In Tmax 4.0 SP3 FIX#6, the Loss Service was added. It is called when messages discarded due to the termination of the server or client response queue pile up.
-
If the entire engine is down, the Loss Service is not called.
2.2. Utility
2.2.1. Updated SVG’s COUSIN setting with -a option when running CFL
Previously, when adding a node for additional COUSIN settings, the original configuration file had to be modified and recompiled with CFL before running cfgadd in tmadmin.
In 5.0, the feature was improved so that cfgadd cannot be executed without the -a option, so COUSIN settings could no longer be added with the existing approach. The feature was therefore modified to update COUSIN settings only in the SVRGROUP section.
Functional constraints
-
The additional SVRGROUP section sets only COUSIN and NODENAME.
-
Updates can only be done by setting COUSIN.
-
A COUSIN that can be updated must contain a pre-defined COUSIN. (The order cannot be changed.)
2.3. TDL (Tmax Dynamic Library)
2.3.1. Performance improvements to tdlcall
Improved the performance degradation that occurred when CPU usage rose due to many collisions in the TDL shared memory hash table.
-
Performance was improved by introducing a local caching function as an indexing technique internally within the module.
-
Minimized shared memory hash table lookups by including TDL shared memory slot locations inside the module local cache.
-
Improved indexing performance when the module local cache in tdlcall is VERSION3.
Caution
As the number of modules increases, it is necessary to identify the modules with many collisions and compare the growth in search time and CPU usage.
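The caching idea described above can be sketched as follows: each loaded module remembers the shared memory slot it resolved to, so repeat lookups skip the hash table entirely. A hypothetical illustration with a fake hash probe (TDL's real structures are internal to Tmax):

```c
#include <string.h>

#define CACHE_SIZE 8

/* A module-local cache entry remembers which shared memory slot a module
 * name resolved to, so the next tdlcall-style lookup can go straight to
 * the slot instead of probing the shared memory hash table again. */
struct cache_entry {
    char name[64];       /* empty name = unused entry */
    int  shm_slot;
};

static struct cache_entry cache[CACHE_SIZE];
static int hash_lookups;  /* counts slow-path probes, for illustration */

/* Stand-in for the shared memory hash table probe (the slow path). */
static int shm_hash_lookup(const char *name)
{
    hash_lookups++;
    return (int)(strlen(name) % CACHE_SIZE);  /* fake slot number */
}

int lookup_module(const char *name)
{
    unsigned i = strlen(name) % CACHE_SIZE;   /* local cache index */
    int slot;

    if (cache[i].name[0] != '\0' && strcmp(cache[i].name, name) == 0)
        return cache[i].shm_slot;             /* fast path: no hash probe */

    slot = shm_hash_lookup(name);
    strncpy(cache[i].name, name, sizeof cache[i].name - 1);
    cache[i].shm_slot = slot;
    return slot;
}
```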
2.3.2. Fixed high memory usage in TDL processes when MONITOR=Y set
When the monitoring option was enabled in the existing TDL, memory usage on the server or client increased in proportion to MAXMODULE. This has been corrected so that, even with monitoring enabled, no more memory is used than when it is disabled.
2.4. WebT
2.4.1. Improved workload in logger.log with log level set to none
Reduced the overhead incurred by collecting the information required for WebT logs even when WebT's log level is set to none.
2.4.2. WebTAsync log-related feature improvements
Depending on the options below, the selector thread logs the request body (excluding the Tmax header) in hex up to the specified length, regardless of the log level.
Options
-
webtasync.appmsg.isdump=<true | false>
Activates the request dump feature.
-
webtasync.appmsg.dump.length=<Length>
Sets the length of the request data to output. If the length value is negative or the option is not specified, the full data is output.
2.5. Tuxedo Gateway
The limit on the number of concurrent requests from Tuxedo to Tmax has been changed. When calling a service from Tuxedo to Tmax, the number of concurrent requests that can be outstanding without a response has been increased from 500 to 1024 per channel; the actual maximum is therefore (MAXCLH * CPC * 1024).
3. Bug patch
3.1. Client/Server
3.1.1. tmadmin API
Fixed the following error in tmadmin:
-
st -p query error
Fixed a SIGSEGV error that occurred when querying st -p (TMADM_SPR_STAT) with the tmadmin API when CPC = 2 or more is set in the SERVER or GATEWAY section.
-
st -s query error
Fixed the phenomenon of NRDY services being displayed as RDY when querying st -s (TMADM_SVC_STAT).
3.1.2. tpstart API
Fixed an error that caused memory shortage by internally allocating memory when executing tpstart() and not freeing the memory when executing tpend().
3.1.3. tmax_is_restart API
Fixed an error that returned TRUE when calling the tmax_is_restart API on a server started by ASQCOUNT or the tmadmin API.
3.1.4. tpqsvcstat API
Fixed the issue where the tpqsvcstat API would return a negative value or the server would terminate abnormally when using the tpdeq API after using the tpqsvcstat API.
3.1.5. Multi-Threaded Client Library
The following error, which frequently occurs when using the Tmax API in a multi-thread/multi-context environment, has been fixed.
(E) CLH0200 magic number error from client(70.12.204.147): 0 0 0 0 [CLH0516]
(E) CLI0209 internal error : unknown message type :1002 [CSC5713]
(E) CLI2008 tpcall reply arrived after timeout. Msg discarded : 1003 1 [CSC5708]
3.1.6. Expanded SysMaster GID trace support
Supports SysMaster GID for querying running services, not only for existing tpcall but also for services called with tpacall.
3.1.7. Missing tpmcallx API flags
Fixed the issue where TPNOREPLY flags were not applied in tpmcallx.
3.1.8. Static library build issue
Fixed the issue where an error occurred during build when building with static-library after adding the tpsvrinit() function without the tpprechk() function to the server source.
3.1.9. Memory leak when FDL file loading error occurs
Fixed a memory leak that occurred when FREAD() failed to load an FDL file when using functions that require matching FDLKEY and FDLNAME, such as FBPRINT, FBGET_FLDNAME, and FBGET_FLDKEY APIs.
3.1.10. Memory handling error on UCS server
Fixed a memory handling error that occurred when calling tprelay() within usermain of a UCS with SMSUPPORT = Y.
3.1.11. Memory leak in tmadmin API
Fixed a memory leak issue that occurred when checking service status using the tmadmin API.
3.1.12. USERLOG API buffer overflow
Fixed the issue where the process would terminate abnormally when the internal buffer size exceeded 8100 bytes when using the UserLog API.
3.2. Engine
3.2.1. Improved Rolling Down feature
-
Modifications
-
Fixed an error where CLH would not terminate when the client failed or terminated abnormally.
-
Fixed an error where servers of type CUSTOM_GATEWAY would not terminate when performing Rolling Down (tmdown -R).
-
Fixed an error where the tperrno received by the client would change from TPENOREADY to TPESYSTEM if TMAX_BACKUP_ADDR and TMAX_BACKUP_PORT were not set properly on the client.
-
Fixed an error where the response order would change when the number of server processes was 2 or more and requests were accumulated in the server queue.
-
Restrictions
-
Only tpacall/tpcall of general clients and multi-thread clients are supported.
-
Must use the client of the corresponding version (4.0 SP3 Fix#8).
-
Normal downtime may be difficult under heavy load conditions.
-
Not supported with Transactions (XA), tpacall (TPBLOCK), interactive communication, legacy clients, compression, and encryption features.
-
Only NON-XA domain gateways are supported. (XA domain gateways and custom gateways are not supported.)
3.2.2. CLHQTIMEOUT error message
Fixed the issue where nodes that did not have CLHQTIMEOUT set would see the related error message.
An error message occurs in the following cases:
-
local domain
Occurs when a service exists and CLHQTIMEOUT is not set.
-
remote domain
Occurs when B service exists as COUSIN on each node in multi-node and CLHQTIMEOUT is set.
-
CLH Server Queue
In a situation where the A service calls the B service under load, if the B service responds late and its responses are queued in the CLH server queue, and the B service then goes down, the following message is output in the local domain.
CLH.11160.190613:(E) CLH2093 server queue is purged due to CLHQTIMEOUT:SVRNAME [itpasyncgw] CLID[0x1604] [CLH0802]
3.2.3. Error related to dynamic addition of server groups
Fixed an error where, when dynamically adding an XA server group configured with COUSIN via cfgadd, the actual commit might not be performed even if tx_commit() succeeds when calling the service.
3.2.4. Error related to dynamic addition of COUSIN service
Fixed an error where the dynamically added COUSIN service was not recognized when restarting a node unrelated to COUSIN after dynamically adding the COUSIN service through mksvr.
It occurs under the following conditions:
-
Consists of NodeA, NodeB, and NodeC
-
svgA of NodeA and svgB of NodeB are configured as COUSIN
-
Configure svgC of NodeC exclusively
-
Services belonging to svgA and svgB are added dynamically with mksvr
-
Services belonging to svgC are added dynamically with mksvr
-
The client connects to NodeC and calls svgC’s service.
-
The service in svgA or svgB was called from svgC
-
NodeC restarted
-
A TPENOENT error occurred when performing steps 6 and 7 above
3.2.5. Node failure detection delay error
In a multi-node environment (3 or more nodes), when a failure occurred on a specific node's hardware, failure detection was delayed on some nodes, which delayed backup server startup and blocked related service calls. This has been corrected.
If NCLHCHKTIME is set, node failure message broadcasts behave abnormally, delaying detection of some node failures (up to 10 minutes).
For example, it occurs in the following environment:
-
NODE section
The setup order is node1, node2, node3, node4 (node3 is a backup for node4).
-
DOMAIN section
Setting NLIVEINQ and NCLHCHKTIME
3.2.6. Enhanced pending transaction processing
Fixed the issue where a transaction remained in the pending state for a long time (TXTIME * 3) when the server process terminated abnormally while the client was in the transaction logging request state after tx_begin.
3.2.7. Error using transaction XID
Fixed an error in an environment where a client calls service 1 and service 2: when tx_begin and tx_commit are called in the tpsvrinit() function of the server to which service 1 belongs, and service 1 calls service 2 after tx_begin, service 2 incorrectly used the transaction XID that had been used in service 1’s tpsvrinit().
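The defect amounts to stale transaction context surviving initialization. The toy Python model below illustrates this (the names tx_begin/tx_commit/XID mirror the TX-interface calls, but nothing here is the actual Tmax implementation): if the context is not cleared after the commit in initialization, a later service call wrongly inherits the old XID.

```python
import itertools

_xid_counter = itertools.count(1)

class TxContext:
    """Toy model of a server process's transaction context."""

    def __init__(self):
        self.xid = None

    def tx_begin(self):
        self.xid = next(_xid_counter)  # a fresh XID per transaction
        return self.xid

    def tx_commit(self, clear=True):
        committed = self.xid
        if clear:           # the fix: clear the context on commit
            self.xid = None
        return committed

def service_call(ctx):
    """A service joins the caller's transaction, or starts its own."""
    return ctx.xid if ctx.xid is not None else ctx.tx_begin()

# Buggy behavior: the XID from initialization lingers after tx_commit,
# so a later service call reuses it.
ctx = TxContext()
init_xid = ctx.tx_begin()
ctx.tx_commit(clear=False)
assert service_call(ctx) == init_xid

# Fixed behavior: the context is cleared, so the service gets a new XID.
ctx = TxContext()
init_xid = ctx.tx_begin()
ctx.tx_commit()
assert service_call(ctx) != init_xid
```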
3.2.8. CLH shared memory error
Fixed an error where part of the CLH shared memory would be corrupted when using tpreturn in usermain() of a UCS type server.
When this phenomenon occurs, tmadmin cannot properly retrieve service or server process status information for a specific CLH, and a server restarted after tpreturn(TPEXIT) may encounter TPENOREADY.
3.2.9. Channel connection errors between TMMs in a multi-node environment
Fixed the phenomenon where TM_NCLH_START_NOTIFY was not processed properly in certain cases when exchanged between TMMs in a multi-node environment.
Additionally, NPING_REQUESTED and NPING_REQUESTED2 were added to the node status judgment criteria of TMM.
3.2.10. Service NRDY with ASQCOUNT setting
Fixed the issue of services stuck in NRDY state when the entire server is restarted with the ASQCOUNT setting applied.
-
Conditions
-
CLH is under heavy load.
-
ASQCOUNT is set in the SERVER section and SVCTIME is set in the SERVICE section.
-
Symptoms
-
SVCTIME increases, typically due to a database failure.
-
Requests back up in the server queue due to continuous client requests.
-
SVCTIMEOUT occurs almost simultaneously on all servers.
-
All servers shut down almost simultaneously (the service status becomes NRDY for a moment).
-
As messages accumulate in the server queue, additional servers are started by ASQCOUNT.
-
All servers up to the MAX value are restarted instantaneously, and the service status does not change from NRDY to RDY.
3.2.11. CORE error when forcibly terminating CLH, CLL, and TMM
Fixed the issue where a CORE would be generated on termination, or garbage characters would be output, when CLH, CLL, or TMM was forcibly terminated with the KILL signal.
3.2.12. Compatibility issue with old header of CLH
Fixed the issue where abnormal request messages were transmitted when multiple requests (tpacall) were sent simultaneously from an older 3.x client to version 5.0.
3.2.13. Abnormal CLH error
In a multi-node environment, fixed an error that caused CLH to terminate abnormally when, after configuring COUSIN, only the server group was set and no servers were defined.
3.2.14. Interactive type POD server operation error
Fixed the issue where calling an interactive type POD server would not work properly and a TPENOREADY error would occur.
3.2.15. Server queue disappearing when server shuts down
Fixed the phenomenon where the queue count was deleted (cq_count became 0) when the server was terminated or restarted while requests were still accumulated in the server queue.
3.2.16. Dynamic service load balancing error
Fixed an error where load balancing did not occur because a server for a dynamic service created by mksvr was recognized as NOT READY at startup in a COUSIN configuration.
3.2.17. Extending SLOG’s logging capabilities
When restarting the engine and server, the value of the MAXRSTART entry in the configuration file is also output.
-
Added MAXRSTART to the existing TMM3004 error code.
(I) TMM3004 CLH (clh) is restarted the 1th time (MAXRSTART = 2) [TMM0149]
-
Generates additional logs when MAXRSTART is reached
(I) TMM3034 CLH MAXRSTART reached: clh [TMM0166]
3.2.18. IRT issue with server group assignment request function
Fixed an error where IRT was applied when using the server-group designation request functions (tpcallsvg, tpacallsvg, tpsvgcall, tpmcall, tpmcallx) in a multi-node environment, causing calls to be routed to a different server group.
3.2.19. tpacall with TPBLOCK flags processing function
When making a Tmax service call (tpacall TPBLOCK), if the Tmax service does not exist or is not running, a failure response is now returned normally. Previously, the call was blocked without returning a failure response.
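The corrected behavior, failing fast instead of blocking when the service is unknown or down, can be sketched in Python. The registry and the TPENOENT/TPENOREADY labels are illustrative placeholders, not the actual WebT/Tmax API.

```python
TPENOENT, TPENOREADY = "TPENOENT", "TPENOREADY"

# Illustrative service registry: name -> whether a server is running.
services = {"SVC_A": True, "SVC_B": False}

def tpacall_tpblock(svc):
    """Fail-fast dispatch: return an error code instead of blocking."""
    if svc not in services:
        return (False, TPENOENT)    # service does not exist
    if not services[svc]:
        return (False, TPENOREADY)  # service exists but is not running
    return (True, None)             # request accepted; reply comes later

assert tpacall_tpblock("SVC_A") == (True, None)
assert tpacall_tpblock("SVC_B") == (False, TPENOREADY)
assert tpacall_tpblock("SVC_X") == (False, TPENOENT)
```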
3.2.20. Abnormal behavior when dynamically adding servers to COUSIN server group
Fixed an error that caused a failure response when performing a service on a client with a server that was dynamically added to the COUSIN server group.
3.3. Utility
3.3.1. tmboot error
Fixed an error that occurred when using options in the order of -g -t when performing tmboot.
3.3.2. Missing SVR binary when running tmboot -w
When executing tmboot -w, the server was always started at 1-second intervals in the following situations. This issue has been fixed.
-
If the server binary does not exist
-
If an unimplemented service is defined in the configuration file
3.3.3. tmboot -D operating abnormally on certain platforms
-
Fixed an error where tmboot -D would take about twice as long on Unix servers with many other processes running.
-
Fixed an error where the time interval set for tmboot -D was ignored and the next server was started immediately if a server returned -1 from tpprechk().
-
Fixed an error where the next server was started immediately, ignoring the time interval set for tmboot -D or -d, if the server binary was missing.
3.3.4. tmdown stop error
Fixed an error where the Tmax engine would hang rather than terminate when tmdown was performed while server processes were frequently calling TPEXIT due to service timeouts under load.
3.3.5. tmboot MAXUSER check issue
Fixed an error that caused the boot process to terminate if the check of the license MAXUSER against the MAXUSER in the current environment settings failed during tmboot execution.
3.3.6. CFL’s error checking features
Fixed the issue where no error message was displayed when the last entry of a section ended with a comma (,), as in the following example.
*SERVER svr2 SVGNAME = svg1,
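A check of the kind CFL now performs can be sketched as a small validator. The parsing below is deliberately minimal and is not the actual CFL grammar; it only flags a dangling comma that no continuation line follows.

```python
def check_trailing_comma(lines):
    """Flag configuration entries whose last value ends with a dangling comma."""
    errors = []
    for i, line in enumerate(lines):
        stripped = line.strip()
        if not stripped.endswith(","):
            continue
        nxt = lines[i + 1].strip() if i + 1 < len(lines) else ""
        # A trailing comma is only an error when no continuation follows:
        # end of file, a blank line, or the start of a new section (*...).
        if not nxt or nxt.startswith("*"):
            errors.append((i + 1, stripped))
    return errors

config = [
    "*SERVER",
    "svr1 SVGNAME = svg1",
    "svr2 SVGNAME = svg1,",   # dangling comma: previously passed silently
]
errors = check_trailing_comma(config)
for lineno, text in errors:
    print(f"(E) trailing comma at line {lineno}: {text}")
```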
3.3.7. Incorrect error message during CFL execution
Fixed the issue where an incorrect error message would be output when a node had more gateways declared than the number of MAXGWSVRs defined for that node in the configuration file when performing a CFL.
Message before update : (E) CFL3008 server group [svgname] is defined as duplicate COUSIN [CFL0310]
Message after update : (E) CFL3110 more GW(5) than MAXGWSVR(5) are defined at node(nodename) [CFL0941]
3.4. Domain Gateway
3.4.1. Recovery processing error during gateway operation
-
When using the Tmax Transaction Domain Gateway, after configuring DOM2 (3.x) by setting the -h option in the CLOPT of DOM1 (5.0 SP1), and requesting a transaction from DOM2 to DOM1, DOM1 did not respond to the commit after the prepare. This has been fixed.
-
Fixed an error where recovery was not performed for pending transactions when an older version of Tmax (3.x) was configured with a domain gateway.
3.4.2. Inconsistency when processing transactions with older versions (3.x) of Tmax
Fixed an error where only local domain transactions were committed when processing transactions using a domain and transaction gateway consisting of an older version (3.x) of Tmax in a multi-node environment.
An inconsistency error may occur under the following conditions:
-
Local domain: NODE1 (gw1), NODE2 (gw2)
-
Remote domain: NODE1-1 (gw1-1, gw2-1)
-
gw1 and gw2 are composed of COUSIN
-
gw1 is brought down
-
Connect to a local domain and initiate a global transaction
-
Only the transactions in the local domain are committed (an inconsistency error occurs)
3.4.3. Domain gateway failback error
Fixed an error that prevented failback when using the independent channel function (CLOPT = -i) on the domain gateway when configuring the domain gateway as COUSIN and linking with two nodes.
3.4.4. Abnormal behavior when using CLOPT = -c -i options together
Fixed the issue where the -i option did not work properly when the -c and -i options were set together in the GATEWAY section CLOPT entry.
3.5. Management Tools
3.5.1. Sort by CLH index when querying servers in tmadmin
When checking server status with tmadmin’s st -v, the results are now always output in CLH index order.
3.5.2. Channel status error in domain gateway
Fixed the issue where, when domain gateway 1 terminated abnormally while domain gateways 1 and 2 were connected to each other, the status was still displayed as RDY when queried with tmadmin’s ntxgwi / txgwi.
3.5.3. suspend / resume command functions
-
If some of the server processes are RUNNING
Fixed the issue where, if tmadmin was forcibly terminated with CTRL+C while blocked after a suspend, all subsequent suspend functions became unusable once resume was executed.
-
If all server processes are RUNNING
Fixed the issue where the server would change to NRDY state when tmadmin was forcibly terminated with CTRL+C in a blocked state after suspend.
3.7. WebT
3.7.1. Applying blocktime for TPBLOCK flag
-
Fixed the issue of waiting until the response is complete without the set blocktime being reflected when blocktime is specified with setTPtimeout and tpcall or tpacall is called with the TPBLOCK flag.
-
Fixed an error where tpcall and tpacall would not process a response if the thread was interrupted while waiting for the response.
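The first fix amounts to honoring a configured block time while waiting for a reply instead of waiting indefinitely. The Python analogue below uses queue.Queue only as an illustration; setTPtimeout and TPBLOCK are the WebT names, and the queue-based wait is not WebT's actual mechanism.

```python
import queue
import threading
import time

reply_queue = queue.Queue()

def wait_for_reply(blocktime):
    """Wait at most `blocktime` seconds for a reply, as TPBLOCK should."""
    try:
        return reply_queue.get(timeout=blocktime)
    except queue.Empty:
        return None  # timed out: surface a timeout error to the caller

# A slow server replies after 0.5s; a blocktime of 0.1s must time out.
threading.Thread(target=lambda: (time.sleep(0.5),
                                 reply_queue.put("reply")),
                 daemon=True).start()

assert wait_for_reply(blocktime=0.1) is None   # the set blocktime is honored
assert wait_for_reply(blocktime=2.0) == "reply"  # reply arrives in time
```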
3.7.2. Selector error when calling service in bulk
Fixed the issue where the ‘java.io.IOException: Unable to establish loopback connection’ error would occur after making approximately 10,000 repeated calls to tpcall or tpacall.
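The "Unable to establish loopback connection" failure is characteristic of exhausting OS resources by opening a new selector for every call. The sketch below shows the resource-reuse pattern with Python's standard selectors module; it illustrates the class of fix, not WebT's internals.

```python
import selectors
import socket

class Caller:
    """Reuses one selector (and its wakeup resources) across calls."""

    def __init__(self):
        self.selector = selectors.DefaultSelector()  # opened once

    def call(self, sock):
        key = self.selector.register(sock, selectors.EVENT_READ)
        try:
            self.selector.select(timeout=0)  # poll for a reply (illustrative)
        finally:
            self.selector.unregister(sock)   # deregister, but keep the selector
        return key.fd

    def close(self):
        self.selector.close()

caller = Caller()
a, b = socket.socketpair()
try:
    # Many repeated calls no longer open (and leak) a selector each time.
    for _ in range(10_000):
        caller.call(a)
finally:
    a.close(); b.close(); caller.close()
```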
3.7.3. handleError() when unsolicited message processing session ends
Fixed an error where handleError() could not be called when unsolicited messages were not received from the WebT client or an error occurred.
3.7.4. Error in responding to WebTAsync request
Fixed an error where, when tpcall, prepare, and commit were performed in WebTAsync, JEUSGWA discarded the message without responding and WebTAsync kept waiting until it timed out.
3.7.5. Threads not increasing up to maximum setting of response worker thread pool
When jtmax1.worker.thread.min and jtmax1.worker.thread.max were set to different values, as follows, the thread pool did not grow up to the maximum; it has been modified to grow up to the value set in jtmax1.worker.thread.max.
[webtasync] jtmax1.worker.thread.min = 5 , jtmax1.worker.thread.max = 20
3.7.6. Recovery error when reconnecting to network
Fixed an error that caused recovery not to be performed when reconnecting to the network if a pending transaction occurred due to a network shutdown during transaction processing.
3.7.7. NullPointerException
Because JEUSGWA sends a tpacall message along with the registration message, a NullPointerException occurred when the tpacall message was processed before the registration message. This has been fixed.
3.7.8. NullPointerException when sending a message after terminating only one channel
If setRegister(true) is set with a CPC of 2 or higher and one channel is terminated by the -A (alive check) option, the domain_id entry managed by JTmax is deleted; when a service is subsequently requested through the original channel, a NullPointerException occurs.
JTmax was fixed so that, after the termination, the connection whose information matches the internally managed domain_id is deleted.
3.7.9. Error processing unsolicited messages in WebTConnectionPool
Fixed an error where, when an event handler was registered in a WebT client to receive unsolicited messages sent from the server with tpbroadcast or tpsendtocli, the user-defined callback interface function continued to be called on a connection that had already been returned to the connection pool. (The function is no longer called on a closed connection.)
3.8. Java Gateway
3.8.1. Failure response could not be received from tpacall with flag TPBLOCK
Fixed a bug so that WebT would properly receive a failure response when calling a service that is down or missing via JavaGW.
3.8.2. JEUSGWA avg time not being displayed in tmadmin
Fixed an error where the avg time of the JEUSGWA service was not displayed when performing tmadmin’s st -p.
3.8.3. Abnormal behavior of handling xids from other domains during recovery in multi-domain environment
When WebTAsync was restarted to process a pending transaction that occurred in dom1, the pending XID of the transaction being processed in dom1 was sent to dom2 during the recovery process, and dom2 processed it. Because the XID processed in this way does not exist in dom1, a rollback decision was made, causing a data-integrity inconsistency. This error has been fixed.
You need to add the following settings:
-
JTmaxServer
JTmaxServer server = new JTmaxServer(name, port, maxConnection, 2, this);
server.setRegister(true);
-
JEUSGWA
CLOPT = "-t"
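The fix effectively requires recovery to act only on XIDs that belong to the local domain. The Python sketch below illustrates that filtering step; the XID layout with an embedded domain field is an assumption for illustration, since real XA XIDs encode ownership in their transaction and branch qualifiers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Xid:
    domain: str   # illustrative: assume the owning domain is recorded
    gtrid: int

def recover(pending, local_domain):
    """Roll back only pending XIDs owned by the local domain.

    Foreign XIDs are left untouched; deciding their outcome here is
    exactly what caused the inconsistency described above.
    """
    to_rollback = [x for x in pending if x.domain == local_domain]
    to_skip = [x for x in pending if x.domain != local_domain]
    return to_rollback, to_skip

pending = [Xid("dom1", 1), Xid("dom2", 2), Xid("dom1", 3)]
rollback, skipped = recover(pending, local_domain="dom2")
assert rollback == [Xid("dom2", 2)]
assert skipped == [Xid("dom1", 1), Xid("dom1", 3)]
```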
3.8.4. Large data transfer error when communicating with older versions
When setting up JEUSGW and setting '-h 1' in the CLOPT entry to communicate with older versions, the following error occurred when transmitting large amounts of data. This has been fixed.
GATEWAY.4068028.132758:(I) GATEWAY2062 remote gateway closed: 111.60.1.62 [JGW0205]
GATEWAY.4068028.132758:(E) GATEWAY0050 write error: rgw closed [JGW0018]
3.8.5. No response received when executing tpacall TPBLOCK TPNOREPLY
Fixed an error where WebT clients would not receive responses when calling a large number of services in the following environment.
-
tpacall(TPNOREPLY | TPBLOCK) is issued from WebT to JEUSGW
-
The Tmax service sleeps for 60 seconds, and CLHQTIMEOUT is set
3.8.6. TPESYSTEM error with both -n and -A set for JavaGW
When Tmax makes a request to WebTAsync (JTmax), if JavaGWA has both the -n option and the -A option set, an alive-check message is sent and a request is received from CLH before its response arrives, resulting in a TPESYSTEM error. This has been fixed.
3.8.7. Status not being displayed when calling ajgwinfo in tmadmin
Fixed an error where calling ajgwinfo in tmadmin to view the status of JEUS_ASYNC type gateways displayed nothing, while jgwinfo displayed both JEUS_ASYNC and JEUS type gateways.
3.9. Tuxedo Gateway
3.9.1. tpacall removal issue
When a Tmax service is called from Tuxedo, tpacall flags such as TPNOREPLY are now passed on to the Tmax service.
3.9.2. Modified tpacall from Tuxedo to Tmax
Fixed the issue where Tuxedo did not respond when calling the Tmax service with tpacall.
3.9.3. Failover not working
Fixed the issue where, after connecting from Tuxedo to Tmax and then calling a Tuxedo service from Tmax, a connection to the Tuxedo2 or Tuxedo3 backup could not be established in the following environment.
RGWADDR = Tuxedo1
BACKUP_RGWADDR = Tuxedo2
BACKUP_RGWADDR2 = Tuxedo3
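Failover over RGWADDR and its backups amounts to trying each address in order until one connects. The Python sketch below illustrates that loop; the address list and hostnames are placeholders, not the gateway's actual configuration API.

```python
import socket

def connect_with_failover(addresses, timeout=1.0):
    """Try primary and backup addresses in order; return the first that connects."""
    last_error = None
    for host, port in addresses:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_error = exc   # this address is down; try the next backup
    raise last_error

# RGWADDR first, then BACKUP_RGWADDR, then BACKUP_RGWADDR2 (placeholders).
addresses = [("tuxedo1", 7001), ("tuxedo2", 7001), ("tuxedo3", 7001)]
```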
3.10. TCP/IP gateway
3.10.1. uhead not being applied when a timeout occurs during remote reception
Fixed the issue on the TCP/IP gateway where, when a timeout occurred during remote reception, the original uhead of the message was transmitted to CLH even though the uhead had been modified in get_service_name().