SESSION

1. WHAT IS SESSION AND BATCHES? 
2. WHAT ARE VARIOUS SESSION TRACING LEVELS? 
3. CAN WE COPY A SESSION TO NEW FOLDER OR NEW REPOSITORY? 
4. IS IT POSSIBLE TO STORE ALL THE INFORMATICA SESSION LOG INFORMATION IN A DATABASE TABLE? NORMALLY THE SESSION LOG IS
STORED AS A BINARY COMPRESSION .BIN FILE IN SESSLOGS DIRECTORY. CAN WE STORE THE SAME INFORMATION IN DATABASE TABLES
FOR FUTURE ANALYSIS? 
5. CAN WE CALL A SHELL SCRIPT FROM SESSION PROPERTIES? 
6. CAN WE CHANGE THE SOURCE AND TARGET TABLE NAMES IN SESSION LEVEL? 
7. HOW TO WRITE FLAT FILE COLUMN NAMES IN TARGET? 
8. WHAT ARE THE ERROR TABLES PRESENT IN INFORMATICA? 
9. WHAT ARE THE ALTERNATE WAYS TO STOP A SESSION WITHOUT USING “STOP ON ERRORS” OPTION SET TO 1 IN SESSION
PROPERTIES? 
10. SUPPOSE A SESSION FAILS AFTER LOADING OF 10,000 RECORDS IN THE TARGET. HOW CAN WE LOAD THE RECORDS FROM 10,001
WHEN WE RUN THE SESSION NEXT TIME? 
11. DEFINE THE TYPES OF COMMIT INTERVALS APART FROM USER DEFINED? 
12. SUPPOSE SESSION IS CONFIGURED WITH COMMIT INTERVAL OF 10,000 ROWS AND SOURCE HAS 50,000 ROWS EXPLAIN THE
COMMIT POINTS FOR SOURCE BASED COMMIT & TARGET BASED COMMIT. ASSUME APPROPRIATE VALUE WHEREVER REQUIRED? 
13. HOW TO CAPTURE PERFORMANCE STATISTICS OF INDIVIDUAL TRANSFORMATION IN THE MAPPING AND EXPLAIN SOME
IMPORTANT STATISTICS THAT CAN BE CAPTURED? 
14. HOW CAN WE PARAMETERIZE SUCCESS OR FAILURE EMAIL LIST? 
15. IS IT POSSIBLE THAT A SESSION FAILED BUT STILL THE WORKFLOW STATUS IS SHOWING SUCCESS? 
16. WHAT IS BUSY PERCENTAGE? 
17. CAN WE WRITE A PL/SQL BLOCK IN PRE AND POST SESSION OR IN TARGET QUERY OVERRIDE? 
18. WHENEVER A SESSION RUNS DOES THE DATA GETS OVERWRITTEN IN A FLAT FILE TARGET? IS IT POSSIBLE TO KEEP THE EXISTING
DATA AND ADD THE NEW DATA TO THE TARGET FILE? 
19. CAN WE USE THE SAME SESSION TO LOAD A TARGET TABLE IN DIFFERENT DATABASES HAVING SAME TARGET DEFINITION? 
20. HOW DO YOU REMOVE THE CACHE FILES AFTER THE TRANSFORMATION? 
21. WHY DOESN’T A RUNNING SESSION QUIT WHEN ORACLE OR SYBASE RETURN FATAL ERRORS? 

 

184. List some properties of sessions.

A session is created in the Workflow Manager by configuring a session task. A workflow can contain multiple sessions, and a session task can be either reusable or non-reusable.

Some of the properties of the session are as follows:

  • As per the requirement, session tasks can be run either concurrently or sequentially.
  • A session can be configured to analyze the performance.
  • Creating or running a session task requires general information such as the session name, schedule, and Integration Service.
  • Other important session properties include the session log file, test load, error handling, commit interval, target properties, etc.

185. How many sessions can be grouped in one batch?

Any number of sessions can be grouped in one batch; however, for an easier migration process, it is better to keep the number of sessions in a batch small.

 

186.How To Recover Sessions In Concurrent Batches?

 

If multiple sessions in a concurrent batch fail, you might want to truncate all targets and run the batch again. However, if a session in a concurrent batch fails and the rest of the sessions complete successfully, you can recover the session as a standalone session.

To recover a session in a concurrent batch:

  1. Copy the failed session using Operations-Copy Session.
  2. Drag the copied session outside the batch to be a standalone session.
  3. Follow the steps to recover a standalone session.
  4. Delete the standalone copy.

187.How Can You Complete Unrecoverable Sessions?

Under certain circumstances, when a session does not complete, you need to truncate the target tables and run the session from the beginning. Run the session from the beginning when the Informatica Server cannot run recovery or when running recovery might result in inconsistent data.

If recovery mode is not enabled on the session and the workflow fails in the middle of execution, then:

  1. Do not truncate the target table immediately.
  2. If the load involves a large volume of data and more than 25% of it has already been loaded: when the workflow has multiple sessions, identify the particular session that failed and fire a delete command that removes only the data loaded by that session (as sketched below); then copy the session into a new workflow and run only that session and its dependents.
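A hedged example of such a targeted delete, assuming the target table carries a batch/run identifier column (table and column names hypothetical):

DELETE FROM sales_fact WHERE batch_run_id = 20240115;
COMMIT;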

188. Can We Run A Group Of Sessions Without Using Workflow Manager?

It is possible to run only two sessions (via pre-session and post-session commands) using pmcmd without using the Workflow Manager; not more than two.

189. What Is Meant By Direct And Indirect Loading Options In Sessions?

  1. Direct loading is used when the source file itself contains the data, i.e., a single file; indirect loading is used when the source file is a file list naming multiple data files, which the Integration Service reads one after another.
  2. In direct loading we can perform the recovery process, but in indirect loading we cannot.
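A hedged example of an indirect source: the session's source file is a list file whose lines are the paths of the actual data files (paths hypothetical):

/data/incoming/sales_east.dat
/data/incoming/sales_west.dat
/data/incoming/sales_north.dat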

 

190. What is the benefit of Session Partitioning ?

When the Integration Service runs a session, the session can be partitioned for better performance; these partitions are then used to perform extraction, transformation, and loading in parallel.

 

191. What is Session and Batches?

SESSION – A session is a set of instructions that tells the Informatica Server / Integration Service how and when to move data from sources to targets. After creating the session, we can use either the Server Manager or the command-line program pmcmd to start or stop the session.
BATCHES – A batch provides a way to group sessions for either serial or parallel execution by the Informatica Server. There are two types of batches:

  • SEQUENTIAL – Runs sessions one after the other.
  • CONCURRENT – Runs sessions at the same time.
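
For example, the workflow containing a session can be started or stopped from the command line with pmcmd (the service, domain, folder, and workflow names below are hypothetical):

pmcmd startworkflow -sv Int_Svc_Dev -d Domain_Dev -u Administrator -p <password> -f MyFolder wf_load_sales
pmcmd stopworkflow -sv Int_Svc_Dev -d Domain_Dev -u Administrator -p <password> -f MyFolder wf_load_sales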

192. What are various session tracing levels?

  • Normal – Default. Logs initialization and status information, errors encountered, and rows skipped due to transformation errors; summarizes session results, but not at the row level.
  • Terse – Logs initialization information, error messages, and notification of rejected data.
  • Verbose Initialization – In addition to Normal tracing, logs additional initialization information, the names of index and data files used, and detailed transformation statistics.
  • Verbose Data – In addition to Verbose Initialization tracing, records row-level logs.
193. Can we copy a session to new folder or new repository?
Yes, we can copy a session to a new folder or repository, provided the corresponding mapping is already present in that folder or repository.
194. Is it possible to store all the Informatica session log information in a database table? Normally the session log is stored as a binary compression .bin file in the SessLogs directory. Can we store the same information in database tables for future analysis?
It is not possible to store all the session log information in a table. Along with error-related information, we may get some other session-related information from metadata repository views such as REP_SESS_LOG.
To capture error data, we can configure the session as below:
Go to Session -> Config Object -> Error Handling section and give the settings:
Error Log Type: Relational Database.
Error Log DB Connection: the database connection where we want the error tables to be created.
Error Log Table Name Prefix: a prefix for the error tables. By default, Informatica creates 4 different error tables; if we provide a prefix here, the error tables are created with that prefix in the database.
Log Row Data: logs the transformation row data at the point where the error happened.
Log Source Row Data: captures the source row data for the error record.
Data Column Delimiter: error data is stored in a single column of the database table; we can specify the delimiter for the source data here.
List of Error tables created by Informatica:
PMERR_DATA. Stores data and metadata about a transformation row error and its corresponding source row.
PMERR_MSG. Stores metadata about an error and the error message.
PMERR_SESS. Stores metadata about the session.
PMERR_TRANS. Stores metadata about the source and transformation ports, such as name and data type, when a transformation error occurs.
The above tables are specifically used to store the information about exception (error) records – e.g. records in the reject file.
We can use this as a base for an error handling strategy. But this does not contain all the information present in the session log – like performance details (thread busy percentage), details of the transformations invoked in the session, etc. We can also check the contents of the REP_SESS_LOG view under the Informatica repository schema; however, that too does not contain all the information.
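Once a session has run with relational error logging enabled, the error tables can be queried directly for analysis. A minimal sketch, assuming a hypothetical Error Log Table Name Prefix of ETL_:

SELECT * FROM ETL_PMERR_SESS;
SELECT * FROM ETL_PMERR_MSG;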
195. Can we call a shell script from session properties?

The Integration Service can execute shell commands at the beginning or at the end of the session. The Workflow Manager provides the following types of shell commands for each Session task:

  • Pre-session command
  • Post-session success command
  • Post-session failure command

Use any valid UNIX command or shell script for UNIX nodes, or any valid DOS command or batch file for Windows nodes. Configure the session to run the pre- or post-session shell commands, as in the example below.
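
For instance, a post-session success command could archive the generated target file (the paths and file names below are hypothetical):

cp /data/out/sales.dat /data/archive/sales_$(date +%Y%m%d).dat

Whatever command is used must be valid on the operating system of the node where the session runs.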

196. Can we change the Source and Target table names in Session level?

Yes, we can change the source and target table names at the session level. Go to the session and navigate to the Mapping tab. Select the source or target to be changed: for a target, enter the new name in the "Target Table Name" attribute; for a source, use the "Source Table Name" attribute.
Another suitable approach is to parameterize the source and target table names; the same mapping can then be run concurrently using different parameter files, provided concurrent run mode is enabled at the workflow level.
197. How to write flat file column names in target?
There are two options available in the session properties to take care of this requirement. Go to the Mapping tab, select the flat file target, and set the Header Options property to either 'Output Field Names' or 'Use header command output file'.
Option 1 creates the output file with a header record whose column names are the same as the target transformation port names.
Option 2 lets us supply our own command to generate the header record text; an 'echo' command can be used here. For example:
echo '"Employee ID"|"Department ID"'
Using the second option is recommended, as it gives more flexibility for writing the column names.
198. What are the ERROR tables present in Informatica?

  • PMERR_DATA – Stores data and metadata about a transformation row error and its corresponding source row.
  • PMERR_MSG – Stores metadata about an error and the error message.
  • PMERR_SESS – Stores metadata about the session.
  • PMERR_TRANS – Stores metadata about the source and transformation ports, such as name and data type, when a transformation error occurs.

199. What are the alternate ways to stop a session without using the "Stop on errors" option set to 1 in session properties?
We can use the ABORT() or ERROR() functions in an Expression transformation to stop the execution of a session based on some user-defined condition: ABORT() stops the session outright, while ERROR() skips the row and increments the session error count.
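A minimal sketch of such expression logic (the port name and message text are hypothetical) – the output port aborts the session when a mandatory key is missing:

IIF(ISNULL(CUST_ID), ABORT('CUST_ID is null - stopping session'), CUST_ID)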

200. Suppose a session fails after loading of 10,000 records in the target. How can we load the records from 10,001 when we run the session next time?

If we configure the session for Normal load rather than Bulk load and set the session recovery strategy to "Resume from last checkpoint", the session can be rerun from the last commit interval.
In this case, if we specify the commit interval as 10,000 and the Integration Service issued a commit after loading 10,000 records, the next run can load the records from 10,001 onwards.
If only 9,999 rows were loaded when the session failed, the Integration Service did not issue any commit (the commit interval being 10,000), so we cannot perform recovery; in that case, truncate the target table and restart the session.
201. Define the types of Commit intervals apart from user defined?
The different commit intervals are:

  • Target-based commit. The Informatica Server commits data based on the number of target rows and the key constraints on the target table. The commit point also depends on the buffer block size and the commit interval.
  • Source-based commit. The Informatica Server commits data based on the number of source rows. The commit point is the commit interval you configure in the session properties.

202.Suppose session is configured with commit interval of 10,000 rows and source has 50,000 rows explain the commit points for source based commit & target based commit. Assume appropriate value wherever required?

Target-based commit (the commit points depend on when writer buffer blocks fill; assume the buffer block boundary falls at 7,500 rows the first time and 15,000 rows the next):
Commits at roughly 15,000, 22,500, 30,000, 40,000, and 50,000 rows.
Source-based commit (rows held in the buffer do not affect the commit point):
Commits at exactly 10,000, 20,000, 30,000, 40,000, and 50,000 rows.

203.How to capture performance statistics of individual transformation in the mapping and explain some important statistics that can be captured?
Use the tracing level Verbose Data. In addition, enabling the "Collect performance data" option in the session properties makes the Integration Service gather performance counters for each transformation, viewable in the Workflow Monitor. Important statistics include the readfromcache/writetocache and readfromdisk/writetodisk counters for cached transformations (high disk counts indicate an undersized cache) and the errorrows counter (a non-zero value indicates transformation errors).
205.How can we parameterize success or failure email list?
We can parameterize the email user list and modify the values in parameter file.
Use $PMSuccessEmailUser, $PMFailureEmailUser.
Also, we can use the pmrep command to update the email task:

updateemailaddr -d <folder_name> -s <session_name> -u <success_email_address> -f <failure_email_address>
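
The email user variables can then be supplied through the parameter file. A hedged sketch of the entries (the folder, workflow, and session names are hypothetical):

[MyFolder.WF:wf_daily_load.ST:s_m_load_customers]
$PMSuccessEmailUser=etl_team@example.com
$PMFailureEmailUser=etl_oncall@example.com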
206.Is it possible that a session failed but still the workflow status is showing success?
If the workflow completes, it shows an execution status of Succeeded irrespective of whether any session within it failed; by default the workflow status has nothing to do with a session failure. Only if we set the session's general option "Fail parent if this task fails" in the Workflow Designer will the workflow status display Failed on a session failure.
207.What is Busy Percentage?
Busy percentage is the proportion of a thread's total run time during which the thread was actually occupied with work, as opposed to sitting idle.
For example, suppose we have one writer thread, which is internally responsible for writing data to the target table or file. If the mapping runs for 100 seconds but the thread spends only 20 seconds writing data to the target (the rest of the time it is idle while the data is being read and transformed), then the busy percentage of the writer thread is 20%.
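In formula form, using the figures above:

Busy Percentage = (Total Run Time - Total Idle Time) / Total Run Time * 100
                = (100 - 80) / 100 * 100
                = 20%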

208. Can we write a PL/SQL block in pre- and post-session SQL or in a target query override?
Yes, we can. Remember always to put a backslash (\) before any semicolon (;) we use in the PL/SQL block, since the Integration Service otherwise treats the semicolon as a statement separator.
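A minimal sketch of such a pre-session SQL entry, assuming a hypothetical audit table:

BEGIN UPDATE etl_audit SET load_status = 'RUNNING'\; COMMIT\; END\;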
209.Whenever a session runs does the data gets overwritten in a flat file target? Is it possible to keep the existing data and add the new data to the target file?
Normally, the data in a flat file target is overwritten with every session run. However, if we select the "Append if Exists" option (available from version 8.x onwards) in the flat file target session properties, the new data is appended to the existing data in the target file.

210.Can we use the same session to load a target table in different databases having same target definition?
Yes, we can use the same session to load the same target definition in different databases with the help of parameterization; i.e., using different parameter files with different values for the parameterized target connection object ($DBConnection_TGT) and the owner/schema Table Name Prefix ($Param_Tgt_Tablename). To run a single workflow containing the session against two different database targets, we can use concurrent workflow instances with different parameter files, as sketched below.
We can even load two instances of the same target connected in the same pipeline; at the session level, assign each instance a different relational connection object created for its database.
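A hedged sketch of two parameter files for the concurrent workflow instances (the folder, workflow, session, connection, and schema names are hypothetical):

param_dwh1.prm:
[MyFolder.WF:wf_load_target.ST:s_m_load_target]
$DBConnection_TGT=Oracle_DWH1
$Param_Tgt_Tablename=DWH1_SCHEMA

param_dwh2.prm:
[MyFolder.WF:wf_load_target.ST:s_m_load_target]
$DBConnection_TGT=Oracle_DWH2
$Param_Tgt_Tablename=DWH2_SCHEMA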
211.How do you remove the cache files after the transformation?
After the session completes, the DTM releases the cache memory and deletes the cache files. However, when a persistent cache or incremental aggregation is used, the cache files are saved for subsequent runs.
212.Why doesn’t a running session QUIT when Oracle or Sybase return fatal errors?
The session will QUIT only when its "Stop on errors" threshold is set to 1; otherwise, the session will continue to run.

213. If we have written a source override query in the Source Qualifier at the mapping level but have modified the query in the session-level SQL override, how does the Integration Service behave?
The Informatica Integration Service treats the session-level query as final during the session run: if the two queries differ, it executes the session-level query and ignores the mapping-level query.
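A hedged illustration (the table and column names are hypothetical):

Mapping-level SQ override:   SELECT emp_id, emp_name FROM employees
Session-level SQL override:  SELECT emp_id, emp_name FROM employees WHERE active_flag = 'Y'

At run time, only the session-level query (with the WHERE clause) is executed.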