INFORMATICA SCENARIO-1

 

 

57.Load all records except last N – Informatica

 
Example: My source file contains the following records: 
Name
----
A
B
C
D
E
F
G

After excluding the last 5 records, I want to load A, B into the target. How do we implement the mapping logic for this in Informatica?

Solution: Follow the below steps:
  • Connect the ports of the Source Qualifier transformation and the NEXTVAL port of a Sequence Generator transformation to a Sorter transformation.
  • In the Sorter transformation, check the Key box for the NEXTVAL port and set the direction to Descending. This reverses the order of the incoming rows.
  • Create a second Sequence Generator transformation and a Filter transformation.
  • Connect the NEXTVAL port of the second Sequence Generator transformation and the Name port of the Sorter transformation to the Filter transformation. The reversed rows are now renumbered starting from 1.
  • Specify the filter condition as NEXTVAL > 5, which drops the first 5 rows of the reversed stream, i.e. the last 5 records of the original file.
  • Save the mapping. Create a workflow and session, save the workflow and run the workflow.
You can use the same approach to remove the footer record from the source by specifying the filter condition as NEXTVAL > 1.
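The effect of the two Sequence Generators can be traced outside Informatica with a minimal Java sketch (the hard-coded rows and names are illustrative only): reversing the stream turns the last N records into the first N, which a simple number filter then drops.

import java.util.*;

public class ExcludeLastN {
    public static void main(String[] args) {
        List<String> rows = Arrays.asList("A", "B", "C", "D", "E", "F", "G");
        int n = 5; // number of trailing records to exclude

        // First Sequence Generator + descending Sorter: reverse the row order.
        List<String> reversed = new ArrayList<>(rows);
        Collections.reverse(reversed);

        // Second Sequence Generator + Filter (NEXTVAL > n): drop the first n
        // reversed rows, i.e. the last n records of the original file.
        List<String> loaded = new ArrayList<>();
        for (int nextval = 1; nextval <= reversed.size(); nextval++) {
            if (nextval > n) {
                loaded.add(reversed.get(nextval - 1));
            }
        }
        System.out.println(loaded); // [B, A]
    }
}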
 
 
 
57.How to generate sequence / incremental numbers in Informatica?
 
Solution 1: In the expression transformation, create a variable port and increment it by 1. Then assign the variable port to an output port.
 
In the expression transformation, the ports are:
V_COUNT=V_COUNT+1
O_COUNT=V_COUNT
 
Solution 2: Insert a Sequence Generator transformation and drag the NEXTVAL port from the Sequence Generator to the expression transformation. This port will give you the incremental numbers.
Set the Start Value property to 1 and the Increment By property to 1 for a series like 1, 2, 3, 4, 5, 6, 7, 8…..
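What makes Solution 1 work is that a variable port retains its value from row to row. A minimal Java sketch of that row-by-row behavior (the row data is hypothetical):

public class SequenceByExpression {
    public static void main(String[] args) {
        String[] rows = {"A", "B", "C"}; // any incoming rows
        int vCount = 0; // V_COUNT: a variable port keeps its value across rows
        for (String row : rows) {
            vCount = vCount + 1;   // V_COUNT = V_COUNT + 1
            int oCount = vCount;   // O_COUNT = V_COUNT
            System.out.println(row + " -> " + oCount);
        }
    }
}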
 
 

http://dwhlaureate.blogspot.com/2012/09/4-scenarios-where-we-would-use-stored.html

http://dwhlaureate.blogspot.com/2013/02/informatica-scenarios-based-questions.html

58.Lookup On Multiple Records In Informatica 9

Need to look up a file that has multiple matching records per key and extract only the records meeting a specific condition (in this case, only account numbers starting with “12”).

Source File

ID | User_Name
1  | James
2  | Williams
3  | Ravi

Lookup file

ID | Account_Name   | Account_number
1  | Acc_James_1    | 123232
1  | Acc_James_2    | 45778266
2  | Acc_Williams_1 | 5455546
2  | Acc_Williams_2 | 1234343
3  | Acc_Ravi_1     | 254589
3  | Acc_Ravi_2     | 12544456

Expected Output

ID | Account_Name   | Account_number | User_Name
1  | Acc_James_1    | 123232         | James
2  | Acc_Williams_2 | 1234343        | Williams
3  | Acc_Ravi_2     | 12544456       | Ravi

a) In the Lookup transformation, set the option to return multiple (all) matching records (this option is only available from Informatica 9 onwards). The lookup output, i.e. the data arriving in the Expression, will be as below:

ID | Account_Name   | Account_number | User_Name
1  | Acc_James_1    | 123232         | James
1  | Acc_James_2    | 45778266       | James
2  | Acc_Williams_1 | 5455546        | Williams
2  | Acc_Williams_2 | 1234343        | Williams
3  | Acc_Ravi_1     | 254589         | Ravi
3  | Acc_Ravi_2     | 12544456       | Ravi

b) In the Expression, flag the accounts whose number starts with “12” using the below condition:

IIF(SUBSTR(Account_number, 1, 2) = '12', 1, 0)

 

c) The next step is quite simple: use a Filter and pass only the records where the flag is 1. The output will be as below:

ID | Account_Name   | Account_number | User_Name
1  | Acc_James_1    | 123232         | James
2  | Acc_Williams_2 | 1234343        | Williams
3  | Acc_Ravi_2     | 12544456       | Ravi
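The whole flow (multi-match lookup, flag, filter) can be traced with a minimal Java sketch; the record type and collections below are illustrative stand-ins for the lookup file and source, not Informatica APIs.

import java.util.*;

public class MultiMatchLookup {
    record Account(int id, String name, String number) {}

    public static void main(String[] args) {
        Map<Integer, String> users = Map.of(1, "James", 2, "Williams", 3, "Ravi");
        List<Account> accounts = List.of(
            new Account(1, "Acc_James_1", "123232"),
            new Account(1, "Acc_James_2", "45778266"),
            new Account(2, "Acc_Williams_1", "5455546"),
            new Account(2, "Acc_Williams_2", "1234343"),
            new Account(3, "Acc_Ravi_1", "254589"),
            new Account(3, "Acc_Ravi_2", "12544456"));

        // The lookup returns ALL matching rows per ID; the filter then keeps
        // only the account numbers starting with "12" (the SUBSTR check).
        for (Account acc : accounts) {
            if (users.containsKey(acc.id()) && acc.number().startsWith("12")) {
                System.out.println(acc.id() + " | " + acc.name() + " | "
                        + acc.number() + " | " + users.get(acc.id()));
            }
        }
    }
}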

 

59.Scenario Implementation 1

60.Scenario Implementation 2

https://forgetcode.com/informatica/943-max-find-maximum-value-of-number-or-date-or-string-port


Convert multiple Rows to single row (multiple Columns) in Informatica

 

I have the sales table as a source. The sales table contains the sales information of products for each year and month. The data in the source table is shown below:

 
Source Data: Sales table

year product month amount
-------------------------
1999  A      Jan   9600
1999  A      Feb   2000
1999  A      Mar   2500
2001  B      Jan   3000
2001  B      Feb   3500
2001  B      Mar   4000

The sales information of a product for each month is available in a separate row. I want to convert the rows for all the months in a specific year to a single row. The output is shown below:

Target Data:

year product Jan_month Feb_month Mar_month
-------------------------------------------
1999   A      9600       2000      2500
2001   B      3000       3500      4000 

How do we implement the mapping logic for this in Informatica?

Solution:

Follow the below steps to implement the mapping logic for the above scenario in Informatica:

  • Create a new mapping.
  • Drag the source into the mapping.
  • Create an expression transformation.
  • Drag the ports of source qualifier into the expression transformation.
  • Create the below additional ports in the expression transformation and assign the corresponding expressions:
Jan_Month (output port) = IIF(month='Jan', amount, null)
Feb_Month (output port) = IIF(month='Feb', amount, null)
Mar_Month (output port) = IIF(month='Mar', amount, null)
  • Connect the expression transformation to an aggregator transformation. Connect only the ports year, product, Jan_Month, Feb_Month, Mar_Month of the expression to the aggregator transformation. Group by year and product in the aggregator transformation.
  • Create the below additional ports in aggregator transformation and assign the corresponding expressions:
o_Jan_Month (output port) = MAX(Jan_Month)
o_Feb_Month (output port) = MAX(Feb_Month)
o_Mar_Month (output port) = MAX(Mar_Month)
  • Now connect the ports year, product, o_Jan_Month, o_Feb_Month, o_Mar_Month of aggregator transformation to the target.
  • Save the mapping.
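The expression-plus-aggregator collapse can be traced with a minimal Java sketch (hard-coded sample rows; the MAX over per-month columns stands in for the aggregator ports):

import java.util.*;

public class PivotMonthsToColumns {
    public static void main(String[] args) {
        // year, product, month, amount — the sample sales rows
        Object[][] sales = {
            {1999, "A", "Jan", 9600}, {1999, "A", "Feb", 2000}, {1999, "A", "Mar", 2500},
            {2001, "B", "Jan", 3000}, {2001, "B", "Feb", 3500}, {2001, "B", "Mar", 4000}};

        // Expression step: each row fills only the column for its own month.
        // Aggregator step: group by (year, product) and take MAX per column,
        // which collapses the three monthly rows into one.
        Map<String, int[]> grouped = new LinkedHashMap<>(); // key -> {Jan, Feb, Mar}
        for (Object[] r : sales) {
            String key = r[0] + " " + r[1];
            int[] cols = grouped.computeIfAbsent(key, k -> new int[3]);
            int m = "Jan".equals(r[2]) ? 0 : "Feb".equals(r[2]) ? 1 : 2;
            cols[m] = Math.max(cols[m], (int) r[3]);
        }
        grouped.forEach((k, v) -> System.out.println(k + " " + Arrays.toString(v)));
        // 1999 A [9600, 2000, 2500]
        // 2001 B [3000, 3500, 4000]
    }
}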
 
 
 

61.Describe the approach for the requirement. Suppose the input is:

56

 

62.How can we implement aggregation operation without using an Aggregator Transformation in Informatica?

We will use a very basic property of the Expression transformation: while processing the current row, we can still access data carried over from the previous row through variable ports.
All we need is a simple Sorter, Expression and Filter transformation to achieve aggregation at the Informatica level.
For detailed understanding visit Aggregation without Aggregator.
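A minimal Java sketch of the idea, assuming the rows arrive sorted by the group key (the end-of-group lookahead below stands in for the flag-and-filter step):

public class SumWithoutAggregator {
    public static void main(String[] args) {
        // (deptno, sal) pairs, already ordered by deptno — the Sorter step
        int[][] rows = {{10, 1000}, {10, 2000}, {20, 1500}, {20, 2500}, {20, 3000}};

        // Expression step: variable ports carry the running sum and previous deptno.
        int prevDept = -1, runningSum = 0;
        for (int i = 0; i < rows.length; i++) {
            int dept = rows[i][0], sal = rows[i][1];
            runningSum = (dept == prevDept) ? runningSum + sal : sal;
            prevDept = dept;
            // Filter step: pass only the last row of each group, where the
            // running sum equals the group total.
            boolean lastOfGroup = (i == rows.length - 1) || rows[i + 1][0] != dept;
            if (lastOfGroup) {
                System.out.println("dept " + dept + " sum " + runningSum);
            }
        }
        // dept 10 sum 3000
        // dept 20 sum 7000
    }
}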

63.SCENARIO IMPLEMENTATION

64.SCENARIO IMPLEMENTATION

65.SCENARIO IMPLEMENTATION

66.SCENARIO IMPLEMENTATION

67.SCENARIO IMPLEMENTATION




Answer:
Use an Expression transformation to concatenate both values, as in: name = name1 || name2


https://forgetcode.com/informatica/1814-reg-match-string-contains-or-like-operation

https://informaticareference.wordpress.com/2011/12/14/reg_match/


 

68.SCENARIO IMPLEMENTATION


 

69. Scenario Implementation 1
Suppose we have a source table populating two target tables. We connect the NEXTVAL port of the Sequence Generator to the surrogate keys of both the target tables.
Will the Surrogate keys in both the target tables be same? If not how can we flow the same sequence values in both of them.


When we connect the NEXTVAL output port of the Sequence Generator directly to the surrogate key columns of the target tables, the sequence numbers will not be the same.
A block of sequence numbers is sent to one target table's surrogate key column. The second target receives a block of sequence numbers from the Sequence Generator transformation only after the first target table receives its block of sequence numbers.
Suppose we have 5 rows coming from the source; the targets will then have the sequence values TGT1 (1,2,3,4,5) and TGT2 (6,7,8,9,10), taking Start Value 0, Current Value 1 and Increment By 1. Now suppose the requirement is that we need the same surrogate keys in both targets. The easiest way to handle this is to put an Expression transformation between the Sequence Generator and the target tables. The Sequence Generator passes unique values to the Expression transformation, and the rows are then routed from the Expression transformation to both targets.

 


70. Scenario Implementation 2
Suppose we have 100 records coming from the source. Now for a target column population we used a Sequence generator. Suppose the Current Value is 0 and End Value of Sequence generator is set to 80. What will happen?

End Value is the maximum value the Sequence Generator will generate. After it reaches the End value the session fails with the following error message:

TT_11009 Sequence Generator Transformation: Overflow error.
The session failure can be avoided if the Sequence Generator is configured to Cycle through the sequence, i.e. whenever the Integration Service reaches the configured end value for the sequence, it wraps around and starts the cycle again, beginning with the configured Start Value.

71. Scenario Implementation 3
Consider we have two mappings that populate a single target table from two different source systems. Both mappings have a Sequence Generator transformation to generate the surrogate key in the target table. How can we ensure that the surrogate key generated is consistent and does not produce duplicate values when populating data from two different mappings?

We should use a Reusable Sequence Generator in both the mappings to generate the target surrogate keys.

72. SUPPOSE WE HAVE TWO SOURCE QUALIFIER TRANSFORMATIONS SQ1 AND SQ2 CONNECTED TO TARGET TABLES TGT1 AND TGT2 RESPECTIVELY. HOW DO WE CONTROL THE ORDER IN WHICH THE TARGETS ARE LOADED?

If we have multiple Source Qualifier transformations connected to multiple targets, we can designate the order in which the Integration Service loads data into the targets.
In the Mapping Designer, we need to configure the Target Load Plan based on the Source Qualifier transformations in the mapping to specify the required loading order. The Target Load Plan defines the order in which the Integration Service loads the data into the targets; this helps avoid integrity constraint violations.

73. SUPPOSE WE HAVE A SOURCE QUALIFIER TRANSFORMATION THAT POPULATES TWO TARGET TABLES. HOW DO YOU ENSURE TGT2 IS LOADED AFTER TGT1? 

In the Workflow Manager, we can configure constraint-based load ordering for a session. The Integration Service orders the target load on a row-by-row basis. For every row generated by an active source, the Integration Service loads the corresponding transformed row first to the primary key table, then to the foreign key table.
Hence if we have one Source Qualifier transformation that provides data for multiple target tables having primary and foreign key relationships, we will go for Constraint based load ordering.



74. SUPPOSE WE HAVE THE EMP TABLE AS OUR SOURCE. IN THE TARGET WE WANT TO VIEW THOSE EMPLOYEES WHOSE SALARY IS GREATER THAN OR EQUAL TO THE AVERAGE SALARY OF THEIR DEPARTMENT. DESCRIBE YOUR MAPPING APPROACH.

 


To start with, the mapping needs the following transformations:
After the Source Qualifier of the EMP table, place a Sorter transformation. Sort based on the DEPTNO port.


Next we place a Sorted Aggregator transformation. Here we will find out the AVERAGE SALARY for each DEPTNO (GROUP BY DEPTNO).
When we perform this aggregation, we lose the data for individual employees.
To maintain employee data, we pass one branch of the pipeline to the Aggregator transformation and another branch with the same sorted source data to the Joiner transformation.
When we join both branches of the pipeline, we join the aggregated data with the original data.


So next we need a Sorted Joiner transformation to join the sorted aggregated data with the original data, based on DEPTNO. Here we take the aggregated pipeline as the Master and the original dataflow as the Detail pipeline.


After that we need a Filter transformation to filter out the employees having salary less than the average salary for their department.
Filter Condition: SAL >= AVG_SAL


Finally we place the Target table instance.
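The two-branch flow can be checked with a minimal Java sketch (the sample EMP rows are illustrative): one pass computes the per-department average (the Aggregator branch), a second pass joins it back and filters (the Joiner and Filter steps).

import java.util.*;

public class AboveDeptAverage {
    public static void main(String[] args) {
        // (deptno, ename, sal) — illustrative EMP rows
        Object[][] emp = {{10, "CLARK", 2450}, {10, "KING", 5000}, {10, "MILLER", 1300},
                          {20, "SMITH", 800}, {20, "FORD", 3000}};

        // Aggregator branch: average salary per DEPTNO.
        Map<Integer, double[]> agg = new HashMap<>(); // dept -> {sum, count}
        for (Object[] e : emp) {
            double[] a = agg.computeIfAbsent((Integer) e[0], k -> new double[2]);
            a[0] += (int) e[2];
            a[1]++;
        }

        // Joiner + Filter branch: keep employees with SAL >= AVG_SAL of their dept.
        for (Object[] e : emp) {
            double[] a = agg.get((Integer) e[0]);
            double avg = a[0] / a[1];
            if ((int) e[2] >= avg) {
                System.out.println(e[1] + " (dept " + e[0] + ", sal " + e[2] + ")");
            }
        }
    }
}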

75. SCENARIO IMPLEMENTATION 1 

75. HOW CAN WE LOAD ‘X’ RECORDS (USER DEFINED RECORD NUMBERS) OUT OF ‘N’ RECORDS FROM SOURCE DYNAMICALLY, WITHOUT USING FILTER AND SEQUENCE GENERATOR TRANSFORMATIONS?

  • Take a mapping parameter, say $$CNT, to pass the number of records we want to load; change it in the parameter file before each session run.
  • Next, after the Source Qualifier, use an Expression transformation and create one output port, say CNTR, with the value CUME(1).
  • Next use an Update Strategy with the condition IIF($$CNT >= CNTR, DD_INSERT, DD_REJECT), as sketched below.
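A minimal Java sketch of the counter-and-reject logic ($$CNT and the row values are placeholders):

public class LoadFirstX {
    public static void main(String[] args) {
        String[] source = {"r1", "r2", "r3", "r4", "r5"};
        int cnt = 3; // the mapping parameter $$CNT, read from the parameter file

        int cntr = 0; // CUME(1): running count of rows seen so far
        for (String row : source) {
            cntr++;
            // Update Strategy: IIF($$CNT >= CNTR, DD_INSERT, DD_REJECT)
            System.out.println((cnt >= cntr ? "insert " : "reject ") + row);
        }
    }
}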

76. SUPPOSE WE HAVE A FLAT FILE WHICH HAS A HEADER RECORD WITH ‘FILE CREATION DATE’, AND DETAILED DATA RECORDS. DESCRIBE THE APPROACH TO LOAD THE ‘FILE CREATION DATE’ COLUMN ALONG WITH EACH AND EVERY DETAILED RECORD.


77. SCENARIO IMPLEMENTATION 2 


78. SUPPOSE WE HAVE A FLAT FILE WHICH CONTAINS JUST A NUMERIC VALUE. WE NEED TO POPULATE THIS VALUE IN ONE COLUMN OF THE TARGET TABLE FOR EVERY SOURCE RECORD. HOW CAN WE ACHIEVE THIS? 


79. HOW WILL YOU LOAD A SOURCE FLAT FILE INTO A STAGING TABLE WHEN THE FILE NAME IS NOT FIXED? THE FILE NAME IS LIKE SALES_2013_02_22.TXT, I.E. DATE IS APPENDED AT THE END OF THE FILE AS A PART OF FILE NAME. 


80. SOLVE THE BELOW SCENARIO USING INFORMATICA AND DATABASE SQL. 


81. SCENARIO IMPLEMENTATION 3 


82. Generate different flat file targets based on the Location name, like separate files Mumbai.dat, Bangalore.dat, and Delhi.dat

Source Table:

Dept name | Dept ID | Location
DWH       | 1       | Mumbai
Java      | 2       | Bangalore
Dot net   | 3       | Delhi
  1. Sort the source table by the column Location using a Sorter transformation.
  2. Create an Expression transformation with the below ports:

In_Location = Location
V_flag = IIF(In_Location = V_Location, 0, 1)
V_Location = In_Location
Output_flag (output port) = V_flag

Output_file_name (output port) = Location || '.dat'

**** This expression detects when the Location changes (e.g. from Mumbai to Bangalore).

  3. Now we need to connect the Expression transformation to a Transaction Control transformation. Informatica PowerCenter allows us to control commit and rollback of transactions based on the set of rows that pass through the Transaction Control transformation. This lets us define whether a transaction should be committed or rolled back based on the rows that pass through, for example based on the Entry Date or some other column.

Use the following syntax for the expression:

IIF (condition, value1, value2)

IIF(Output_flag = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)

**** This expression starts a new output file when the location changes (e.g. from Mumbai to Bangalore).

  4. Connect to the Target Table.

Note: We have to use the special port called “FileName” in the target file definition.
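The commit-on-group-change behavior can be sketched in Java (file handling simplified; names illustrative): whenever the sorted Location changes, the current file is closed (the TC_COMMIT_BEFORE) and a new one, named after the FileName port value, is opened.

import java.io.*;

public class SplitByLocation {
    public static void main(String[] args) throws IOException {
        // rows already sorted by Location — the Sorter step
        String[][] rows = {{"Java", "2", "Bangalore"}, {"Dot net", "3", "Delhi"},
                           {"DWH", "1", "Mumbai"}};

        PrintWriter out = null;
        String prevLocation = null;
        for (String[] r : rows) {
            String location = r[2];
            if (!location.equals(prevLocation)) { // Output_flag = 1 -> TC_COMMIT_BEFORE
                if (out != null) out.close();     // commit (close) the previous file
                out = new PrintWriter(new FileWriter(location + ".dat")); // FileName port
                prevLocation = location;
            }
            out.println(r[0] + "," + r[1] + "," + location);
        }
        if (out != null) out.close();
    }
}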

 

 

 
83. I have a flat file and want to reverse its contents, which means the first record should come as the last record and the last record as the first, and load it into the target file. (See the solution under question 104, “Reverse the Contents of Flat File”, below.)

 

 

84.Load Source File Name in Target – Informatica

How do we load the name of the currently processed flat file along with the data into the target using an Informatica mapping?
 
 
 

We will create a simple pass-through mapping to load the data and the “file name” from a flat file into the target. Assume that we have a source file “customers” and want to load this data into the target “customers_tgt”. The structures of source and target are:

Source file name: customers.dat
Customer_Id
Location

Target: Customers_TBL
Customer_Id
Location
FileName

The steps involved are:

  • Login to the powercenter mapping designer and go to the source analyzer.
  • You can create the flat file or import the flat file.
  • Once you have created the flat file, edit the source and go to the Properties tab. Check the option “Add Currently Processed Flat File Name Port”. This option is shown in the below image.
  • A new port, “CurrentlyProcessedFileName” is created in the ports tab.
  • Now go to the Target Designer or Warehouse Designer and create or import the target definition. Create a “Filename” port in the target.
  • Go to the Mapping designer tab and create new mapping.
  • Drag the source and target into the mapping. Connect the appropriate ports of source qualifier transformation to the target.
  • Now create a workflow and session. Edit the session and enter the appropriate values for source and target connections.
  • The mapping flow is shown in the below image

The loading of the filename works for both Direct and Indirect source filetypes. After running the workflow, the data and the filename will be loaded into the target. The important point to note is that the complete path of the file will be loaded into the target, i.e. the directory path plus the filename (example: /informatica/9.1/SrcFiles/Customers.dat).

If you don’t want the directory path and just want the filename to be loaded into the target, then follow the below steps:

  • Create an expression transformation and drag the ports of source qualifier transformation into it.
  • Edit the expression transformation, go to the ports tab, create an output port and assign the below expression to it.
REVERSE
(
  SUBSTR
  (
     REVERSE(CurrentlyProcessedFileName),
     1,
     INSTR(REVERSE(CurrentlyProcessedFileName), '/') - 1
  )
)
  • Now connect the appropriate ports of expression transformation to the target definition.
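A worked trace of the expression above, as a small Java sketch: reversing the path turns the last '/' into the first one, so everything before it (reversed back) is the bare file name.

public class FileNameFromPath {
    public static void main(String[] args) {
        String path = "/informatica/9.1/SrcFiles/Customers.dat";

        // REVERSE(CurrentlyProcessedFileName)
        String reversed = new StringBuilder(path).reverse().toString();
        // SUBSTR(..., 1, INSTR(..., '/') - 1): everything before the first '/'
        String fileReversed = reversed.substring(0, reversed.indexOf('/'));
        // outer REVERSE(...): restore the original character order
        String fileName = new StringBuilder(fileReversed).reverse().toString();

        System.out.println(fileName); // Customers.dat
    }
}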

 

85.How do we achieve DENSE_RANK in Informatica?
In order to achieve the DENSE_RANK functionality in Informatica we use a combination of Sorter, Expression and Filter transformations. Based on the previous example data set, let's say we want to get the top 2 highest salaries of each department as per DENSE_RANK.

  • Use a SORTER transformation: DEPTNO ASC, SAL DESC.
  • After the Sorter, place an EXPRESSION transformation (see the sketch below).
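The expression ports themselves were shown only as an image in the original post; the Java sketch below reconstructs the intended row-by-row logic under that assumption: restart the rank on a department change, advance it only when the salary value changes, and let the Filter keep ranks 1 and 2.

public class DenseRankTop2 {
    public static void main(String[] args) {
        // already sorted DEPTNO ASC, SAL DESC — the Sorter step
        int[][] rows = {{10, 5000}, {10, 3000}, {10, 3000}, {10, 1300},
                        {20, 3000}, {20, 2975}, {20, 800}};

        int prevDept = -1, prevSal = -1, rank = 0;
        for (int[] r : rows) {
            if (r[0] != prevDept) {
                rank = 1;              // new department: restart the rank
            } else if (r[1] != prevSal) {
                rank = rank + 1;       // lower salary in same dept: next dense rank
            }                          // equal salary keeps the same rank
            prevDept = r[0];
            prevSal = r[1];
            if (rank <= 2) {           // the Filter step: top 2 dense ranks only
                System.out.println("dept " + r[0] + " sal " + r[1] + " rank " + rank);
            }
        }
    }
}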

 


 

86. Suppose we have ten source flat files of same structure. How can we load all the files in target database in a single batch run using a single mapping?
Answer:
After we create a mapping to load data into the target database from the source flat file definition, we move on to the session properties of the Source Qualifier.
To load a set of source files we need to create a file, say final.txt, containing the source flat file names (ten files in our case) and set the Source filetype option to Indirect. Next, point to this flat file final.txt, fully qualified, via the Source file directory and Source filename properties.
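For illustration, a hypothetical final.txt (the directory and file names here are made up; one fully qualified file name per line, continuing through the tenth file):

/informatica/9.1/SrcFiles/customers_01.dat
/informatica/9.1/SrcFiles/customers_02.dat
/informatica/9.1/SrcFiles/customers_03.dat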


 

87. CAN WE PASS THE VALUE OF A MAPPING VARIABLE BETWEEN 2 PIPELINES UNDER THE SAME MAPPING? IF NOT HOW CAN WE ACHIEVE THIS? 


88.How to create Target Files Dynamically.

Scenario: How to generate a file name dynamically, with the system date as part of the name?

Solution:

  1. Drag your target file to the Target Designer and add a column as shown in the picture. It is not a normal column: click on the ‘add file name to the table’ property. (I have given a red mark there.)
  2. Then drag your source to the mapping area and connect it to an Expression transformation.
  3. In the Expression transformation add a new port of string data type and make it an output port.
  4. In that output port write an expression like the one described below, then map it to the FileName port of the target. Also send the other ports to the target. Finally run the session. You will find two files, one with the system date in its name and another ‘.out’ file, which you can delete.
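The expression referred to above appeared only as an image in the original post. A typical form, with the file prefix assumed, is 'SALES_' || TO_CHAR(SYSDATE, 'YYYYMMDD') || '.dat', using the standard PowerCenter SYSDATE variable and TO_CHAR function; this value is what gets mapped to the FileName port.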

 

90. Design a mapping to load a target table with the following values from the above source (employees A, B, C, D in department 10 and P, Q, R, S in department 20)?

department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, A,B,C,D,P
20, A,B,C,D,P,Q
20, A,B,C,D,P,Q,R
20, A,B,C,D,P,Q,R,S

Solution:
Step1: Use a sorter transformation and sort the data using the sort key as department_no and then pass the output to the expression transformation. In the expression transformation, the ports will be
department_no
employee_Name
V_employee_list = IIF(ISNULL(V_employee_list), employee_name, V_employee_list||','||employee_name)
O_employee_list = V_employee_list

Step2: Now connect the expression transformation to a target table.
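A minimal Java sketch of the running-list behavior (sample rows hard-coded, already sorted by department as the Sorter guarantees):

public class RunningEmployeeList {
    public static void main(String[] args) {
        // (department_no, employee_name), sorted by department_no
        String[][] rows = {{"10", "A"}, {"10", "B"}, {"10", "C"}, {"10", "D"},
                           {"20", "P"}, {"20", "Q"}, {"20", "R"}, {"20", "S"}};

        String vList = null; // V_employee_list: kept across rows, never reset here
        for (String[] r : rows) {
            vList = (vList == null) ? r[1] : vList + "," + r[1];
            System.out.println(r[0] + ", " + vList);
        }
        // For question 91 (list restarts per department), reset vList to r[1]
        // whenever the department changes instead of always appending.
    }
}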

 

91.Design a mapping to load a target table with the following values from the above source?

department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, P
20, P,Q
20, P,Q,R
20, P,Q,R,S

Solution:
Step1: Use a sorter transformation and sort the data using the sort key as department_no and then pass the output to the expression transformation. In the expression transformation, the ports will be
department_no
employee_name
V_curr_deptno = department_no
V_employee_list = IIF(V_curr_deptno != V_prev_deptno, employee_name, V_employee_list||','||employee_name)
V_prev_deptno = department_no
O_employee_list = V_employee_list

Step2: Now connect the expression transformation to a target table.

92.Design a mapping to load a target table with the following values from the above source?

department_no, employee_names
10, A,B,C,D
20, P,Q,R,S

Solution: 
The first step is the same as in the above problem. Pass the output of the expression to an aggregator transformation and specify the group by as department_no. Now connect the aggregator transformation to a target table.

93. Consider the following product types data as the source.

Product_id, product_type
10, video
10, Audio
20, Audio
30, Audio
40, Audio
50, Audio
10, Movie
20, Movie
30, Movie
40, Movie
50, Movie
60, Movie

Assume that there are only 3 product types available in the source. The source contains 12 records and you don't know how many products are available in each product type.

Q1. Design a mapping to select 9 products in such a way that 3 products are selected from video, 3 from Audio, and the remaining 3 from Movie.

Solution:
Step1: Use sorter transformation and sort the data using the key as product_type.
Step2: Connect the sorter transformation to an expression transformation. In the expression transformation, the ports will be

product_id
product_type
V_curr_prod_type=product_type
V_count = IIF(V_curr_prod_type = V_prev_prod_type,V_count+1,1)
V_prev_prod_type=product_type
O_count=V_count

Step3: Now connect the expression transformation to a filter transformation and specify the filter condition as O_count<=3. Pass the output of filter to a target table.

Q2. In the above problem Q1, if the number of products in a particular product type is less than 3, then you won't get the total 9 records in the target table. For example, see the video type in the source data. Now design a mapping in such a way that even if the number of products in a particular product type is less than 3, the shortfall is taken from the other product types. For example: if the number of video products is 1, then the remaining 2 records should come from Audio or Movie. So the total number of records in the target table should always be 9.

Solution:
The first two steps are same as above.

Step3: Connect the expression transformation to a sorter transformation and sort the data using the key as O_count. The ports in the sorter transformation will be

product_id
product_type
O_count (sort key)

Step 4: Discard O_count port and connect the sorter transformation to an expression transformation. The ports in expression transformation will be

product_id
product_type
V_count=V_count+1
O_prod_count=V_count

Step 5: Connect the expression to a filter transformation and specify the filter condition as O_prod_count<=9. Connect the filter transformation to a target table.

 
 

 

 
94. How to process multiple flat files into a single target table through Informatica if all the files have the same structure?
(Use an indirect source file list, as described in question 86 above.)


95: How to Load only First and Last record of Source to Target.

http://informaticachamp.blogspot.in/2014/03/scenario-6-how-to-load-only-first-and.html

96: How to handle comma in a comma delimited file?

http://informaticachamp.blogspot.in/2014/03/scenario-16-how-to-handle-comma-in.html

97: How to achieve the target below?

(Source)

ID  | COUNTRY
101 | INDIA
102 | NEPAL
101 | AMERICA
103 | AFRICA
102 | JAPAN
103 | CHINA

(Target)

SID | ID  | COUNTRY
1   | 101 | INDIA
2   | 101 | AMERICA
1   | 102 | NEPAL
2   | 102 | JAPAN
1   | 103 | AFRICA
2   | 103 | CHINA

http://informaticachamp.blogspot.in/2014/03/scenario-10-how-to-assign-unique-id-for.html
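The linked approach (sort by ID, then restart a counter on each ID change) can be sketched in Java; the sort and previous-value comparison mirror a Sorter followed by an Expression with variable ports, and the port names in the comment are assumptions.

import java.util.*;

public class GroupWiseSequence {
    public static void main(String[] args) {
        String[][] rows = {{"101", "INDIA"}, {"102", "NEPAL"}, {"101", "AMERICA"},
                           {"103", "AFRICA"}, {"102", "JAPAN"}, {"103", "CHINA"}};

        // Sorter step: order by ID; the stable sort keeps source order per group.
        Arrays.sort(rows, Comparator.comparing((String[] r) -> r[0]));

        // Expression step: restart the counter whenever the ID changes,
        // e.g. V_sid = IIF(ID = V_prev_id, V_sid + 1, 1)
        String prevId = null;
        int sid = 0;
        for (String[] r : rows) {
            sid = r[0].equals(prevId) ? sid + 1 : 1;
            prevId = r[0];
            System.out.println(sid + "|" + r[0] + "|" + r[1]);
        }
    }
}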

 

98.How can we implement Reverse Pivoting using Informatica transformations?

Pivoting can be done using the Normalizer transformation. For reverse pivoting we need an Aggregator transformation that groups on the key and collapses the rows back into columns, as in the ‘Convert multiple Rows to single row’ example above.


 

99. SUPPOSE WE HAVE A COLUMN IN SOURCE WITH VALUES AS BELOW: 


 

99. SCENARIO IMPLEMENTATION 4 


 


 

100.Generate rows based on a column value – Informatica

I have the products table as the source and the data of the products table is shown below.

Table Name: Products
Product  Quantity
-----------------
Samsung  NULL
Iphone   3
LG       0
Nokia    4

Now I want to duplicate or repeat each product in the source table as many times as the value in the quantity column. The output is:

product  Quantity
----------------
Iphone   3
Iphone   3
Iphone   3
Nokia    4
Nokia    4
Nokia    4
Nokia    4

The Samsung and LG products should not be loaded, as their quantity is NULL and 0 respectively.

Now create an Informatica workflow to load the data into the target table.

Solution:

Follow the below steps

  • Create a new mapping in the mapping designer.
  • Drag the source definition into the mapping.
  • Create a Java transformation in active mode.
  • Drag the ports of the source qualifier transformation into the Java transformation.
  • Now edit the Java transformation by double-clicking on its title bar and go to the “Java Code” tab.
  • Enter the below Java code in the “Java Code” tab:
if (!isNull("quantity"))
{
  // emit the current row 'quantity' times; a quantity of 0 generates no rows
  for (int i = 1; i <= quantity; i++)
  {
    product = product;    // output ports carry the input values forward
    quantity = quantity;
    generateRow();        // generate one output row per loop iteration
  }
}
  • Now compile the Java code. The compile button is shown in a red circle in the image.
  • Connect the ports of the java transformation to the target.
  • Save the mapping, create a workflow and run the workflow.


 
 

101. Informatica Scenarios: Pivoting of records (Pivoting of Employees by Department)

In this scenario we will discuss how to pivot a set of records based on a column:

Source Records:

Dept_id | Emp_name
10      | CLARK
10      | KING
10      | MILLER
20      | JONES
20      | FORD
20      | ADAMS
20      | SMITH
20      | SCOTT
30      | WARD
30      | TURNER
30      | ALLEN
30      | BLAKE
30      | MARTIN
30      | JAMES

Expected Output

DEPT_ID  EMP_NAME
10       CLARK|KING|MILLER
20       JONES|FORD|ADAMS|SMITH|SCOTT
30       WARD|TURNER|ALLEN|JAMES|BLAKE|MARTIN

For this we can use the below pivot mapping. To get the required source records we have used the below source query:

select d.deptno, e.ename from emp e, dept d where e.deptno = d.deptno

Sort: the next step is to sort on Dept_id using a Sorter, or we can sort directly in the query.

In the expression we need to use the logic shown below and arrange the ports in the below order:

V_EMP_NAME = DECODE(V_DEPT_PREV, DEPT_ID, V_EMP_NAME||'|'||EMP_NAME, EMP_NAME)

O_EMP_NAME = V_EMP_NAME

V_DEPT_PREV = DEPT_ID

Group by dept_id: to group on the dept_id column we can make use of the AGGREGATOR.

When we group by DEPT_ID it returns the last row of each department group, and the target table will have the below records as expected:

SQL> SELECT * FROM PIVOT_TARGET;

DEPTNO  ENAME
10      CLARK|KING|MILLER
20      JONES|FORD|ADAMS|SMITH|SCOTT
30      WARD|TURNER|ALLEN|JAMES|BLAKE|MARTIN

 

102. I have a file which contains 2 columns like below:

col1   col2
1      A
2      Z
3      K
15     PG
49     KMC

There are many records like this, and we don't know how many there are or at what point they end.

We just know that the first column is an integer and the second is a string. Our requirement is to repeat the second column (i.e., the string value) as many times as the number in column 1.

Which means our result should be like this:

col1  col2
1     A
2     ZZ
3     KKK
15    PGPGPG……. (15 times)
49    KMCKMC……. (49 times)

A Java transformation can do this with a loop: for (int i = 0; i < col1; i++) out_port = out_port + col2; (the loop must run exactly col1 times).

This is not built into the tool. An EXP alone won't help, because the Informatica expression language has no loop construct; we need a transformation that can iterate, such as the Java transformation.

A Normalizer can be used if (and only if) the maximum number of potential repetitions is known in advance. For example, if you know that the first column in this file can never exceed a value of, say, 50, then you can do it using an EXP, a Normalizer, and a Filter transformation.

 

103. Design a mapping to convert column data into row data without using the normalizer transformation.
The source data looks like

col1, col2, col3
a, b, c
d, e, f

The target table data should look like

Col
a
b
c
d
e
f

Solution:

Create three expression transformations with one port each. Connect col1 from Source Qualifier to port in first expression transformation. Connect col2 from Source Qualifier to port in second expression transformation. Connect col3 from source qualifier to port in third expression transformation. Create a union transformation with three input groups and each input group should have one port. Now connect the expression transformations to the input groups and connect the union transformation to the target table.

104.Reverse the Contents of Flat File – Informatica

As an example consider the source flat file data as

Informatica Enterprise Solution
Informatica Power center
Informatica Power exchange
Informatica Data quality

The target flat file data should look as

Informatica Data quality
Informatica Power exchange
Informatica Power center
Informatica Enterprise Solution

Solution:

Follow the below steps for creating the mapping logic

  • Create a new mapping.
  • Drag the flat file source into the mapping.
  • Create an expression transformation and drag the ports of source qualifier transformation into the expression transformation.
  • Create the below additional ports in the expression transformation and assign the corresponding expressions
Variable port: v_count = v_count+1
Output port o_count = v_count
  • Now create a sorter transformation and drag the ports of expression transformation into it.
  • In the sorter transformation specify the sort key as o_count and sort order as DESCENDING.
  • Drag the target definition into the mapping and connect the ports of sorter transformation to the target.
 

Scenario: You have two columns in source table T1, in which col2 may contain duplicate values. All the duplicate values in col2 will be transformed into a comma-separated list in column col2 of target table T2.

Source Table: T1

Col1 | Col2
a    | x
b    | y
c    | z
a    | m
b    | n

Target Table: T2

col1 | col2
a    | x,m
b    | y,n
c    | z

Solution:

https://forgetcode.com/Informatica/1057-Decode-Value-Search-or-Nested-If-Else

  1. We have to use the following transformations: first connect a Sorter transformation to the source, with col1 as the key in ascending order. After that connect it to an Expression transformation.
  2. In the Expression transformation create four new ports and name them as in the picture below.
  3. In concat_val write an expression as described below (see the sketch after this list) and send it to an Aggregator transformation.
  4. In the Aggregator group by col1 and send the result to the target.
  5. Finally run the session.
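The exact concat_val expression appeared only in the original picture; a Java sketch of the intended logic, under that assumption (restart the list when col1 changes, append otherwise, and let the Aggregator keep the last row per group):

import java.util.*;

public class ConcatDuplicates {
    public static void main(String[] args) {
        String[][] rows = {{"a", "x"}, {"b", "y"}, {"c", "z"}, {"a", "m"}, {"b", "n"}};

        // Sorter step: order by col1 so duplicate keys are adjacent (stable sort).
        Arrays.sort(rows, Comparator.comparing((String[] r) -> r[0]));

        // Expression step: concat_val restarts on a new key, else appends.
        // Aggregator step: grouping by col1 keeps the last value per group.
        Map<String, String> result = new LinkedHashMap<>();
        String prevKey = null, concatVal = null;
        for (String[] r : rows) {
            concatVal = r[0].equals(prevKey) ? concatVal + "," + r[1] : r[1];
            prevKey = r[0];
            result.put(r[0], concatVal); // last row per group wins
        }
        result.forEach((k, v) -> System.out.println(k + " | " + v));
        // a | x,m   b | y,n   c | z
    }
}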

106. Create a workflow to load only the Fibonacci numbers into the target table. The target table data should look like:

 Id
1
2
3
5
8
13
…..

In Fibonacci series each subsequent number is the sum of previous two numbers. Here assume that the first two numbers of the Fibonacci series are 1 and 2.

Solution:

STEP1: Drag the source to the mapping designer and then, in the Source Qualifier transformation properties, set the number of sorted ports to one. This will sort the source data in ascending order, so that we get the numbers in sequence as 1, 2, 3, ….1000

STEP2: Connect the Source Qualifier Transformation to the Expression Transformation. In the Expression Transformation, create three variable ports and one output port. Assign the expressions to the ports as shown below.

Ports in Expression Transformation:
id
v_sum = v_prev_val1 + v_prev_val2
v_prev_val1 = IIF(id=1 or id=2,1, IIF(v_sum = id, v_prev_val2, v_prev_val1) )
v_prev_val2 = IIF(id=1 or id =2, 2, IIF(v_sum=id, v_sum, v_prev_val2) )
o_flag = IIF(id=1 or id=2,1, IIF( v_sum=id,1,0) )

STEP3: Now connect the Expression Transformation to the Filter Transformation and specify the Filter Condition as o_flag=1

STEP4: Connect the Filter Transformation to the Target Table.

107.SCENARIO IMPLEMENTATION


108. SCENARIO IMPLEMENTATION 1 

109. SCENARIO IMPLEMENTATION 2 

 


 

110. Explain SCD Type 3 through mapping.

SCD Type 3 Mapping

In SCD Type 3, two columns are kept for the tracked attribute: one holding the current value and one holding the previous value. It stores one level of historical data along with the current data.

  1. Up to the Router transformation, the procedure is the same as described in SCD Type 1.
  2. The difference comes after the Router: route new_rec to an Update Strategy with condition DD_INSERT, create a new primary key, and send the row to the target.
  3. Route old_rec to an Update Strategy with condition DD_UPDATE, so the existing target row is updated in place, moving the current attribute value into the previous-value column, and send it to the target.
  4. You can also create an effective_date column in the old_rec flow.