MIGRATION

To find out the node list:
sel distinct NodeId from DBC.ResUsageSpma;

To find out AMP count in the whole Teradata system:
sel hashamp()+1;

To list down all the AMP Ids (if ResUsageSvpr is in non-summary mode):
sel distinct VprId from DBC.ResUsageSvpr where VprType='AMP';

Minimum: 1
Maximum: 10 per node

How to find the number of nodes in my Teradata system using SQL Assistant?

Select count(distinct nodeid) from dbc.resusagescpu

And this one for number of Amps per node

Select nodeid, count(distinct Vproc) from dbc.ResCpuUsageByAmpView group by 1

For finding out AMPs: SELECT HASHAMP()+1;

select distinct(vproc) from dbc.diskspace

Select Count( distinct vproc) from dbc.AmpUsage

Select Distinct
    NodeID
    ,NodeType
    ,VProcType1 || ': ' || TRIM(VProc1) AS AMPs
    ,VProcType2 || ': ' || TRIM(VProc2) AS PEs
    ,VProcType3 || ': ' || VProc3 AS GTW
    ,VProcType4 || ': ' || VProc4 AS RSG
    ,VProcType5 || ': ' || VProc5 AS TVS
    ,VProcType6 || ': ' || VProc6 AS VProc6
    ,VProcType7 || ': ' || VProc7 AS VProc7
From DBC.ResUsageSpma
Order By NodeID

Log on to the DBS window (the operator console in Viewpoint) and type the command get config. This displays the configuration, showing the node ID, state (online or standby), number of CPUs, number of AMPs in the node, etc.

I am using the SQL below to count the number of PEs, AMPs, nodes, gateways, and TVS vprocs:

Select VprType, Count(Distinct VprId) From DBC.ResUsageSvpr
Group By 1;

The number of AMPs in the system is easy to get:

SELECT HASHAMP() + 1;

Once you have access to the dbc.ResUsage tables, you can find the number of AMPs for each node using:

SELECT DISTINCT NodeID, Vproc1 AS "#AMPs", Vproc2 AS "#PEs"
FROM  dbc.resusagespma
WHERE TheDate = CURRENT_DATE

A maximum of 10 PEs per node is permissible, but 2 PEs per node typically gives optimal performance.

One PE supports 128 sessions, of which 8 are reserved for internal use, so 128 - 8 = 120 sessions per PE are usable by users.

SEL DISTINCT Vproc AS AMPs
FROM DBC.DiskSpace;

You could get similar information from the ResUsage-related tables if you have ResUsage enabled.

Thus there's no fixed ratio between PEs and AMPs; typically you get 2 PEs per node and enough AMPs to keep your hardware (CPU and disks) busy. Dieter

select count(distinct vproc) from dbc.diskspace

select hashamp()+1

Select Count(distinct vproc) from dbc.AmpUsage

No. of Nodes:

Select count(distinct nodeid) from dbc.resusagescpu

The number of AMPs per node is determined by the available CPU and IO.

Usually it's approximately 1 AMP per logical CPU, and each AMP is assigned 2*2 mirrored (RAID 1) disks (plus possibly a quarter of an SSD).

Simply adding AMPs will not improve performance, because you would need to add disks too, and the system is already using 100% of its resources.

To get more AMPs you add a new node with new CPU/disks.

To find AMPs, PEs, and nodes from DBC:

1. To find the number of nodes:

Select count(distinct nodeid) from dbc.resusagescpu

sel count(distinct VprId) from DBC.ResUsageSvpr where VprType='NODE';

And this one for number of Amps per node

Select nodeid, count(distinct Vproc) from dbc.ResCpuUsageByAmpView group by 1

2. To find the number of vprocs:

Select count(distinct vproc) from dbc.diskspace

3. To find the number of AMPs:

sel distinct VprId from DBC.ResUsageSvpr where VprType='AMP';

sel hashamp()+1;

Select count(distinct vproc) from dbc.ampusage

sel count(distinct VprId) from DBC.ResUsageSvpr where VprType='AMP';

  4. To find the number of PEs:

select count(t1.ProcId) Nodes, sum(t1.AmpCount) Amps, sum(t1.PECount) PEs
from table (MonitorPhysicalResource()) as t1
where t1.Status = 'U';

sel count(distinct VprId) from DBC.ResUsageSvpr where VprType='PE';

A Hot Standby Node (HSN) is a node that is a member of a clique but is not (initially) configured to execute any Teradata vprocs. If a node in the clique fails, the AMPs from the failed node move to the hot standby node. The performance degradation is 0%.

When the failed node is recovered/repaired and restarted, it becomes the new hot standby node. A second restart of Teradata is not needed.

Characteristics of a hot standby node are:

  • A node that is a member of a clique.
  • Does not normally participate in the trusted parallel application (TPA).
  • Can be brought into the configuration when a node fails in the clique.
  • Helps with unplanned outages.
  • Eliminates the need for a restart to bring a failed node back into service.

Overview of the Upgrade Procedure Phases

The upgrade planning process includes scheduling of activities that must occur across multiple phases of the upgrade process, beginning one to two months prior to the actual upgrade, and completing a day or two following the upgrade.

These phases include:

  • Deciding on the upgrade target and method
  • Creating a Change Control plan for the upgrade
  • Preparing the system for the upgrade
  • Preparing the Teradata Database for the upgrade
  • Preparing for the upgrade procedure
  • Performing the upgrade
  • Performing post-upgrade activities

Each PE can support 120 user sessions. If you expect to support 500 simultaneous sessions, you will need at least 5 PEs.
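As a quick arithmetic check of the sizing above (the 500-session workload is just an example figure, not from any particular system):

```sql
-- 500 sessions / 120 usable sessions per PE, rounded up, gives the PE count
SELECT CAST(CEILING(500 / 120.00) AS INTEGER) AS PEs_needed;
```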

Creating an Upgrade Plan

The upgrade plan outlines the implementation, test, backout, and recovery plans for the upgrade process. Subsequent chapters in this document provide more information that will help you create the plan. Your Teradata CSR creates the upgrade plan for Business Critical and SWI customers.

  • Updates to system components, such as the Administration Workstation (AWS), node memory, or firmware
  • Implementation of a Teradata Open Backup solution, if not already in place
  • Upgrading to the latest required Teradata Tools and Utilities
  • Third-party applications that communicate with the database, such as BusinessObjects, MicroStrategy, etc., must be compatible with the Teradata Database software
  • Teradata applications that communicate with the Teradata database, such as CRM, TMSM, Viewpoint, etc.

Overview of the Upgrade Timeline

One to Three Months Prior to the Planned Upgrade

Once you have determined the Teradata Database upgrade target and process, you will need between 4 and 12 weeks to prepare for the upgrade. During this period, your system will be analyzed to determine if any changes need to be made prior to the upgrade, and you will schedule these changes to be implemented.

• Open a Change Control incident and obtain a Change Control number

• Determine if the CMIC needs to be upgraded; if so, schedule the upgrade well in advance

• Implement an Open Teradata Backup solution, if not already implemented

• Install or upgrade to the latest set of required Teradata Tools and Utilities

• Upgrade firmware on all systems to the latest Teradata certified releases

• Examine the memory size of the nodes to determine if they meet the requirement for the target release

• Obtain the required software media

• Install or upgrade to the latest released version of TDput, PUTSRC, and PUTTools

• Run the following to determine whether the system and database are prepared for the upgrade:

• PUT System Readiness Check (SRC)

• Pre-Upgrade Preparation script (found in PUTTools package)

• Run the Cylinder Index Space Test

• Run SCANDISK and CheckTable to validate the structure of the database files and tables.

• Run the Upgrade Estimator tool to estimate the time required for the upgrade

• If running 3rd party applications on the TPA nodes, verify compatibility with the Teradata versions you are upgrading to

• If running any Teradata applications such as TCRM, verify compatibility with the Teradata version you are upgrading to

Any problems or issues found during this phase must be corrected or addressed. If changes to the system are required, you must allocate sufficient time to make these changes before performing the upgrade.

One Week Prior to the Planned Upgrade

• Consult with the system administrator to schedule system downtime, and notify users

• Run SCANDISK and CheckTable again, if not performed in the past 30 days

• Obtain sufficient tape media for a data archive

• Ensure sufficient free space is available in the node file system

• Ensure you have installed or upgraded to the latest version of TDput, PUTSRC, and PUTTools

One Day Prior to the Planned Upgrade

• Archive user data

• Save Crashdumps

Immediately Prior to the Upgrade

Just before you begin the actual upgrade process, you must perform these activities:

• Ensure all ETL operations are complete.

• Back up system disks

• Stop Replication Services

• Ensure you have installed or upgraded to the latest version of PUT

• Stop TDPs

During the Upgrade

• Install the latest drivers for the Host Channel Adapter

• Run the first phase of the PUT Install/Upgrade Software Operation

• Stop the Teradata Database

• Run the second phase of the PUT Install/Upgrade Software Operation

• Verify the upgrade succeeded and the status of the system

Immediately After the Upgrade

• Set MAXPERM for CRASHDUMPS database (major upgrades only)

• Perform any system-specific activities required by the upgrade

• Back up system disks

• Run SCANDISK and CheckTable again

• Resync MDS

• Restart Replication Services

• Enable Logons

• Enable TDPs

• Return the system to the users

• Run cleanup script

• Run an Asset Baseline Report

• Finish Load/Unload operations: Checks to make sure all load and unload data transfer jobs are completed or stopped before an upgrade can be performed.

• Run pre-upgrade preparation script: Combines several inspections into one convenient script. It inspects the Teradata Database to find problems that could interrupt the upgrade. It also runs the queries needed for the Upgrade Estimator Tool.

• Save DBQL log data: Saves existing log data to a new user table before the upgrade so that they are preserved afterwards.

• Save RSS data: Saves Resource Sampling Subsystem (RSS) tables, which will be re-created during the normal upgrade process.

• Save crashdumps: Saves crashdumps from the CRASHDUMPS database. Existing crashdumps tables will not be compatible with the new release and cannot be used after the upgrade.

• Set node windows for reboot: Monitors node reboots on systems.

• Verify BAR utilities are not running: Verifies that Backup and Recovery (BAR) utilities are not running during the upgrade.

• Run PUT's Install/Upgrade Software Operation: Runs a program that performs many of the required upgrade steps without manual intervention. There is some manual interaction with PUT during the operation, but most of the activities are performed by PUT without interaction.

• Bring down TDPs (start of system downtime): Applies only to a host with a channel connection. Stops the Teradata Director Programs (TDPs) running on the mainframes or network so that the system is completely quiescent and isolated before doing a reboot.

• Bring up Teradata Database: Starts the database so that the conversion preparation script in the PUT DBS Conversion Prep screen can be run.

• Check for table locks: Verifies that the DBS is quiet and ready for the upgrade.

• Verify no pending operations: Verifies that there are no load/unload data transfer jobs running.

• Conversion preparation and reboot (point of no return): Runs the conversion preparation script from PUT's DBS Conversion Prep screen, after which the system is rebooted.

• Monitor node reboots: Watches the nodes as they reboot.

• Verify running versions: Verifies that the correct versions of PDE, DBS, RSG, TGTW, HOST, and TDGSS are running.

• Converting the DBS: Starts the PDE on the new Teradata Database version (for major/minor upgrades only).

• Verify new versions: Verifies that the correct versions of PDE, DBS, RSG, TGTW, HOST, and TDGSS are running, that the PDE is in TPA state, and that the system is quiescent.

• Running Database Initialization Program (DIP): Installs the new patches of this upgrade into the Teradata Database. For major upgrades the DBS conversion script runs DIP automatically; DIP must be run manually for minor, maintenance, and efix upgrades.

• Restarting the DBS after DIP: Starts the DBS after you run DIP.

• Upgrade channel adapter firmware: Stops the DBS, installs new channel device firmware, then starts PDE and the DBS again.

• Run SCANDISK: Runs the SCANDISK utility.

• Run CheckTable: Runs the CheckTable utility.

Here are the ones I have used so far:

-Record Count

SELECT COUNT(*) FROM table_name;

-Sum columns

SELECT SUM(column_name) FROM table_name;

-Check for Null

SELECT column_name FROM table_name
      WHERE column_name IS NULL


Say in the "source system" and "target system" you have two tables. Something like:

SELECT * FROM t1 WHERE NOT EXISTS (SELECT * FROM t2 WHERE t2.column_name = t1.column_name)

Or find records which exist in the source table but not in the target table, as well as records which exist in the target table but not in the source table:

SELECT * FROM (
    SELECT Id, column_name, 'old' AS src FROM t1
    UNION ALL
    SELECT Id, column_name, 'new' AS src FROM t2
) t
ORDER BY Id;

Use SUM(CHECKSUM(concatenation_of_my_fields)) on both sides to get a sense of whether you really have the same data.
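Note that CHECKSUM is not a Teradata scalar function; on Teradata a comparable per-table fingerprint can be built from HASHROW and HASHBUCKET. A minimal sketch, where src_tbl, col1, and col2 are hypothetical names:

```sql
-- Coarse table fingerprint: sum the hash-bucket numbers of each row's hash.
-- src_tbl, col1, col2 are placeholders. Run the same query against source
-- and target and compare the sums; equal sums suggest, but do not prove,
-- identical data.
SELECT SUM(CAST(HASHBUCKET(HASHROW(col1, col2)) AS BIGINT)) AS tbl_fingerprint
FROM src_tbl;
```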