Hana Smart Data Integration – Architecture


This post is part of a larger series:

Hana Smart Data Integration – Overview

The SDI product consists of three main building blocks: the Hana Index Server, the Data Provisioning Server, and the Data Provisioning Agent.

While the first two are processes of the Hana instance itself, the agent is an external process and as such can be installed anywhere.

SDI architecture.png

SDA use case

Because the architecture involves quite a few components, it is best to start with an example and follow its way through all the steps.

The user executes a SQL statement like “select columnA, substring(columnB, 1, 1) from virtual_table where key = 1“.

SDI architecture - SDA.png

  1. This SQL statement is part of a user session in Hana and enters the SQL Parser. The first thing the parser needs is the required metadata – does a virtual table of that name even exist, what are its columns, etc. All of this is stored in data dictionary tables in Hana.
  2. The SQL Optimizer tries to push down as much of the logic as possible. To make adapter development simpler, it cannot simply push down the full SQL but rather looks at the metadata to see what kinds of statements the adapter declared to support. In this example the adapter is a very simple one: all it supports are select statements reading all columns, no functions, and simple equality where clauses. Hence the optimizer will rewrite the statement into something like “select columnA, substring(columnB, 1, 1) from (select * from virtual_table where key = 1)“. This statement returns the same result as the original one, but now it becomes obvious which parts are executed inside the adapter and which have to be executed in Hana. The inner “select * where …” is sent to the adapter, and the adapter will return the row with key = 1 but with all columns. Hana will then take that row, read columnA only and return its value plus the result of the substring function to the user.
  3. The SQL Executor is responsible for getting the actual data, so it tells the Federation Framework to retrieve the data of the optimized SQL.
  4. The Federation Framework is responsible for combining the data coming from Hana and the remote system. In this simple example the SQL select reads remote data only, so it calls the appropriate methods in the adapter: an open call to establish the connection to the source (if not done already for that session), providing the inner SQL to the adapter by calling the executeStatement method, and then calling the fetch method until there is no more data.
  5. Because the Federation Framework cannot talk over the network to that remote adapter directly, it has a Delegator for the Federation Framework. This component calls the equivalent methods in the Adapter Framework of the Data Provisioning Server.
  6. The Adapter Framework itself sends the command over the network to the agent, where the corresponding methods in the Adapter are called. The responsibility of that component is to route the commands to the correct adapter and to deal with any error situations, like the agent cannot be reached, etc.
  7. The adapter acts as a bridge. It gets an open call with all the parameters provided by the remote source object, upon which it should open a connection to the source system. It receives the SQL command in the executeStatement method and translates it into the equivalent call for the given source. When its fetch method is called, the adapter should return the next batch of data, reading the source data and translating the values into Hana datatype values.
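The calling sequence of steps 4 to 7 can be sketched as a minimal simulation. The class and method names below mirror the open/executeStatement/fetch contract described above, but they are hypothetical stand-ins, not the actual Adapter SDK API (which is Java):

```python
# Sketch of the Federation Framework driving an adapter: open() once per
# session, executeStatement() with the pushed-down inner SQL, then fetch()
# until the adapter signals end-of-data.

class ToyAdapter:
    """Hypothetical stand-in for an SDI adapter."""

    def __init__(self, source_rows):
        self.source_rows = source_rows
        self.connected = False
        self.result = None

    def open(self):
        # Establish the connection to the source system.
        self.connected = True

    def execute_statement(self, inner_sql):
        # A real adapter translates the inner SQL ("select * ... where key = 1")
        # into the source's own query language. Here we just filter on key = 1.
        assert self.connected
        self.result = [row for row in self.source_rows if row["key"] == 1]

    def fetch(self, batch_size=2):
        # Return the next batch of rows; an empty list means no more data.
        batch, self.result = self.result[:batch_size], self.result[batch_size:]
        return batch


def federate(adapter, inner_sql):
    """Sketch of the Federation Framework: push the inner query down to the
    adapter, then apply the remaining projection/function on the Hana side."""
    adapter.open()
    adapter.execute_statement(inner_sql)
    rows = []
    while True:
        batch = adapter.fetch()
        if not batch:
            break
        rows.extend(batch)
    # Hana-side part: read columnA and compute substring(columnB, 1, 1).
    return [(r["columnA"], r["columnB"][0:1]) for r in rows]


source = [
    {"key": 1, "columnA": "alpha", "columnB": "beta"},
    {"key": 2, "columnA": "gamma", "columnB": "delta"},
]
print(federate(ToyAdapter(source), "select * from virtual_table where key = 1"))
```

The split is the important part: the adapter only sees the simple inner select it declared it can handle, while the substring function runs in Hana on the returned row.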

Realtime push

The definition of which remote tables the user wants to subscribe to follows the same path as SDA from steps 1 to 5. A SQL statement is executed – create remote subscription on … – and all the validations, like does the table exist, does the adapter support the required capabilities, etc., are performed.
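The activation flow that follows can be sketched as a tiny state machine. The state names and the distribute step that ends the queue phase are simplified assumptions based on the product behavior, not the exact Hana syntax:

```python
# Toy state machine for a remote subscription, mirroring the queue flow
# described in the text: create, then queue (start capturing changes while
# the initial load runs), then switch to live apply.

class RemoteSubscription:
    def __init__(self, name):
        self.name = name
        self.state = "CREATED"

    def queue(self):
        # "alter remote subscription ... queue": start the subscription;
        # refuse to start an already started one.
        if self.state != "CREATED":
            raise RuntimeError(f"subscription {self.name} already started")
        self.state = "QUEUED"

    def distribute(self):
        # Once the initial load is finished, apply the queued backlog
        # and switch to continuous apply.
        if self.state != "QUEUED":
            raise RuntimeError(f"subscription {self.name} is not queued")
        self.state = "DISTRIBUTED"


sub = RemoteSubscription("sub1")
sub.queue()
sub.distribute()
print(sub.state)
```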

SDI architecture - Realtime.png

  1. The interesting part begins when the subscription is made active. With the “alter remote subscription … queue” command the Realtime Provisioning Client is told to start the subscription. There the first checks are performed, e.g. preventing the start of an already started subscription.
  2. The Realtime Provisioning Manager inside the DP Server decides what needs to be done in order to get the change data. Basically this means one of two things: either telling the adapter to start sending the changes or, if the adapter is already sending the changes for another subscription and its data can be reused, simply consuming that stream as well.
  3. If the adapter has to be notified about the request for change data, the Adapter Framework forwards that request to the agent and from there to the adapter.
  4. The adapter then does whatever needs to be done in order to capture the requested changes. This is really source specific: for databases it might mean reading the database transaction log, for other sources it could be implemented as a listener with the source pushing changes, and in the worst case the adapter has to periodically check for changes in the source. From then on the adapter keeps sending all change information for the requested data back to the Adapter Framework and from there into the Change Data Receiver.
  5. The Change Data Receiver has to deal with various situations. If the subscription is still in queue state, then the initial load of the table is in progress and hence no change information should be loaded into the target yet; the receiver has to remember these rows somewhere, the Change Data Backlog Store. In case the source adapter does not support re-reading already sent data, all received data is put into that Backlog Store as well, to allow the source to send more data even if the data has not yet been committed in Hana. In all other cases the receiver provides the data to the Applier for processing.
  6. The Change Data Applier loads all data into Hana in the proper order, using the same transactional scope in which the data was changed in the source. It is the Applier that deals with the case where one change record is used for multiple subscriptions and then loads the data into all targets. In case the target is a table, it interprets the received opcode (insert, update, delete, …) and performs the proper action on the target table. In case of a Task, it makes the incoming change set unique per primary key before sending the data to the Task object (insert+update+update = insert row with the values of the last update statement).
  7. The Applier currently creates regular SQL statements like insert…select from ITAB; or start task using parameter…; and these statements are executed by the Index Server like any other SQL statements.
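The two Applier behaviors from step 6 can be sketched in a few lines. This is a simplified assumption of the logic, not the actual implementation; in particular the delete handling is reduced to the bare minimum:

```python
# Sketch of the Applier: applying opcodes to a table target, and compacting
# a change set to one row per primary key before handing it to a Task.

def apply_to_table(table, changes):
    """table: dict mapping primary key -> row dict;
    changes: list of (opcode, key, row) in source commit order."""
    for opcode, key, row in changes:
        if opcode == "insert":
            table[key] = dict(row)
        elif opcode == "update":
            table[key].update(row)
        elif opcode == "delete":
            table.pop(key, None)
    return table


def compact_for_task(changes):
    """Make the change set unique per primary key:
    insert+update+update collapses into a single insert carrying the
    values of the last update."""
    merged = {}
    for opcode, key, row in changes:
        if key not in merged:
            merged[key] = [opcode, dict(row)]
        elif opcode == "delete":
            merged[key] = ["delete", {}]
        else:
            merged[key][1].update(row)
    return [(op, key, row) for key, (op, row) in merged.items()]


changes = [
    ("insert", 1, {"a": 1, "b": "x"}),
    ("update", 1, {"b": "y"}),
    ("update", 1, {"b": "z"}),
]
print(compact_for_task(changes))
```

Run against the example from the text, insert+update+update on key 1 collapses into one insert whose values come from the last update, which is exactly what a Task expects as input.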

Data Dictionary Metadata

One important aspect of the architecture is that all, really all, information is stored in Hana itself and nothing in the adapter. The reason is simple: there can be multiple agents for the same source system for failover, the adapter/agent might stop working and upon restart needs to know where to pick up the work, or the adapter might be reinstalled.


All this data is stored in Hana tables and can be quite interesting when debugging a problem. Usually these tables are not queried directly; instead a public synonym pointing to a view, which has the row level security implemented inside, is used. Here is a list of such objects:

  • AGENTS: Returns the list of all known Data Provisioning Agents and how to reach them.
  • ADAPTERS: Returns the list of adapters known to the Hana database. New entries are added whenever an agent deploys an adapter not previously known. When one of these adapters has the flag IS_SYSTEM_ADAPTER = true, it is an adapter based on ODBC and executed by the Index Server. All other adapters are the SDI adapters.
  • ADAPTER_LOCATIONS: As one adapter can be hosted on one agent but not another, or on multiple agents, this table contains the assignments.
  • REMOTE_SOURCES: For each created remote source one row is returned.
  • VIRTUAL_TABLES: All created virtual tables can be found in here.
  • VIRTUAL_TABLE_PROPERTIES: Additional metadata the adapter requires at runtime is stored in Hana and can be seen via this view.
  • VIRTUAL_COLUMNS: The columns of each virtual table.
  • VIRTUAL_COLUMN_PROPERTIES: Additional metadata can be added to columns as well.
  • REMOTE_SUBSCRIPTIONS: The list of all remote subscriptions and their state.
  • REMOTE_SUBSCRIPTION_EXCEPTIONS: In case a subscription has an error, the exception number and the reason can be found here, and using the exception id a recovery can be triggered manually.
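A quick health check on the realtime side is to join the subscription view with its exception view. The sketch below only assembles the SQL text; the view names come from the list above, but the selected column names are assumptions to be verified against your Hana release:

```python
# Build a monitoring query over the public metadata views listed above.
# Column names are illustrative assumptions, not verified catalog columns.

def subscription_health_sql():
    return (
        "select s.subscription_name, s.state, e.exception_oid, e.error_message\n"
        "from REMOTE_SUBSCRIPTIONS s\n"
        "left outer join REMOTE_SUBSCRIPTION_EXCEPTIONS e\n"
        "  on e.subscription_name = s.subscription_name"
    )

print(subscription_health_sql())
```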

The SQL statements used can be found here: Hana Adapter SDK – Interaction via SQL



Using the above metadata tables, all monitoring can be done with plain SQL commands. But there is also a Hana Cockpit based set of screens, in a separate delivery unit found on Service Marketplace. It is very helpful to install these instead; in addition, they show the status of the Task framework as well, the calculation engine based data transformation framework that is part of the SDI solution.

Agent Configuration Tool

To help with setting up the agent and the adapters, the Agent Configuration Tool, part of any Agent installation, can be used. It executes the SQL commands for adding agents, adapters, etc. and edits the local configuration files.

SAPDB Adapters

The adapters explained above were the SDI adapters. If the ODBC based adapters of the Index Server are used, then no Data Provisioning Server, Agent, etc. is required. Instead the Federation Framework accesses these ODBC adapters via the SAPDB Adapters component. Although more and more adapters will be moved to SDI, this path will continue to exist, but for SAP owned databases only – Sybase ASE and IQ for example. Otherwise all these databases would be going through multiple hops, and an SDK does introduce limitations of some kind.

C++ Adapters

Speaking of multiple hops and the SDK, one question might be why the Agent is required even when the source is local. For this case there is a C++ version of the SDK available as well, and such an adapter can be accessed by the Data Provisioning Server directly. The only adapter implemented currently is the OData adapter. And as this SDK is the universal Adapter SDK, such an adapter could be deployed on the Agent as well. Then a local source would be accessed directly via the DP server, a remote source via the agent.

The commonly used Adapter SDK is the Java version, however, and as Hana is written in C++ it cannot run the Java code directly; there has to be some inter-process communication, which the Agent handles. Hence for a scenario where the source is local, installing the Agent on the Hana box could be an option, but the Agent is still required.

