SAP ETL Questions (Part - 2)


51. What is I_T_FIELDS?
- List of the transfer structure fields. Only these fields are actually filled in the data table and can be sensibly addressed in the program.
52. What is C_T_DATA?
- Table with the data received from the API, in the format of the source structure entered in table ROIS (field ROIS-STRUCTURE).
53. What is I_UPDMODE?
- Transfer mode as requested in the scheduler of the BW system. Not normally required.
54. What is I_T_SELECT?
- Table with the selection criteria stored in the scheduler of the SAP BW system. Not normally required.
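The interplay of these parameters is easiest to picture with a simplified model. The sketch below is plain Python, not ABAP, and all field names and records are invented for illustration: it mimics how an extractor applies the selection criteria from I_T_SELECT and then returns, in the C_T_DATA-style result, only the fields listed in I_T_FIELDS.

```python
# Conceptual model of an extractor call (illustrative only, not SAP code).

# I_T_SELECT-style entries: field name, sign, option, low value
i_t_select = [{"FIELDNM": "PLANT", "SIGN": "I", "OPTION": "EQ", "LOW": "1000"}]

# I_T_FIELDS-style list: only these transfer-structure fields are returned
i_t_fields = ["DOC_NO", "PLANT", "AMOUNT"]

# Source data in the format of the extract structure (invented records)
source_rows = [
    {"DOC_NO": "4711", "PLANT": "1000", "AMOUNT": 150.0, "INTERNAL_FLAG": "X"},
    {"DOC_NO": "4712", "PLANT": "2000", "AMOUNT": 80.0,  "INTERNAL_FLAG": ""},
]

def matches(row, selections):
    """Apply the (simplified) selection criteria: keep rows where every
    inclusive equality selection matches the row's field value."""
    return all(row[s["FIELDNM"]] == s["LOW"]
               for s in selections if s["SIGN"] == "I" and s["OPTION"] == "EQ")

# C_T_DATA-style result: selected rows, restricted to the requested fields
c_t_data = [{f: row[f] for f in i_t_fields}
            for row in source_rows if matches(row, i_t_select)]

print(c_t_data)   # [{'DOC_NO': '4711', 'PLANT': '1000', 'AMOUNT': 150.0}]
```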
55. What are the different update modes?
- Direct Delta: extraction data from document postings is transferred directly to the BW delta queue.
- Queued Delta: extraction data from document postings is collected in an extraction queue, from which a periodic collective run transfers the data to the BW delta queue.
  - The transfer sequence and the order in which the data was created are the same in both Direct and Queued Delta.
- Unserialized V3 Update: the extraction data is written to the update tables and then transferred to the BW delta queues without taking the sequence into account.
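A rough way to picture the three modes is sketched below in plain Python. The queue names, postings and the "collective runs" are invented stand-ins; the real LO update logic is far more involved, but the routing of postings differs in the same way.

```python
from collections import deque

postings = ["doc1", "doc2", "doc3"]      # document postings, in creation order
bw_delta_queue = deque()                 # what the BW delta queue would receive
extraction_queue = deque()               # used only by queued delta
update_tables = []                       # used only by unserialized V3

def direct_delta(doc):
    bw_delta_queue.append(doc)           # written straight to the delta queue

def queued_delta(doc):
    extraction_queue.append(doc)         # collected in the extraction queue first

def collective_run():
    while extraction_queue:              # periodic run keeps the original order
        bw_delta_queue.append(extraction_queue.popleft())

def unserialized_v3(doc):
    update_tables.append(doc)            # written to the update tables

def v3_collective_run():
    # transferred without guaranteeing the document sequence
    # (reversed here purely to illustrate that the order may differ)
    while update_tables:
        bw_delta_queue.append(update_tables.pop())

for doc in postings:
    queued_delta(doc)
collective_run()
print(list(bw_delta_queue))              # ['doc1', 'doc2', 'doc3'] - order preserved
```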
56. What are the different ways of data transfer?
- Full Update: all the data from the InfoStructure is transferred according to the selection criteria defined in the scheduler in SAP BW.
- Delta Update: only the data that has been changed or is new since the last update is transferred.
57. Which object connects aggregates and the InfoCube?
- The read pointer connects aggregates and the InfoCube. It can be viewed in table RSDDAGGRDIR, in field RN_SID. Whenever we roll up data, it stores the number of the last request rolled up, and the next roll-up starts from the following request. Follow this table for a particular InfoCube when rolling up the data.
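Conceptually the read pointer just remembers the last request that was rolled up, so the next roll-up only processes newer requests. A minimal sketch with invented request IDs (the real mechanism works with SIDs in RSDDAGGRDIR, not plain integers):

```python
infocube_requests = [101, 102, 103, 104]   # requests loaded into the InfoCube, in order
read_pointer = 102                          # last request already rolled up (RN_SID-like marker)

def roll_up(requests, pointer):
    """Roll up only the requests newer than the read pointer and advance it."""
    new_requests = [r for r in requests if r > pointer]
    for r in new_requests:
        print(f"rolling request {r} into the aggregate")
    return max(new_requests, default=pointer)   # new read pointer

read_pointer = roll_up(infocube_requests, read_pointer)
print(read_pointer)   # 104
```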
58. What is switching an aggregate on and off? How do we do that?
- When we switch off an aggregate, it is not available to supply data to queries, but the data remains in the aggregate, so if required we can switch it on again and update it instead of re-aggregating all the data. However, if we deactivate an aggregate, it is not available for reporting and we also lose the aggregated data, so when we activate it again the aggregation starts anew. To switch an aggregate, select it and choose Switch On/Off (the red and green button). An aggregate that is switched off is marked in the Filled/Switched off column with a grey button.
59. While creating aggregates the system gives a manual or automatic option. What are these?
- If we select the automatic option, the system proposes aggregates based on the BW statistics, i.e. how many times the InfoCube is used to fetch data, etc. Otherwise we can manually select the dataset that should form the aggregate.
60. What are the options when defining aggregates?
- Manual
- Automatic
61. What are aggregates and when are they used?
- An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form. Aggregates make it possible to access InfoCube data quickly in reporting (see the sketch below). Aggregates can be used in the following cases:
  - To speed up the execution and navigation of a specific query.
  - When attributes are used often in queries.
  - To speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.
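The idea of "saved redundantly in a consolidated form" is essentially pre-grouping. A small sketch with invented fact rows: the aggregate stores the data already summed by the characteristics a query needs, so the query reads far fewer rows than the full fact table.

```python
from collections import defaultdict

# Invented InfoCube fact rows: (customer, material, month, revenue)
fact_rows = [
    ("C1", "M1", "2024-01", 100.0),
    ("C1", "M2", "2024-01", 250.0),
    ("C2", "M1", "2024-02", 300.0),
    ("C1", "M1", "2024-02", 120.0),
]

# "Aggregate" on customer only: a persisted, consolidated view of the cube data
aggregate_by_customer = defaultdict(float)
for customer, _material, _month, revenue in fact_rows:
    aggregate_by_customer[customer] += revenue

# A query that only asks for revenue per customer can read the aggregate
# (2 rows) instead of the full fact table (4 rows).
print(dict(aggregate_by_customer))   # {'C1': 470.0, 'C2': 300.0}
```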
62. When I run an init load and it fails, what should I do?
- Deletion of an initial load can be done in the InfoPackage. First set the QM status of the request to red if that has not been done yet, then delete the request from all data targets. After that, go to the InfoPackage and choose Scheduler -> Initialization Options for the Source System from the menu. There you should see your red request. Mark it and delete it. Confirm the deletion prompt and the subsequent information message. The request should now be deleted from the initialization options, and you can run a new init.
You can also run a repair request, which is a full request. With this you correct the data in the data target after failed deltas or a wrong init. You do this in the InfoPackage too: choose Scheduler -> Repair Full Request from the menu. But if you want to use init/delta loads, you have to make a successful init first.
63. What are inverted fields in a DataSource?
- They allow reverse posting: the field value is multiplied by -1.
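In effect, an inverted field flips the sign of the key figure when a reversal is posted. A tiny illustration with hypothetical field names:

```python
def apply_inversion(record, inverted_fields):
    """Return a copy of the record with the inverted key figures multiplied by -1."""
    return {field: (-value if field in inverted_fields else value)
            for field, value in record.items()}

original = {"QUANTITY": 10, "AMOUNT": 99.50}
print(apply_inversion(original, inverted_fields={"QUANTITY", "AMOUNT"}))
# {'QUANTITY': -10, 'AMOUNT': -99.5}
```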
64. What are setup tables and why should we delete the setup tables before extraction?
- Setup tables are filled with data from the application tables, i.e. the OLTP tables storing the transaction records. They act as an interface between the application tables and the extractor. The LO extractor takes data from the setup tables during initialization and full upload, so it does not need to access the application tables for data selection. Since setup tables are required only for full and init loads, we can delete their data after loading in order to avoid duplicate data.
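The flow can be pictured as: application tables -> setup tables (setup run) -> BW (init/full load), after which the setup tables can be emptied. A conceptual Python sketch with invented document data, not actual SAP behaviour:

```python
application_tables = [                        # OLTP tables with transaction records
    {"doc": "A1", "amount": 100},
    {"doc": "A2", "amount": 200},
]
setup_tables = []

def setup_run():
    """Fill the setup tables from the application tables (statistical setup)."""
    setup_tables.clear()                      # delete old contents first -> no duplicates
    setup_tables.extend(application_tables)

def init_load_to_bw():
    """The init/full load reads from the setup tables, not the application tables."""
    extracted = list(setup_tables)
    setup_tables.clear()                      # no longer needed once the init is done
    return extracted

setup_run()
print(init_load_to_bw())                      # the records that reach BW
print(application_tables)                     # application tables keep their data
```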
65. What are the setup tables? Why use setup tables?
- In the LO extraction mechanism, when we fill the setup tables the data is written to them in the format of the extract structure. When we schedule an InfoPackage with Full or Init Delta from BW, the data is picked from the setup tables.
66. When filling the setup tables, is there any need to delete them first?
- Yes. By deleting the setup tables we delete the data left in them from the previous update. This avoids updating the same records twice into BW.
67. Why do we need to delete the setup tables before filling them?
- The setup tables are filled during the setup run. It is good practice to delete the existing setup table contents before executing the setup run, so as to avoid duplicate records for the same selections.
68. With what data are the setup tables filled (is it R/3 data)?
- Yes, they are filled from the R/3 application tables. The init loads in BW pull data from the setup tables; the setup tables are used only for the first init/full loads.
69. Will there be any data in the application tables after sending data to the setup tables?
- Yes, there will still be data in the application tables after the setup tables are filled. Setup tables are just temporary tables filled from the application tables to set up init/full loads for BW.
70. How does master data delta work?
- We always do a full load for master data. It always overwrites the previous entries.
71. Master data is stored in master data tables. Then what is the importance of dimensions?
- Dimension tables link the master data tables with the fact table through SIDs (see the sketch below).
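A stripped-down picture of that linkage, with all table contents invented: the fact row carries a dimension ID, the dimension row carries SIDs, and the SIDs point into master data kept outside the cube.

```python
# Fact table rows carry dimension IDs and key figures
fact_table = [{"dim_id": "DIM1", "revenue": 500.0}]

# Dimension table: dimension ID -> SIDs of its characteristics
dimension_table = {"DIM1": {"customer_sid": 42}}

# SID table and master data table for the characteristic 'customer'
sid_table = {42: "C1000"}                     # SID -> characteristic value
master_data = {"C1000": {"country": "DE"}}    # attributes stored once, outside the cube

# Resolving a query: fact -> dimension -> SID -> master data attribute
for fact in fact_table:
    sid = dimension_table[fact["dim_id"]]["customer_sid"]
    customer = sid_table[sid]
    print(customer, master_data[customer]["country"], fact["revenue"])
# C1000 DE 500.0
```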
72. I replicated the DataSource to the BW system. I want to add one more field to the DataSource. How do I do it?
- Add the field to the extract structure and replicate the DataSource into BW again; the field will then appear in BW as well.
73. Suppose one million records are uploaded to an InfoCube and I want to delete 20 of them. How can we delete those 20 records?
- This can be done with selective deletion.
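Selective deletion simply removes the rows matching a selection condition while leaving the rest of the cube untouched. A minimal illustration with made-up records and a made-up selection:

```python
cube_contents = [
    {"plant": "1000", "doc": "D1", "amount": 10.0},
    {"plant": "2000", "doc": "D2", "amount": 20.0},
    {"plant": "2000", "doc": "D3", "amount": 30.0},
]

# Selection for the records to delete, e.g. everything for plant 2000
selection = {"plant": "2000"}

cube_contents = [row for row in cube_contents
                 if not all(row[k] == v for k, v in selection.items())]
print(cube_contents)   # only the plant 1000 record remains
```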
74. What is the InfoCube for inventory?
- InfoCube 0IC_C03.
75. What is the maintenance of a DataSource?
- It is the maintenance of the required fields in a particular DataSource for which there are reporting requirements in BW and for which data needs to be extracted.
76. What is the maintenance of an extract structure?
- Extract structures are maintained in the case of LO DataSources. In LO there are multiple extract structures for the different applications, one per DataSource. Any enhancements to a DataSource in LO are done by maintaining the extract structure.
77. What are MC EKKO and MC EKPO in the maintenance of a DataSource?
- These are purchasing-related communication structures.
78. How is the delta load different for an InfoCube and an ODS?
- An InfoCube has an additive delta, but you will still be able to see all individual records in the InfoCube contents. This is because, if you choose to delete the current request, the records have to be rolled back to the prior status. If you build a query on the InfoCube, you will find that the data is summed up in the query. An ODS will not have duplicate records; you will have only one record per key (see the sketch below).
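The difference becomes obvious when two delta records arrive for the same key. In the sketch below (invented order data) the InfoCube keeps every record, request by request, and sums them only at query time, while the ODS keeps one record per key and overwrites it.

```python
from collections import defaultdict

# An order is created with qty 10, later changed to 12.
# Additive delta (suitable for an InfoCube) sends the difference; an
# after-image (suitable for ODS overwrite) sends the new full value.
additive_images = [{"order": "4500001", "qty": 10},
                   {"order": "4500001", "qty": 2}]     # +2 difference
after_images    = [{"order": "4500001", "qty": 10},
                   {"order": "4500001", "qty": 12}]    # new status

# InfoCube: both records stay visible in the cube contents (per request);
# only the query sums them up.
infocube_rows = list(additive_images)
query_result = defaultdict(int)
for row in infocube_rows:
    query_result[row["order"]] += row["qty"]

# ODS: one record per key; the later after-image overwrites the earlier one.
ods = {}
for row in after_images:
    ods[row["order"]] = row["qty"]

print(len(infocube_rows), dict(query_result), ods)
# 2 {'4500001': 12} {'4500001': 12}
```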
79. What is the difference between the transactions LBWF and RSA7?
- RSA7 is used to view the BW delta queue. This gets overwritten each time.
- LBWF is the log for LO extract structures. This is populated only when the user parameter MCL is set, and is recommended only for testing purposes.
80. What exactly happens in the background when we activate or deactivate the extract structure in the LO Cockpit?
- If the extract structure is activated, then for any online transaction, or when the setup tables are filled, the data is posted to the extract structure depending on the update method selected. Activation marks the DataSource green; otherwise it is yellow. Activation/deactivation makes entries in the TMCEXACT table.
81. What is content extraction?
- These are extractors supplied by SAP for specific business modules, e.g. 2FI_AR_4 (Customers: Line Items with Delta Extraction) or 2FI_GL_6 (General Ledger: Sales Figures via Delta Extraction).
82. What is direct update of an InfoObject?
- This is updating an InfoObject without using update rules, only the transfer rules.
83. A delta can be "new status" or "additive". If this is set on the R/3 side, what is the need for a setting in BW?
- In R/3 the record mode, as seen in the RODELTAM table, determines whether the respective DataSource delivers a new status or an additive delta. Based on this you need to select the appropriate update type for the data target in BW. For example, an ODS supports the additive as well as the overwrite function; depending on which DataSource is updating the ODS, and the record mode supported by this DataSource, you need to make the right selection in BW.
84. Where does BW extract data from during generic extraction and LO extraction?
- All deltas are taken from the delta queue. Only the way of populating the delta queue differs between LO and other DataSources.
85. What is the importance of the ODS object?
- An ODS is mainly used as a staging area.
86. Differences between the star and extended star schema?
- Star schema: only the characteristics in the dimension tables can be used to access facts; no structured drill-downs can be created; support for many languages is difficult.
- Extended star schema: master data tables and their associated fields (attributes), external hierarchy tables for structured access to data, and text tables with extensive multilingual descriptions are kept outside the cube and linked using SIDs.
87. What are the major errors in BW and in R/3 pertaining to BW?
- Errors in loading data (ODS loading, InfoCube loading, delta loading, etc.)
- Errors in activating BW objects.
88. When are tables created in BW?
- The tables are created when the objects are activated. The location depends on the Basis installation.
89. What is an M table?
- A master data table.
90. What is an F table?
- A fact table.
91. What is data warehousing?
- Data warehousing is a concept in which data is stored and analysis is performed on it.
92. What is a RemoteCube and how is it accessed and used?
- A RemoteCube is an InfoCube whose data is not managed in BW but externally. Only the structure of the RemoteCube is defined in BW. The data is read for reporting using a BAPI from another system (see the sketch below).
- With a RemoteCube, we can report on data in external systems without having to physically store transaction data in BW. We can, for example, include an external system from market data providers using a RemoteCube.
- This is best used only for small volumes of data and when few users access the query.
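The defining property is that the query triggers a read from the external system at runtime instead of reading data stored in BW. A toy sketch: the "remote BAPI call" is just a stand-in function here, and the market data is invented.

```python
def remote_bapi_read(selection):
    """Stand-in for the BAPI call to the external system (invented data)."""
    external_data = [
        {"index": "DAX", "date": "2024-01-02", "close": 16769.0},
        {"index": "DAX", "date": "2024-01-03", "close": 16538.0},
        {"index": "SPX", "date": "2024-01-02", "close": 4742.8},
    ]
    return [row for row in external_data if row["index"] == selection]

class RemoteCube:
    """Only the structure exists in BW; no transaction data is stored locally."""
    structure = ("index", "date", "close")

    def query(self, selection):
        return remote_bapi_read(selection)     # data is fetched at query runtime

print(RemoteCube().query("DAX"))               # read directly from the source system
```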
93. Tell us about a situation in which you implemented a RemoteCube.
- A RemoteCube is used when we want to report on transactional data without storing it on the BW side. It is ideally used when detailed data is required and we want to bypass loading the data into BW.
94. Differences between a MultiCube and a RemoteCube.
- A MultiCube is a type of InfoProvider that combines data from a number of InfoCubes and makes it available as a whole for reporting.
- A RemoteCube is an InfoCube whose transaction data is not managed in BW but externally. Only the structure of the RemoteCube is defined in BW. The data is read for reporting using a BAPI from another system.
95. How did you do data modeling in your project? Explain.
- We collected requirements from the users, created an HLD (high-level design document) and analyzed it to find the sources for the data. Then data models were created indicating the data flow and lookups. While designing the data model, consideration was given to reusing existing objects (like ODS objects and InfoCubes), not storing redundant data, the volume of data, and batch dependencies.
96. There is an InfoObject called 0PLANT. I activated it and have been using it; some days later another person activated it again. What will happen: will there be any effect, a merge, or no effect?
- Reactivating the InfoObject shouldn't have any effect unless the other person made changes to it and then reactivated it.
97. I have two processes, one of which contains an ABAP program. After successful completion of the first process it should trigger the second one. How do I know whether the first one was successful or not?
- This is handled with process chains (see the sketch below); go through these links:
- http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
- http://help.sap.com/saphelp_nw2004s/helpdata/en/c4/3a7f12505211d189550000e829fbbd/frameset.htm
- http://help.sap.com/saphelp_nw2004s/helpdata/en/00/5e261e7547d8479e062d72d68bc9e7/frameset.htm
- http://help.sap.com/saphelp_nw2004s/helpdata/en/9e/e3f69e54b93f40b4dc1433e6378195/frameset.htm
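Independently of the process chain framework, the underlying idea is simply "run step 2 only if step 1 reports success". A generic sketch in plain Python (not a process chain definition; both step functions are placeholders):

```python
def run_abap_report():
    """Placeholder for the first process (e.g. the ABAP program)."""
    print("running first process")
    return True                     # pretend it finished successfully

def run_followup_load():
    """Placeholder for the second process, triggered only on success."""
    print("running second process")

if run_abap_report():               # the chain checks the first step's status ...
    run_followup_load()             # ... and only then starts the successor
else:
    print("first process failed; successor not started")
```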
98. I want to create an InfoObject that is a dependent InfoObject. How do I do it?
- Go to the InfoObject maintenance screen in the Administrator Workbench, go to the Compounding tab, enter the InfoObject on which the new InfoObject depends, and activate.
99. The delta has been running successfully in LO. Later some fields were added to that particular DataSource. Will there be any effect on the previous data records?
- No. If there is data in the DataSource we can only append fields, so no data will be lost. But you need a separate mechanism to fill in the historical data for the newly added fields.
100. There are 5 characteristics in an InfoCube and we have to assign these characteristics to dimensions. Based on what do we assign characteristics to dimensions?
- It depends on the characteristics and their cardinality.
- Characteristics that logically belong together can be grouped in one dimension.
- First we decide the dimensions of the InfoCube; after that we assign the necessary InfoObjects to the corresponding dimensions.
