1. How to use a Virtual Key Figure / Characteristic?
Ans: A virtual characteristic or key figure gets its value assigned at query runtime and must not be loaded with data into the data target; existing update rules therefore remain unchanged.
The implementation can be divided into the following areas:
1. Create the InfoObject (key figure / characteristic) and attach it to the InfoProvider.
2. Implement the BAdI RSR_OLAP_BADI (set a filter on the InfoProvider while defining the BAdI implementation); see the sketch below.
3. Add the InfoObject to the query.
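For reference, here is a minimal, hedged sketch of what the BAdI implementation can look like. The method names come from the BAdI interface IF_EX_RSR_OLAP_BADI; the key figure ZVK_AMT, the attribute P_KYF_AMT and the derivation logic are hypothetical placeholders, and a full implementation also uses the INITIALIZE method to resolve the generated component names.

METHOD if_ex_rsr_olap_badi~define.
* Register the InfoObject this implementation fills at query runtime
* (the BAdI filter restricts the implementation to one InfoProvider).
  APPEND 'ZVK_AMT' TO c_t_kyfnm.         "hypothetical virtual key figure
ENDMETHOD.

METHOD if_ex_rsr_olap_badi~compute.
* Called once per data record. P_KYF_AMT is assumed to be a class
* attribute holding the generated component name (e.g. 'KYF_0002'),
* determined in method INITIALIZE (not shown).
  FIELD-SYMBOLS <l_amt> TYPE any.
  ASSIGN COMPONENT p_kyf_amt OF STRUCTURE c_s_data TO <l_amt>.
  IF sy-subrc = 0.
    <l_amt> = <l_amt> * '1.1'.           "example runtime derivation
  ENDIF.
ENDMETHOD.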
2. Query Performance Tips:
Ans:
i. Don't show too much data in the initial view of the report output.
ii. Limit the level of hierarchies in the initial view.
iii. Always use mandatory variables.
iv. Utilize filters based on InfoProviders.
v. Suppress result rows if they are not needed.
vi. Eliminate or reduce "not" logic (exclusions) in query selections.
3. DataStore Objects:
4. Types of DataStore Objects:
· Standard DSO
· Write-Optimized DSO
· Direct DSO (DataStore object for direct update)
The DataStore object for direct update differs from the standard DataStore object in how the data is processed. In a standard DataStore object, data is stored in different versions (active, delta, modified), whereas a DataStore object for direct update contains data in a single version. Data is therefore stored in precisely the form in which it was written to the DataStore object by the application. In the BI system, you can use a DataStore object for direct update as a data target for an analysis process.
| Type | Structure | Data Supply | SID Generation |
|---|---|---|---|
| Standard DataStore Object | Three tables: activation queue, table of active data, change log | From data transfer process | Yes |
| Write-Optimized DataStore Object | Table of active data only | From data transfer process | No |
| DataStore Object for Direct Update | Table of active data only | For APD | No |

Details (write-optimized): a plausible scenario for write-optimized DataStore objects is the exclusive saving of new, unique data records, for example in the posting process for documents in retail; they can also be used as the EDW layer for saving data.
5. Line Item Dimension:
If the dimension table size (number of rows) exceeds 20% of the fact table size, the dimension should be flagged as a line item dimension. The system then does not create a dimension table; instead, the SID table of the characteristic takes on the role of the dimension table. Removing the dimension table has the following advantages:
○ When loading transaction data, no dimension IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in cases where a degenerated dimension is involved.
○ A table with a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler, and in many cases the database optimizer can choose better execution plans.
Nevertheless, it also has a disadvantage: a dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
Scenario: the InfoObject 0IS_DOCID (Document Identification) was used in a line item dimension.
6. High Cardinality: This flag means that the dimension has a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform; different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the number of fact table entries. If you are unsure, do not flag a dimension as having high cardinality.
7. Different Types of InfoCubes:
- Standard InfoCube (with physical data store)
- VirtualProvider (without physical data store), with three variants:
  - Based on a data transfer process and a DataSource with 3.x InfoSource: a VirtualProvider that allows the definition of queries with direct access to transaction data in other SAP source systems.
  - Based on a BAPI: a VirtualProvider whose data is not processed in the BI system but externally; the data is read from an external system for reporting using a BAPI.
  - Based on a function module: a VirtualProvider without its own physical data store in the BI system; a user-defined function module is used as the data source.
8. Real-Time Cube:
Real-time-enabled InfoCubes differ from standard InfoCubes in their ability to support parallel write accesses, whereas standard InfoCubes are technically optimized for read access. Real-time InfoCubes are used when creating planning data, where data is written to the InfoCube by several users at the same time. Standard InfoCubes are not suitable for this; they should be used where you only need read access (such as reading reference data).
Real-time-enabled InfoCubes can be filled with data using two different methods: using BW-BPS transactions for creating planning data, or using BW staging. You can switch the real-time InfoCube between these two methods: from the context menu of your real-time InfoCube in the InfoProvider tree, choose Switch Real-Time InfoCube. If "Real-Time InfoCube Can Be Planned, Data Loading Not Allowed" is selected (the default), the InfoCube is filled using BW-BPS functions. If you change this setting to "Real-Time InfoCube Can Be Loaded with Data; Planning Not Allowed", you can fill the InfoCube using BW staging.
For real-time InfoCubes, reduced read performance is compensated for by the option of parallel (transactional) writing and improved write performance.
9. Remodeling:
Remodeling is a feature available as of NetWeaver 2004s (BI 7.0) that enables you to change the structure of an already loaded InfoCube without disturbing its data. The feature does not yet support remodeling of DSOs or InfoObjects.
Using remodeling, a characteristic can simply be deleted, or added/replaced with a constant value, the value of another InfoObject (in the same dimension), the value of an attribute of another InfoObject (in the same dimension), or a value derived using a customer exit.
Similarly, a key figure can be deleted, replaced with a constant value, or added and populated using a constant value or a customer exit.
This section describes how to add a new characteristic to an InfoCube using the remodeling feature and populate it using a customer exit.
Note the following before you start the remodeling process:
- Back up the existing data.
- During remodeling the InfoCube is locked against any changes or data loads, so make sure you stall all data loads for this InfoCube until the process finishes.
- If you are adding or replacing a key figure, compress the cube first to avoid inconsistencies, unless all records in the InfoCube are unique.
Note the following after you finish the remodeling process and resume daily loads and querying of this InfoCube:
- All objects dependent on the InfoCube (transformations, MultiProviders, etc.) have to be reactivated.
- If aggregates exist, they need to be reconstructed.
- Adjust queries based on this InfoCube to accommodate the changes.
- If a new field was added through remodeling, don't forget to map it in the transformation rules for future data loads.
The exit code is written in SE24 by creating a new class. The class implements the interface IF_RSCNV_EXIT, and the code is written in the method IF_RSCNV_EXIT~EXIT.
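A minimal sketch of such an exit class follows, assuming the method receives the current record and returns the new value. The parameter names i_s_data and e_value and the source field DOC_TYPE are assumptions here; check the actual signature of IF_RSCNV_EXIT~EXIT in SE24 before copying.

METHOD if_rscnv_exit~exit.
* Read a field of the current InfoCube record (component name is a
* hypothetical example) ...
  FIELD-SYMBOLS <l_doctype> TYPE any.
  ASSIGN COMPONENT 'DOC_TYPE' OF STRUCTURE i_s_data TO <l_doctype>.
  IF sy-subrc = 0.
*   ... and derive the value of the newly added characteristic from it.
    CASE <l_doctype>.
      WHEN 'A'.    e_value = 'DOMESTIC'.
      WHEN OTHERS. e_value = 'OTHER'.
    ENDCASE.
  ENDIF.
ENDMETHOD.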
10. Difference between "With Export" and "Without Export" migration of a 3.x DataSource:
With Export: allows you to revert to the 3.x DataSource, transfer rules, etc. (recommended). Without Export: does not allow you to ever revert to the 3.x DataSource.
11. Difference between Calculated Key Figure and Formula:
The replacement of formula variables with the processing type Replacement Path acts differently in calculated key figures and formulas:
If you use a formula variable with "Replacement from the Value of an Attribute" in a calculated key figure, the system automatically adds a drilldown by the reference characteristic of the attribute. The system then evaluates the variable for each characteristic value of the reference characteristic. Afterwards, the calculated key figure is calculated and, subsequently, all other operations are executed, meaning all additional calculated key figures, aggregations, and formulas. The system only calculates the operators assembled in the calculated key figure itself before the aggregation over the reference characteristic.
If you use a formula variable with "Replacement from the Value of an Attribute" in a formula element, the variable is only calculated if the reference characteristic is uniquely specified in the respective row, column, or filter.
12. Constant Selection:
In the Query Designer, you use selections (e.g. characteristic restrictions in a restricted key figure) to determine the data you want to display at report runtime. You can alter the selections at runtime using navigation and filters, which allows you to further restrict them. The Constant Selection function allows you to mark a selection in the Query Designer as constant, meaning that navigation and filtering have no effect on that selection at runtime.
13. Customer Exit for Query Variables:
The customer exit for variables is called a maximum of three times; the step is indicated by the parameter I_STEP.
The first step (I_STEP = 1) occurs before the variable pop-up is processed and is called for every variable of processing type "customer exit". You can use this step to fill your variables with default values.
The second step (I_STEP = 2) is called after the variable pop-up has been processed, but only for variables that are not marked "ready for input" and are set to "mandatory variable entry".
The third step (I_STEP = 3) is called after all variable processing, and only once rather than per variable. Here you can validate the user entries.
Please note that you cannot overwrite the user's input values for a variable with this customer exit; you can only derive values for other variables or validate the user entries.
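A minimal sketch of the exit logic, placed in include ZXRSRU01 of function module EXIT_SAPLRRS0_001 (enhancement RSR00001); the variable name ZV_CMONTH is a hypothetical example.

* RSR_S_RANGESID carries SIGN/OPT/LOW/HIGH for the variable value.
DATA l_s_range TYPE rsr_s_rangesid.

CASE i_vnam.
  WHEN 'ZV_CMONTH'.                    "hypothetical exit variable
    IF i_step = 1.                     "default value before the pop-up
      l_s_range-sign = 'I'.
      l_s_range-opt  = 'EQ'.
      l_s_range-low  = sy-datum(6).    "current calendar month
      APPEND l_s_range TO e_t_range.
    ENDIF.
  WHEN OTHERS.
    IF i_step = 3.
*     All user entries are available in i_t_var_range at this point;
*     validate them here and raise an error to redisplay the pop-up.
    ENDIF.
ENDCASE.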
14. How to create a Generic DataSource using a Function Module:
First, a structure is created for the extract structure, containing all DataSource fields. Then a function module is created by copying the template FM RSAX_BIW_GET_DATA_SIMPLE, and its code is modified as per the requirement.
For delta functionality, if the base tables (from which data is fetched) contain date and time fields, include a dummy timestamp field in the extract structure created for the FM and use this field in the code (by splitting the timestamp into date and time).
Type pools used: SBIWA, SRSC.
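A sketch of the copied function module's skeleton, following the pattern of the template RSAX_BIW_GET_DATA_SIMPLE; the FM name, the extract structure ZBW_CONTRACT and the SELECT are hypothetical, and the buffering of I_T_SELECT / I_T_FIELDS is omitted for brevity.

FUNCTION z_bw_get_contract_data.
*"  Interface as generated by copying RSAX_BIW_GET_DATA_SIMPLE:
*"  IMPORTING  I_REQUNR, I_DSOURCE, I_MAXSIZE, I_INITFLAG ...
*"  TABLES     I_T_SELECT, I_T_FIELDS, E_T_DATA STRUCTURE ZBW_CONTRACT
*"  EXCEPTIONS NO_MORE_DATA, ERROR_PASSED_TO_MESS_HANDLER

  STATICS: s_counter TYPE sy-tabix,
           s_cursor  TYPE cursor.

  IF i_initflag = sbiwa_c_flag_on.
*   Initialization call: check I_DSOURCE and buffer the selections
*   from I_T_SELECT / I_T_FIELDS in static variables (as in the
*   template; not shown).
  ELSE.
    IF s_counter = 0.
*     First data call: open a database cursor for the selection.
      OPEN CURSOR WITH HOLD s_cursor FOR
        SELECT * FROM vbap WHERE pstyv = 'ZSV2'.  "simplified selection
    ENDIF.
*   Return one data package per call; signal the end of the data.
    FETCH NEXT CURSOR s_cursor
          APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
          PACKAGE SIZE i_maxsize.
    IF sy-subrc <> 0.
      CLOSE CURSOR s_cursor.
      RAISE no_more_data.
    ENDIF.
    s_counter = s_counter + 1.
  ENDIF.
ENDFUNCTION.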
15. In which scenario have you used a Generic DataSource?
We had a requirement to send contract (sales order) data from BI to MERICS (an external system). The selection criteria to extract the data were:
o Billing Plan Date (FPLT-AFDAT) < current month date
o Billing Status (FPLT-FKSAF) = Not yet processed
o Contract Type (VBAK-AUART) = Fixed Price (ZSCC)
o Item Category (VBAP-PSTYV) = ZSV2
We could have used DataSource 2LIS_11_VAITM and enhanced it with the FPLT fields, but whenever there is a status change in a billing plan, that change is not captured by 2LIS_11_VAITM. Therefore we created a generic DataSource using a function module.
16. Generic DataSource Safety Interval, lower limit and upper limit? What are the delta-specific fields? When to choose "New status for changed records" and when "Additive Delta"?
Safety Interval Upper Limit:
The upper limit of the safety interval contains the difference between the current highest value at the time of the delta or delta-init extraction and the data that has actually been read. If this value is initial, records created during the extraction cannot be extracted.
This means that if your extractor takes half an hour to run, your safety upper limit should ideally be half an hour or more; this way, records created during the extraction are not missed.
For example: if you start an extraction at 12:00:00 with no safety interval and the extract runs for 15 minutes, the delta pointer will read 12:15:00 and the subsequent delta will read records created or changed on or after 12:15:00, meaning all records created or changed during the extraction are skipped.
Estimate the extraction time for your DataSource and set the safety upper limit accordingly so that no records are skipped. With an additive delta, however, you must be careful not to double your records: either extract during periods of very low activity or use smaller safety limits so that data does not get duplicated.
Safety Interval Lower Limit:
This field contains the value taken from the highest value of the previous delta extraction to determine the lowest value of the time stamp for the next delta extraction.
For example: a time stamp is used to determine the delta, and the extracted data is master data. The system only transfers after-images, which overwrite the status in BW, so such a record can be extracted into BW repeatedly without any problems. Taking this into account, the current time stamp can always be used as the upper limit when extracting. The lower limit of the next extraction is then not seamlessly joined to the upper limit of the last extraction; instead, it is set to that upper limit minus a safety margin. This safety margin needs to be big enough to contain all values that already had a time stamp when the last extraction was carried out but had not yet been read. Not surprisingly, records can then be transferred twice, but for the reasons above this is unavoidable and, here, harmless.
Practical settings:
1. If the delta field is a date (record create or change date), use an upper limit of 1 day. This loads the delta into BW as of yesterday. Leave the lower limit blank.
2. If the delta field is a time stamp, use an upper limit of 1800 seconds (30 minutes). This loads the delta into BW as of 30 minutes ago. Leave the lower limit blank.
3. If the delta field is a numeric pointer, i.e. a generated record number as in the GLPCA table, use the lower limit with a count of 10-100 and leave the upper limit blank. If the value 10 is used, the last 10 records are loaded again; if a record was created while a load was running, backing up the starting sequence number this way prevents it from being lost. This may result in some records being processed more than once, so be sure such a DataSource only feeds a DSO (ODS object), where duplicates are overwritten.
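To make the arithmetic concrete, here is a small, purely illustrative sketch of how the next delta window is derived from the previous pointer and the two safety margins. Variable names and values are hypothetical; in reality the Service API computes this for you.

* The next delta selects records with time stamp in [l_lower, l_upper].
DATA: l_last_pointer TYPE timestamp VALUE '20240101120000',
      l_now          TYPE timestamp,
      l_lower        TYPE timestamp,
      l_upper        TYPE timestamp.

GET TIME STAMP FIELD l_now.

* Lower limit: previous upper limit minus the lower safety margin, so
* late-arriving records are read again (duplicates are possible, which
* is harmless for overwriting targets such as master data or DSOs).
l_lower = cl_abap_tstmp=>subtractsecs( tstmp = l_last_pointer
                                       secs  = 600 ).
* Upper limit: current time minus the upper safety margin (e.g. 1800 s),
* so records created while the extraction is running are not skipped.
l_upper = cl_abap_tstmp=>subtractsecs( tstmp = l_now
                                       secs  = 1800 ).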
Delta-Specific Fields:
o Time stamp: a DEC15 field that always contains the time stamp of the last change to a record in the local time format.
o Calendar day: a DATS8 field that always contains the day of the last change.
o Numeric pointer: a field containing a numeric pointer that increases with each new record.
Additive Delta:
The key figures of the extracted data are added up in BW. DataSources with this delta type can supply data to ODS objects and InfoCubes.
New Status for Changed Records:
Each record to be loaded delivers the new status of the key figures and characteristics. DataSources with this delta type can write to ODS objects or master data tables.
 
17. How to fill the setup tables, and the related transactions?
18. Maximum characteristics and key figures allowed in a dimension / cube?
Max. characteristics per dimension: 248
Max. key figures per InfoCube: 233
19. Different Types of DTP:
o Standard DTP: used to update data from the PSA to data targets (InfoCube, DSO, etc.).
o Direct Access DTP: the only available option for VirtualProviders.
o Error DTP: used to update error records from the error stack to the corresponding data targets.
20. How to create an optimized InfoCube?
o Define lots of small dimensions rather than a few large dimensions.
o The size of each dimension table should account for less than 10% of the fact table.
o If the size of a dimension table amounts to more than 10% of the fact table, mark the dimension as a line item dimension.
21. Difference between DSO and InfoCube:

| | DSO | InfoCube |
|---|---|---|
| Use | Consolidation of data in the data warehouse layer; loading delta records that can subsequently be updated to InfoCubes or master data tables; operative analysis (when used in the operational data store) | Aggregation and performance optimization for multidimensional reporting; analytical and strategic data analysis |
| Type of data | Non-volatile data (when used in the data warehouse layer); volatile data (when used in the operational data store); transactional, document-type data (line items) | Non-volatile data; aggregated data, totals |
| Type of data update | Overwrite (in rare cases: addition) | Addition only |
| Data structure | Flat, relational database tables; semantic key fields | Enhanced star schema (fact table and dimension tables) |
| Type of data analysis | Reporting at a high level of granularity, flat reporting; the number of query records should be strictly limited by the choice of key fields; individual document display | Multidimensional data analysis at a low level of granularity (OLAP analysis); use of InfoCube aggregates; drill-through to document level (stored in DataStore objects) possible using the report-report interface |
22. Give an example where a DSO is used for addition, not overwrite.
23. Difference between 3.x and 7.0:
1. In InfoSets you can now include InfoCubes as well.
2. The remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI Accelerator (for now only for InfoCubes) helps reduce query runtime by almost a factor of 10 to 100. The BI Accelerator is a separate box and costs extra; vendors are HP and IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you need an EP (Enterprise Portal) resource on your project to implement the portal.
5. Search functionality has improved: you can search for any object, unlike in 3.5.
6. Transformations are in and routines are passé, although you can always revert to the old transactions.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made to the DataStore object: a new type of DataStore object and enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added.
11. Remodeling of InfoProviders supports you in information lifecycle management.
12. The DataSource: there is a new object concept for the DataSource; options for direct access to data have been enhanced; and from BI, remote activation of DataSources is possible in SAP source systems.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now formally known as BI (part of NetWeaver 2004s) and implements Enterprise Data Warehousing (EDW). The new features / major differences include:
a) ODS renamed to DataStore.
b) Inclusion of the write-optimized DataStore, which has no change log and whose requests do not need any activation.
c) Unification of transfer and update rules.
d) Introduction of the end routine and the expert routine.
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI Accelerator, which significantly improves performance.
g) Loading through the PSA has become a must; there is no longer an option to bypass the PSA.
16. Loading through the PSA is mandatory, you can't skip it, and there is no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaces the transfer and update rules, and in the transformation you can now use start routines, expert routines and end routines during data load. New features in BI 7 compared to earlier versions:
i. New data flow capabilities such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).
ii. Enhanced and graphical transformation capabilities, such as drag-and-relate options.
iii. One level of transformation, replacing the transfer rules and update rules.
iv. Performance optimization includes the new BI Accelerator feature.
v. User management (including a new concept for analysis authorizations) for more flexible BI end-user authorizations.
24. What is the Extended Star Schema?
25. What is Compression, Roll-Up, Attribute Change Run?
Roll-Up: You can automatically roll up into the aggregates those requests in the InfoCube with "green traffic light" status, that is, with saved data quality. The process terminates if no active, initially filled aggregates exist in the system.
Compression: After roll-up, the InfoCube content is compressed. The system does this by deleting the request IDs, which improves performance. If aggregates exist, only requests that have already been rolled up are compressed; if no aggregates exist, the system compresses all requests that have not yet been compressed.
Aggregate roll-up must happen before compression: when we roll up data load requests, we roll them up into all aggregates of the InfoCube and then carry out the compression of the cube. For performance and disk space reasons, it is recommended to roll up a request as soon as possible and then compress the InfoCube. When you compress the cube, the "Compress After Rollup" option ensures that all data is rolled up into aggregates before the compression is carried out.
Compression with zero elimination means that data rows in which all key figures are 0 are deleted.
Attribute Change Run: an attribute change run activates newly loaded master data attributes and hierarchies and adjusts the aggregates that contain the affected navigation attributes.
26. What is a Change Run? How do you resolve an attribute change run that fails because of a locking problem?
27. What errors have you faced during transport of objects?
28. What steps do you follow when a process in a process chain fails and you need to set it to green to proceed further?
1. Right-click the failed process and go to "Display Messages". From the Chain tab, get the VARIANT and INSTANCE values. In some cases INSTANCE is not available; then take the job count number instead.
2. Go to table RSPCPROCESSLOG. Enter the VARIANT and INSTANCE and get LOGID, TYPE, BATCHDATE and BATCHTIME.
3. Execute program RSPC_PROCESS_FINISH (SE38), providing LOGID, CHAIN, TYPE, VARIANT, INSTANCE, BATCHDATE, BATCHTIME and STATE = 'G'.
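A hedged sketch that bundles these steps into one snippet; the function module RSPC_PROCESS_FINISH exists for this purpose, but its parameter names and the field names of RSPCPROCESSLOG should be verified in SE37/SE11 for your release, and the VARIANT/INSTANCE values here are placeholders.

* Read the log entry of the failed process ...
DATA l_s_log TYPE rspcprocesslog.

SELECT SINGLE * FROM rspcprocesslog INTO l_s_log
       WHERE variante = 'ZPC_LOAD_SALES'     "placeholder VARIANT
         AND instance = '4711'.              "placeholder INSTANCE

IF sy-subrc = 0.
* ... and set its status to green so the chain can continue.
  CALL FUNCTION 'RSPC_PROCESS_FINISH'
    EXPORTING
      i_logid    = l_s_log-log_id
      i_chain    = l_s_log-chain_id
      i_type     = l_s_log-type
      i_variant  = l_s_log-variante
      i_instance = l_s_log-instance
      i_state    = 'G'.
ENDIF.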
29. What is a Rule Group in a transformation? Give an example.
A rule group is a group of transformation rules. It contains one transformation rule for each key field of the target, and a transformation can contain multiple rule groups.
Rule groups allow you to combine various rules, which means that for one characteristic you can create different rules for different key figures.
A few key points about rule groups:
o A transformation can contain multiple rule groups.
o A default rule group, called the standard group, is created for every transformation. This group contains all the default rules.
o The standard rule group cannot be deleted; only additionally created groups can be deleted.
Example:
Records in the source system, with the actual and plan amounts as separate fields:

| CompanyCode | Account | FiscalYear/Period | ActualAmount | PlanAmount |
|---|---|---|---|---|
| 1000 | 5010180001 | 01/2008 | 100 | 400 |
| 1000 | 5010180001 | 02/2008 | 200 | 450 |
| 1000 | 5010180001 | 03/2008 | 300 | 500 |

Records in the business warehouse, where a single key figure represents both the actual and the plan amount, differentiated by the characteristic Version (in this example, Version = 010 represents the actual amount and Version = 020 the plan amount):

| CompanyCode | Account | FiscalYear/Period | Version | Amount |
|---|---|---|---|---|
| 1000 | 5010180001 | 01/2008 | 010 | 100 |
| 1000 | 5010180001 | 02/2008 | 010 | 200 |
| 1000 | 5010180001 | 03/2008 | 010 | 300 |
| 1000 | 5010180001 | 01/2008 | 020 | 400 |
| 1000 | 5010180001 | 02/2008 | 020 | 450 |
| 1000 | 5010180001 | 03/2008 | 020 | 500 |

To achieve this, in the standard rule group (target) we set the characteristic Version to the constant value 010 and use direct assignment from ActualAmount to the target field Amount. A second rule group is then created in which we set Version to the constant value 020 and use direct assignment from PlanAmount to Amount.
30. Why can we not use a DSO to load inventory data?
DataStore objects cannot contain any stock (non-cumulative) key figures (see Notes 752492 and 782314) and, among other things, they do not have a validity table, which would be necessary. Therefore, ODS objects cannot be used like non-cumulative InfoCubes; that is, they cannot calculate stocks in BW terms.
31. Processing type Replacement Path for variables, with examples.
You use the replacement path to specify the value that automatically replaces the variable when you execute the query or web application. The processing type Replacement Path can be used with characteristic value variables, text variables and formula variables.
o Text and formula variables with the processing type Replacement Path are replaced by a corresponding characteristic value.
o Characteristic value variables with the processing type Replacement Path are replaced by the results of a query.
Replacement with a characteristic value; replace the variable with:
· Key: the variable value is replaced with the characteristic key.
· External Characteristic Value Key: the variable value is replaced with an external value of the characteristic (external/internal conversion).
· Name (Text): the variable value is replaced with the name of the characteristic. Note that formula variables have to contain numbers in their names so that the formula variable represents a value after replacement.
· Attribute Value: the variable value is replaced with the value of an attribute. An additional field appears for entering the attribute. When replacing the variable with an attribute value, you can create a reference to the characteristic for which the variable is defined: choose the attribute "Reference to Characteristic (Constant 1)". With this attribute you can influence the aggregation behavior of calculated key figures and obtain improved performance during the calculation.
· Hierarchy Attribute: the variable value is replaced with the value of a hierarchy attribute. An additional field appears for entering the hierarchy attribute. You need this setting for sign reversal with hierarchy nodes.
Example, replacement with a query: you want to insert the result of the query "Top 5 Products" as a variable in the query "Sales - Calendar year / month".
1. Select the characteristic Product and choose New Variable from the context menu. The Variable Wizard appears.
2. Enter a variable name and a description.
3. Choose the processing type Replacement Path.
4. Choose Next; you reach the Replacement Path dialog step.
5. Enter the query "Top 5 Products".
6. Choose Next; you reach the Save Variable dialog step.
7. Choose Exit.
You can now insert the variable into the query "Sales - Calendar year / month". This allows you to determine how the sales of these five top-selling products have developed month by month.
32. Pseudo Delta:
A pseudo delta differs from a normal delta in that the data load shows FULL LOAD instead of DELTA, but in fact it only pulls the records that were changed or created after the previous load. This can be achieved in multiple ways, such as logic in an InfoPackage routine, or selections that identify only the changed records.
In my past experience, we had code in the InfoPackage that looks at when the previous request was loaded, calculates the month from that date, and loads data whose CALMONTH lies between the previously loaded date and today's date (since the data target is an ODS, even if there is an overlapping selection, overwriting preserves data integrity).
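A minimal sketch of such an InfoPackage data-selection routine, assuming the standard routine signature generated by the InfoPackage (TABLES l_t_range STRUCTURE rssdlrange, CHANGING p_subrc); the 30-day look-back is a stand-in for reading the actual date of the last successful request.

* Restrict the CALMONTH selection to the months since the last load.
DATA: l_idx  LIKE sy-tabix,
      l_date TYPE sy-datum.

READ TABLE l_t_range WITH KEY fieldname = 'CALMONTH'.
l_idx = sy-tabix.

l_date = sy-datum - 30.              "stand-in for the last-load date

l_t_range-sign   = 'I'.
l_t_range-option = 'BT'.
l_t_range-low    = l_date(6).        "from the month of the last load
l_t_range-high   = sy-datum(6).      "up to the current month
MODIFY l_t_range INDEX l_idx.
p_subrc = 0.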
33. Flat Aggregates:
If an aggregate has fewer than 16 characteristics (including those of the time, unit and data package dimensions), the characteristic SIDs are stored as line items: the E fact table of the aggregate (assuming the aggregate is compressed) has up to 16 columns holding the SIDs directly.
In this case there are no further tables, such as dimension tables, for the aggregate; hence the name "flat", meaning the aggregate is more or less a plain table with the necessary SIDs and nothing else.
Flat aggregates can be rolled up on the database server (without loading data into the application server).
34. Master data load failure recovery steps:
Issue:
A delta update for a master data DataSource was aborted. The data was sent to BW but was not posted in the PSA, and there are as yet no executed LUWs in the tRFC outbound of the source system. Therefore there is no way of reading the data from a buffer and transferring it to the master data tables.
Solution:
Import the next PI or CRM patch into your source system and execute the report RSA1BDCP. Alternatively, import the attached correction instructions into your system, create an executable program in the customer namespace using transaction SE38, copy the source code of the correction instructions into it, and execute the report.
The report has 3 parameters:
1. P_OS (DataSource): name of the DataSource
2. P_RS (BIW system): logical name of the BW system
3. P_TIME (generation time stamp): the generation date and time of the first change pointer to be transferred into BW during the next upload, given as YYYYMMDDHHMMSS (for example, 20010131193000 for January 31, 2001, 19:30:00). For this time stamp, select the time stamp of the last successful delta request of this DataSource in the corresponding BW system.
After the report is executed, a dialog box appears with the number of records that should have the "unread" status. Check the plausibility of this number; it should be larger than or equal to the number of records of the last, terminated request.
After you execute the report, change the status of the last (terminated) request in BW to green and request the data in delta mode.
35. What is a KPI?
(1) Predefined calculations that render summarized and/or aggregated information, which is useful in making strategic decisions.
(2) Also known as performance measures or performance metrics, KPIs are put in place and made visible to an organization to indicate the level of progress and the status of change efforts. KPIs are industry-recognized measurements on which to base critical business decisions.
In SAP BW, Business Content KPIs have been developed based on input from customers, partners, and industry experts to ensure that they reflect best practices.
36. Performance monitoring and analysis tools in BW:
a) System Trace: Transaction ST01 lets you run various levels of system trace, such as authorization checks, SQL traces and table/buffer traces. It is a general Basis tool but can be leveraged for BW.
b) Workload Analysis: transaction ST03.
c) Database Performance Analysis: Transaction ST04 gives you all you need to know about what's happening at the database level.
d) Performance Analysis: Transaction ST05 enables you to run performance traces in different areas, namely SQL trace, enqueue trace, RFC trace and buffer trace.
e) BW Technical Content Analysis: SAP standard Business Content (InfoArea 0BWTCT) that needs to be activated. It contains several InfoCubes, ODS objects and MultiProviders with a variety of performance-related information.
f) BW Monitor: You can reach it independently of an InfoPackage by running transaction RSMO, or via an InfoPackage. An important feature of this tool is the ability to retrieve important IDoc information.
g) ABAP Runtime Analysis: Use transaction SE30 to perform a runtime analysis of a transaction, program or function module. It is very helpful if you know the program or routine that you suspect is causing a performance bottleneck.
37. Runtime error MESSAGE_TYPE_X when opening an InfoPackage in BW:
You sometimes run into the runtime error MESSAGE_TYPE_X when you try to open an existing delta InfoPackage; it won't even let you create a new InfoPackage and throws the same error. The error occurs in the function pool, form RSM1_CHECK_FOR_DELTAUPD.
This error typically occurs when the delta is not in sync between the source system and the BW system. It can happen when you copy new environments or when you refresh your QA or DEV boxes from production.
Solution:
Try to open an existing full InfoPackage if you have one; you will be able to open it because it does not check delta consistency. After you open the InfoPackage, remove the delta initialization: go to menu Scheduler -> Initialization Options for Source System, select the entry, and click the delete button.
After that you will be able to open the existing delta InfoPackage. You can re-initialize the delta and start using the InfoPackage again.
If you do not have an existing full InfoPackage, follow the steps in Note 852443. There are many troubleshooting steps in that note; you can go through all of them, or do what I do and follow the steps below:
1. In table RSSDLINIT, check for the record with the problematic DataSource.
2. Get the request number (RNR) from the record.
3. Go to transaction RSRQ, enter the RNR number and execute; it shows the monitor screen of the actual delta init request.
4. Change the status of the request to red.
That's it: you will now be able to open your delta InfoPackage and run it. Of course, you need to do your delta init again, since we set the last delta init to red. These steps have always worked for me; follow the steps in the OSS note if they don't work for you.
38. KPIs / field mappings for FI DataSources:
Accounts Payable:
o DataSource 0FI_AP_4 (Vendors: Line Items with Delta Extraction):

| DataSource Field | BI InfoObject |
|---|---|
| DMSOL | 0DEBIT_LC (Debit amount in local currency) |
| DMHAB | 0CREDIT_LC (Credit amount in local currency) |
| DMSHB | 0DEB_CRE_LC (Amount in local currency with +/- signs) |
| WRSOL | 0DEBIT_DC (Debit amount in foreign currency) |
| WRHAB | 0CREDIT_DC (Credit amount in foreign currency) |
| WRSHB | 0DEB_CRE_DC (Foreign currency amount with +/- signs) |

o DataSource 0FI_AP_6 (Vendor Sales Figures via Delta Extraction):

| DataSource Field | BI InfoObject |
|---|---|
| UM01S | 0DEBIT (Total debit postings) |
| UM01H | 0CREDIT (Total credit postings) |
| UM01K | 0BALANCE (Cumulative balance) |
| UM01U | 0SALES (Sales for the period) |

Accounts Receivable:
o DataSource 0FI_AR_4 (Customers: Line Items with Delta Extraction):

| DataSource Field | BI InfoObject |
|---|---|
| ZBD1T | 0DSCT_DAYS1 (Days for cash discount 1) |
| ZBD2T | 0DSCT_DAYS2 (Days for second cash discount) |
| ZBD3T | 0NETTERMS (Deadline for net conditions) |
| DMSOL | 0DEBIT_LC (Debit amount in local currency) |
| DMHAB | 0CREDIT_LC (Credit amount in local currency) |
| DMSHB | 0DEB_CRE_LC (Amount in local currency with +/- signs) |

General Ledger:
o DataSource 0FI_GL_4:

| DataSource Field | BI InfoObject |
|---|---|
| WRBTR | 0AMOUNT (Amount) |
| DMBTR | 0VALUE_LC (Amount in local currency) |
| DMSOL | 0DEBIT_LC (Debit amount in local currency) |
| DMHAB | 0CREDIT_LC (Credit amount in local currency) |
| DMSHB | 0DEB_CRE_LC (Amount in local currency with +/- signs) |
| WRSOL | 0DEBIT_DC (Debit amount in foreign currency) |
| WRHAB | 0CREDIT_DC (Credit amount in foreign currency) |

o DataSource 3FI_GL_0L_TT (Leading Ledger (Totals)) and DataSource 3FI_GL_Y1_TT (Non-leading Ledger (Statutory) (Totals), Y1):

| DataSource Field | BI InfoObject |
|---|---|
| DEBIT | 0DEBIT (Total debit postings) |
| CREDIT | 0CREDIT (Total credit postings) |
| BALANCE | 0BALANCE (Cumulative balance) |
| TURNOVER | 0SALES (Sales for the period) |
| QUANTITY | 0QUANTITY (Quantity) |
39. Example of display key figures used in master data:
In 0MATERIAL, the display key figures are 0HEIGHT (Height), 0LENGTH (Length), 0GROSS_WT (Gross Weight) and 0GROSS_CONT (Gross Content).
40. What custom reports have you created in your project?
41. InfoCube optimization:
o When designing an InfoCube, it is most important to keep the size of each dimension table as small as possible.
o Also try to minimise the number of dimensions.
o Both objectives can usually be met by building dimensions from characteristics that are related to each other 1:1 (for example, each state is in exactly one country) or that have only a small number of entries.
o Characteristics that have a large number of entries should generally be in a dimension by themselves, flagged as a line item dimension.
o Characteristics that have a many-to-many relationship to each other should not be placed in the same dimension, otherwise the dimension table can become huge.
o It is generally recommended to flag a dimension as a line item dimension if its dimension table size (number of rows) exceeds 10% of the fact table's size.
42. How do you handle an init without data transfer through a DTP?
Under the "Execute" tab, select the corresponding processing mode, "No data transfer; delta status in source: fetched" (the original post illustrated this with a screenshot).