
SAP BW Interview Questions

Below is a set of questions for a BW interview.

BW Administration and Design

Basic Concepts
Q. What are the differences between OLAP and OLTP applications?

            OLAP                                   OLTP
a. Summarized data                       Detailed data
b. Read only                             Read/write
c. Optimized for analysis/reads          Optimized for transaction processing
d. Lots of historical data               Little historical data

Q. What is a star schema?
    A fact table at the center and surrounded (linked) by dimension tables.

Q. What is a slowly changing dimension?
   A dimension containing a characteristic which changes over time; for example, an employee's job title changes over the years.
Q. What are the advantages of the extended star schema of BW vs. the classic star schema?
   a. use of generated numeric keys for faster access
   b. external hierarchies
   c. multi-language support
   d. master data common to all cubes
   e. slowly changing dimensions supported
   f. aggregates in their own tables for faster access

Q. What is the namespace for BW?
All SAP-delivered objects start with 0 and customer objects with A-Z; SAP tables begin with /BI0 and customer tables with /BIC; all generated objects start with 1-8 (like the export data source, which starts with 8); the prefix 9A is used in APO.

Q. What is an info object?
A business object like customer, product, etc.; info objects are divided into characteristics and key figures; characteristics are evaluation objects (like customer) and key figures are measurable objects (like sales quantity); characteristics also include the special objects unit and time.

Q. What are the data types supported by characteristics?
NUMC, CHAR (up to 60), DATS and TIMS

Q. What is an external hierarchy?
Presentation hierarchies for characteristic values that are stored in their own tables (hierarchy tables).

Q. What are time dependent texts / attributes of characteristics?
If the text (for example the name of a product or person) or an attribute (for example a job title) changes over time, then it must be marked as time dependent.

Q. Can you create your own time characteristics?
No; the time characteristics are delivered by SAP and customers cannot create their own.

Q. What are the types of attributes?
Display only and navigational; display only attributes are only for display and
no analysis can be done; navigational attributes behave like regular
characteristics; for example assume that we have a customer characteristics
with country as a navigational attribute; you can analyze the data using
customer and country.

Q. What is Alpha conversion?
Alpha conversion stores data consistently by prefixing numeric values with leading zeros; for example, if you define material as NUMC length 6, then the number 1 is stored as 000001 but displayed as 1; this removes inconsistencies such as 01 vs. 001.
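The padding/stripping behavior can be sketched in a few lines (plain Python, purely illustrative; the function names are made up and this is not the actual SAP conversion exit):

```python
def alpha_input(value: str, length: int = 6) -> str:
    """INPUT direction: pad purely numeric values with leading zeros."""
    value = value.strip()
    if value.isdigit():
        return value.zfill(length)
    return value

def alpha_output(value: str) -> str:
    """OUTPUT direction: strip leading zeros again for display."""
    if value.isdigit():
        return value.lstrip("0") or "0"
    return value

# "1" and "01" now map to the same internal value, removing the inconsistency
print(alpha_input("1"))        # 000001
print(alpha_input("01"))       # 000001
print(alpha_output("000001"))  # 1
```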

Q. What is the alpha check execution program?
It checks the consistency of a BW 2.x system before upgrading it to 3.x; the transaction is RSMDCNVEXIT.

Q. What is the attributes only flag?
If the flag is set, no master data is stored; the object is used only as an attribute for other characteristics; for example, comments on an AR document.

Q. What is compounding?
This defines the superior info object which must be combined to define an
object; for example when you define cost center then controlling area is the
compounding (superior) object.

Q. What are the BEx options for characteristics (like F4 help) in query definition and execution?
They define how data is displayed in the query definition screen or when the query is executed; the options are: from the data displayed, from the master data table (all data), and from dimension data. For example, assume you have 100 products in all and 10 products in a cube, and in BEx you display the query for 2 products; the options for product will display different data:
a. selective data only - will display 2 products
b. dimension data - will display 10 products
c. from master data - will display all 100 products

Q. What are the data types allowed for key figures?
Amount, number, integer, date and time. 

Q. What is the difference between amount/quantity and number?
Amount and quantity always come with units; for example sales will be an amount (with a currency) and inventory a quantity (with a unit of measure); a number has no units.

Q. What are the aggregation options for key figures?
If you are defining prices then you may want to set “no aggregation” or you can
define max, min, sum; you can also define exception aggregation like first, last,
etc; this is helpful in getting headcount; for example if you define a monthly
inventory count key figure you want the count as of last day of the month.
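The "last value" exception aggregation can be sketched like this (illustrative Python, not SAP code; the sample data and key names are made up): the value with the latest date wins per key, which is exactly the month-end head count / inventory behavior described above.

```python
# Hypothetical monthly inventory counts per (cost_center, day).
records = [
    ("CC1", "2024-01-10", 100),
    ("CC1", "2024-01-31", 120),   # last day of the month wins
    ("CC2", "2024-01-15", 50),
    ("CC2", "2024-01-31", 55),
]

def last_value(rows):
    """Exception aggregation LAST: keep the value with the latest date per key."""
    result = {}
    for key, day, value in sorted(rows, key=lambda r: r[1]):
        result[key] = value          # later dates overwrite earlier ones
    return result

print(last_value(records))   # {'CC1': 120, 'CC2': 55}
```

A plain SUM over the same rows would give 220 and 105, which is exactly the wrong answer for a head-count style key figure.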

Q. What is the maximum number of key figures you can have in an info cube?
233.

Q. What is the maximum number of characteristics you can have per dimension?
248.

Q. What are the nine decision points of data warehousing?
a. Identify the fact table
b. Identify the dimension tables
c. Define attributes of entities
d. Define the granularity of the fact table (level of detail)
e. Pre-calculated key figures
f. Slowly changing dimensions
g. Aggregates
h. How long data will be kept
i. How often data is extracted

Q. How many dimensions are in a cube?
16 in total, of which 3 are predefined (time, unit and request); the customer is left with 13 dimensions.

Q. What is a SID table and what are its advantages?
The SID table (Surrogate ID table) is the interface between master data and the dimension tables; advantages:
a. uses numeric indexes for faster access
b. master data is independent of info cubes
c. language support
d. slowly changing dimension support

Q. What are the other tables created for master data?
a. P table - Time independent master data attributes
b. Q table - Time dependent master data attributes
c. M view - Combines P and Q
d. X table - Interface between master data SIDs and time independent
navigational attribute SIDs (P is linked to the X table)
e. Y table - Interface between master data SIDs and time dependent
navigational attribute SIDs (Q is linked to the Y table)

Q. What is the transfer routine of the info object?
It is like a start routine; it is independent of the data source and valid for all transfer rules; you can use it to define global data and global checks.

Q. What is the DIM ID?
DIM IDs link the dimension tables to the fact table.

Q. What is table partitioning?
SAP uses fact table partitioning to improve performance; you can partition only on 0CALMONTH or 0FISCPER.

Q. How many extra partitions are created and why?
Usually 2 extra partitions are created to accommodate data before the begin date
and after the end date

Q. Can you partition a cube which already has data?
No; the cube must be empty to do this. One workaround: copy cube A to cube B; export the data from A to B using the export data source; empty cube A; create the partitions on A; re-import the data from B; delete cube B.

Q. What is the transaction for the Administrator Workbench?
RSA1.

Q. What is a source system?
Any system that is sending data to BW like R/3, flat file, oracle database or
external systems.

Q. What is a data source?
The source which sends data to a particular info source in BW; for example the 0CUSTOMER_ATTR data source supplies attributes to the 0CUSTOMER info source.

Q. What is an info source?
A group of logically related objects; for example the 0CUSTOMER info source contains data related to the customer, with attributes like customer number, address, phone number, etc.

Q. What are the types of info source?
Transactional, attributes, text and hierarchy

Q. What is the communication structure?
An independent structure created from the info source; it is independent of the source system/data source.

Q. What are transfer rules?
The transformation rules for data from source system to info
source/communication structure

Q. What is global transfer rule?
This is a transfer routine (ABAP) defined at the info object level; this is
common for all source systems. 

Q. What are the options available in the transfer rules?
Assign an info object, assign a constant, an ABAP routine or a formula (from version 3.x); examples:
a. Assign info object - direct transfer; no transformation
b. Constant - for example if you are loading data from a specified country in a
flat file, you can make the country a constant and assign the value
c. ABAP routine - for example if you want to do some complex string
manipulation; assume that you are getting a flat file from a legacy system and
the cost center is embedded in a field and you have to "massage" the data to
get it; in this case use ABAP code
d. Formula - for simple calculations; for example if you want to convert all
lower case characters to upper case, use the TOUPPER formula

Q. Give some important formulas available.
Concatenate, substring, condense, left/right (n characters), l_trim, r_trim,
replace, date routines like DATECONV, date_week, add_to_date, date_diff,
and logical functions like IF and AND.

Q. When you write ABAP code for a transfer rule, what are the important variables
you use?
  a. RESULT - receives the result of the ABAP code
  b. RETURNCODE - set this to 0 if everything is OK; otherwise the record is skipped
  c. ABORT - set this to a value other than 0 to abort the entire package
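The contract of these three variables can be mimicked in a small sketch (Python, purely illustrative; the real routine is ABAP inside the transfer rule, and the field names here are made up):

```python
def transfer_routine(record):
    """Mimics RESULT / RETURNCODE / ABORT of an ABAP transfer routine."""
    result, returncode, abort = None, 0, 0
    value = record.get("COSTCENTER", "").strip()
    if not value:
        returncode = 4        # non-zero: this record is skipped
    elif record.get("FATAL"):
        abort = 1             # non-zero: abort the entire data package
    else:
        result = value.upper()
    return result, returncode, abort

print(transfer_routine({"COSTCENTER": "it-100"}))  # ('IT-100', 0, 0)
print(transfer_routine({"COSTCENTER": ""}))        # (None, 4, 0)
```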

Q. What is the process of replication?
This copies data source structures from R/3 to BW

Q. What is the update rule?
The update rule defines the transformation of data from the communication
structure to the data targets; it is independent of the source systems/data sources.

Q. What are the options in update rules?
a. one to one move of info objects
b. constant
c. lookup for master data attributes
d. formula
e. routine (ABAP)
f. initial value

Q. What are the special conversions for time in update rules?
Time dimensions are automatically converted; for example if the cube contains
calendar month and your transfer structure contains date, the date to calendar
month is converted automatically. 

Q. What is the time distribution option in the update rule?
It distributes data over time; for example if the source contains the calendar
week and the target contains the calendar day, the data is split across the
calendar days. Here you can select either the normal calendar or the factory calendar.
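The idea can be sketched as follows (illustrative Python, not the SAP implementation; an even split over 7 days, whereas a factory-calendar variant would divide only over working days):

```python
from datetime import date, timedelta

def distribute_week_to_days(week_start: date, amount: float, days: int = 7):
    """Split a weekly key figure value evenly over the calendar days of the week."""
    per_day = amount / days
    return {week_start + timedelta(d): per_day for d in range(days)}

split = distribute_week_to_days(date(2024, 1, 1), 700.0)
# 7 daily records of 100.0 each instead of one weekly record of 700.0
```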

Q. What is the return table option in update rules for key figures?
Usually the update rule sends one record to the data target; using this option you
can send multiple records; for example if we get the total telephone
expenses for a cost center, you can use this to return the telephone expenses for
each employee (by dividing the total expenses by the number of employees in
the cost center) and create a cost record per employee in the ABAP code.
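The fan-out can be sketched like this (illustrative Python; the field names are made up, the real routine fills an ABAP return table):

```python
def split_expense(record, employees):
    """Return-table style fan-out: one record in, several records out."""
    share = record["expense"] / len(employees)
    return [{"costcenter": record["costcenter"],
             "employee": emp,
             "expense": share} for emp in employees]

rows = split_expense({"costcenter": "CC1", "expense": 300.0}, ["E1", "E2", "E3"])
# three records of 100.0 each instead of one record of 300.0
```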

Q. What is the start routine?
The first step in the update process is to call start routine; use this to fill global
variables to be used in update routines;

Q. How would you optimize the dimensions?
Use as many dimensions as possible for performance; for example assume that
you have 100 products and 200 customers; if you put both in one dimension, the
worst-case size of the dimension table is 20,000 rows; with individual dimensions
the total is only 300 rows. Even if you put more than one characteristic per
dimension, do the math for the worst case and decide which characteristics may
be combined in a dimension.
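The worst-case arithmetic behind this rule is simple to sketch (illustrative Python): a combined dimension can hold the Cartesian product of its characteristics' cardinalities, while separate dimensions hold only the sum.

```python
from math import prod

def combined_dimension_rows(cardinalities):
    """Worst-case dimension table rows if all characteristics share one dimension."""
    return prod(cardinalities)

def separate_dimension_rows(cardinalities):
    """Total rows if each characteristic gets its own dimension."""
    return sum(cardinalities)

print(combined_dimension_rows([100, 200]))   # 20000
print(separate_dimension_rows([100, 200]))   # 300
```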

Q. What is the conversion routine for units and currencies in the update rule?
Using this option you can write ABAP code for unit/currency conversion; if you
enable this flag, then unit of the key figure appears in the ABAP code as an
additional parameter; for example you can use this to convert a quantity in
pounds to kilograms.

Q. How do you add an entry in the monitor log from the update rules?
This is added via the internal table MONITOR; the following fields describe the
MONITOR structure:
a. MONITOR-MSGID -> message ID
b. MONITOR-MSGTY -> message type
c. MONITOR-MSGNO -> message number
d. MONITOR-MSGV1 -> monitor message 1
e. MONITOR-MSGV2 -> monitor message 2
f. Append the record to the MONITOR table; it will show up in the monitor.

Q. What is a data mart?
A BW system can be a source for another BW system or for itself; the
ODS objects/cubes/info providers which provide data to another system are called
data marts.
Q. What is the myself data mart?
The BW system feeding data to itself is called the myself data mart; this is
created automatically; uses ALE for data transfer

Q. How do you create a data mart?
a. Right click and create the export data source for the ODS/cube
b. In the target system replicate the data source
c. Create transfer rules and update rules
d. Create info package to load

Q. Can you make multi providers and master data as data marts?

Q. What are the benefits of data marts?
a. Simple to use
b. Hub and spoke usage
c. Distributed data
d. Performance improvement in some cases 

Q. What are events and how do you use them in BW?
Events are background signals that tell the system that a certain status has been
reached; you can use events in batch jobs; for example after you load data into a
cube you can trigger an event which starts another job that runs the reporting
agent. Use SM62 to create and maintain events.

Q. What is an event chain?
A group of events which complete independently of one another; use it to
check the successful completion of multiple events; for example you can trigger a
chain event only if all loads are successful.

 Q. How do you create event chains?
AWB -> Tools -> Event collector

Q. What is the PSA?
The Persistent Staging Area; its structure is based on the transfer structure of the source system.

Q. What are the options available for updates to data targets?
a. PSA and data targets in parallel - improves performance
b. PSA and data target in sequence
c. PSA only - you have to manually load data to data targets
d. Data targets only - No PSA

Q. Why, if one request fails, are all subsequent requests turned "red"?
To avoid inconsistency and to make sure that only verified data enters
the system.

Q. What are the two fact tables?
There are two fact tables for each info cube; it is the E table and the F table; 

Q. What is compression or collapse?
The process by which the request IDs are deleted; all regular requests are stored
in the F table; when you compress, the request ID is deleted and the data is
moved from the F table to the E table; this saves space and improves
performance, but the disadvantage is that you cannot delete the compressed
requests individually.

Q. What is reconstruction?
The process by which you load data into the same cube or into a different data
target from requests that have already been loaded (for example from the PSA).

Q. What is a remote cube?
A remote cube is a logical cube where the data is read from an external
source at query time; it is usually used to report real-time data from an R/3
system instead of drilling down from BW to R/3.

Q. What is a virtual info cube with services?
In this case a user-defined function module is used as the data source.

Q. What are the restrictions/recommendations for using remote cube?
These are used for small volume of data with few users; no master data allowed

Q. Give examples of data sources that support remote cubes?
0FI_AP_3 - vendor line items, 0FI_AR_3 - customer line items

Q. What is a multi provider?
Using multi provider you can access data from different data sources like cubes,
ODS, infosets, master data

Q. What are the added features in 3.x for multi providers?
Prior to 3.x only multi cubes were available; you could not combine an ODS and a
cube, for example.

Q. What is an info set?
An info provider delivering data by joining data from different sources like ODS
objects, master data, etc.

Q. What is the difference between multi provider and infoset?
Multi provider is a Union whereas infoset is a “Join” (intersection)
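The union-versus-join distinction can be shown with a toy example (illustrative Python, not BW internals; the keys and values are made up): a multi provider unions the rows of its providers, while an infoset joins them on common keys.

```python
cube_rows = [("C1", 100), ("C2", 200)]       # e.g. sales per customer
ods_rows  = [("C2", "NY"), ("C3", "LA")]     # e.g. region per customer

# MultiProvider-style union: all rows from both providers (padded with None)
union = [(k, v, None) for k, v in cube_rows] + [(k, None, r) for k, r in ods_rows]

# InfoSet-style inner join: only keys present on both sides survive
ods_map = dict(ods_rows)
join = [(k, v, ods_map[k]) for k, v in cube_rows if k in ods_map]

print(len(union))   # 4 rows -- everything from both providers
print(join)         # [('C2', 200, 'NY')] -- only the intersection
```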

Q. Can you create an info set with info cube?
No; only ODS and master data are allowed

Q. What is a line item (or degenerate) dimension?
If the size of a dimension of a cube is large relative to the fact table (more than
roughly 20%), you define that dimension as a line item dimension; for example if
you store the sales document number in one dimension of a sales cube, the
dimension table and the fact table will usually be about the same size; with the
added overhead of lookups for DIM IDs/SIDs, performance will be very poor; by
flagging it as a line item dimension, the system puts the SID in the fact table
instead of a DIM ID for the sales document number; this avoids one lookup into
the dimension table (the dimension table is not created in this case).

Q. What are the limitations of line item dimension?
Only one characteristic is allowed per line item dimension.

Q. What is a transactional info cube?
These cubes are used for both read and write; standard cubes are optimized for
reading. The transactional cubes are used in SEM.

Q. What is the cache monitoring transaction?
RSRCACHE.

Q. What are the profile parameters for cache?
rsdb/esm/buffersize_kb (maximum size of the cache) and rsdb/esm/max_objects
(maximum number of entries in the cache)

Q. Can you disable the cache?
Yes, either globally or per query using the query debug tool RSRT.

Q. What does the program RSMDCNVEXIT check?
a. all characteristics with the conversion exits ALPHA, NUMC and GJAHR
b. all characteristics which are compounded to such characteristics

Q. Can you restart the conversion?

Q. When should you do the alpha conversion?
If you are upgrading, you must do it before the PREPARE phase of the upgrade.

Q. Can you make an info object an info provider, and why?
Yes; when you want to report on characteristics or master data, you can make
them an info provider; for example you can make 0CUSTOMER an info
provider and do BEx reporting on 0CUSTOMER; right click on the info area and
select "Insert characteristic as data target".

Q. What are the control parameters for data transfer?
This defines the maximum size of packet, max no of records per packet, the
number of parallel processes, etc

Q. What is number range object?
This defines the characteristic attributes; for example the object MATERIALNR
defines the attributes of material master like the length, etc

Q. How do you set up the permitted characters?
Using transaction RSKC.

Q. What is aggregate realignment run maintenance?
It defines the percentage of master data changes above which a realignment run
causes a reconstruction of the aggregates instead of an adjustment.

Q. What is update mode for master data?
Defines whether the master data (auto sid) is added automatically for non
existing master data when you load transaction data.

Q. What are the ODS object settings?
They define the number of parallel processes for activation, the minimum number
of data records per process, and the wait time.

Q. What are the settings for flat files?
They define the thousands separator, the decimal point, the field separator
(default ;) and the field delimiter (default ").
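These settings map directly onto how a flat file is parsed; a rough Python equivalent (illustrative only, using the defaults mentioned above and a made-up two-column file):

```python
import csv, io

# Sample flat file using the defaults: field separator ';', field delimiter '"'
data = io.StringIO('MATERIAL;QTY\n"M-01";10\n"M-02";20\n')

reader = csv.reader(data, delimiter=";", quotechar='"')
rows = list(reader)
print(rows)   # [['MATERIAL', 'QTY'], ['M-01', '10'], ['M-02', '20']]
```

The thousands separator and decimal point settings would similarly control how a value like "1.234,56" is interpreted when converting the strings to numbers.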

Q. Which transaction defines the background user in source system?

Non Cumulative Key Figures

Q. What are non-cumulative key figures?
These key figures are not summed up over time (unlike sales, etc.); examples are
head count and inventory amount; they always relate to a point in time; for
example we ask how many employees we had as of last quarter - we don't
add up the head counts.
Example: the content key figure 0TOTALSTCK (Quantity Total Stock)
is a non-cumulative key figure. It has the exception aggregation "last value",
with "receipt qty total stock" as inflow and "issue qty total stock" as outflow.

Q. What is standard and exception aggregation?
Standard aggregation specifies how a key figure is aggregated over all
characteristics except time; exception aggregation specifies how the key figure is
aggregated over the time characteristic.

Q. What is inflow and outflow?
These are non cumulative changes used to get the right quantity

Q. What is a “Marker”?
Non cumulatives are stored using a “Marker” for the current period.

Q. What is a time reference characteristic?
It is the time characteristic that determines all other time characteristics.

Q. Give example data sources supporting this?
2LIS_03_BF and 2LIS_03_UM

Q. What is the opening balance?
When you start loading inventory data from R/3 you start at a certain point in
time; this is what is called the opening balance; assume that you have had
inventory since Jan 2002 and you load data in Jan 2003 with an opening balance
of 200 for the product; the data before Jan 2003 is "historic data"; any data loaded
after Jan 2003 is a delta load.

Q. What is "No Marker Update"?
If you choose this option when compressing a non-cumulative cube, the reference
point (marker) is not updated, but the requests are still moved to request 0 (the
usual compression); you must do this when compressing historical data; for
example use this option to compress the data before Jan 2003.

Q. What are the steps to load a non-cumulative cube?
a. initialize the opening balance in R/3 (S278)
b. activate the extract structure MC03BF0 for data source 2LIS_03_BF
c. set up the historical material documents in R/3
d. load the opening balance using data source 2LIS_40_S278
e. load the historical movements and compress without marker update
f. set up the V3 update
g. load the deltas using 2LIS_03_BF

Q. How is the query result calculated?
Qty at a requested point in time = reference point (marker) qty minus the deltas
between that point in time and the marker (the movements are rolled back
from the reference point).
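A rough sketch of this calculation (illustrative Python; the real OLAP processor logic is more involved, and the dates/quantities here are made up):

```python
def stock_at(marker_qty, movements, as_of):
    """movements: list of (date, delta_qty). Roll deltas after `as_of` back out
    of the reference point (marker) to reconstruct an earlier stock level."""
    rolled_back = sum(qty for day, qty in movements if day > as_of)
    return marker_qty - rolled_back

movements = [("2024-01-10", +50), ("2024-02-05", -20), ("2024-03-01", +30)]
# marker (current stock) = 160; stock as of end of January:
print(stock_at(160, movements, "2024-01-31"))   # 160 - (-20 + 30) = 150
```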

Q. What is a validity determining characteristic?
It determines the validity period of a non-cumulative cube; for example plants
opening and closing at different times.

Q. What are the dos and don’ts?
a. use few validity objects
b. compress the cube ASAP


Q. What is an authorization object?
It defines the fields for authorization checks.

Q. What is the role maintenance transaction?
PFCG.

Q. What is a role?
Usually defines the responsibility of a user with proper menu and authorization
- example receiving clerk

Q. Give some examples of the roles delivered with SAP BW.
All the BW roles start with S_RS; S_RS_ROPAD - production system
administrator; S_RS_RREPU - BEx reporting user.

Q. What are the different authorization approaches available in BW?
      a. info cube based approach - use this in conjunction with the info
          area to limit access
      b. query name based approach - many customers use this to limit access;
          for example Z queries are read-only, Y queries are read/write,
          FI* query names are for FI users, etc.
      c. data set approach - limitation by characteristics and key figures;
          you can use reporting authorizations for this.

Q. What are the two object classes of BW authorization?
BW Warehouse authorization - SAP standard; BW Reporting - Not delivered by SAP - user has to create

Q. How many fields can you assign to an authorization object?
10.

Q. What are the values for ACTVT?
Create, change and display

Q. Give some examples of standard authorization objects delivered for BW.
      a. S_RS_IOMAD - master data
      b. S_RS_ADMWB - AWB objects
      c. S_RS_ODSO - ODS objects
      d. S_RS_TOOLS - BEx tools
      e. S_RS_ICUBE - info cube
      f. S_RS_HIER - hierarchy
      g. S_RS_COMP, S_RS_COMP1 - reporting authorization
      h. S_RS_FOLD - folders
      i. S_RS_IOBJ - info object
      j. S_RS_ISOUR - info source (transaction data)
      k. S_RS_ISRCM - info source (master data)
      l. S_GUI - GUI activities (workbooks)
      m. S_BDS_DS - document set (for workbooks)
      n. S_USER_AGR - role check for saving a workbook in a role
      o. S_USER_TCD - transactions in roles for saving a workbook in a role

Q. What is a reporting authorization object?
It is used in BW reporting to check authorizations in the OLAP processor.

Q. Give a step by step approach to create an authorization object; let us assume that we want to restrict the report by cost center.
      a. make the info object authorization relevant (flag) and activate it;
          in this example 0COSTCENTER
      b. create an authorization object using transaction RSSM
      c. assign the object to one or more info providers
      d. create role(s) with different values for cost centers; for example
          you can create a role called "IT Manager" and assign all IT cost centers
      e. assign the role to users
      f. create a query; create a variable within the query for 0COSTCENTER
          of type "Authorization" and include it in the query; if the IT manager
          runs the query it shows only the cost centers assigned to him/her.

Q. How do you implement structural authorizations in BW?
      a. create a profile using transaction OOSP
      b. assign the user to the profile using transaction OOSB
      c. update table T77UU
      d. run the program RHBAUS00
      e. activate the data source 0HR_PA_2 and related components in BW
      f. load the ODS from R/3
      g. activate the target info objects as "Authorization relevant"
      h. run RSSB to generate the BW authorizations.

Q. What are the new BW 3.x authorizations?
S_RS_COMP1 checks authorization depending on the owner of the query; S_RS_FOLD suppresses the info area view of BEx elements; S_RS_ISET is for info sets; S_GUI has the new activity code 60 for upload.

Q. What is the use of ':' as an authorization value?
      a. it enables queries that do not contain an authorization-relevant
          object that is checked in the info cube
      b. it allows summary data to be displayed when the user does not have
          access to the detailed data; for example if you create two authorizations
          for one user, one with sales org * and customers : and a second with
          sales org 1000 and customers *, the user sees all customers for sales
          org 1000 and only a summarized report for the other sales orgs.

Q. What is $ as an authorization value?
You use $ followed by a variable name (the values are populated in the user exit for BEx); this avoids having too many roles.

Q. What is the info object 0TCTAUTHH?
It is used in hierarchy authorizations.

What is the t-code to see the log of the transport connection?
In RSA1 -> Transport Connection you can collect the queries and roles and then transport them (release the transport in SE10, import it in STMS):
1. RSA1
2. Transport Connection (button on the left bar menu)
3. SAP Transport -> Object Types (button on the left bar menu)
4. Find Query Elements -> Query
5. Find your query
6. Group the necessary objects
7. Transport the objects (car icon)
8. Release the transport (t-code SE10)
9. Import the transport (t-code STMS)

Or go directly to SE01.

LO - MM inventory data sources: what is the significance of the marker?
The marker is like a checkpoint when you upload data from the inventory data sources: 2LIS_03_BX supplies the current stock and 2LIS_03_BF the material movements. After uploading from BX you must compress the request in the cube; then load the historical movements from BF and compress them with "no marker update" set, so the marker stays a valid checkpoint. If you don't do this, you get data mismatches at the BEx level.
(2LIS_03_BX Stock Initialization for Inventory Management - compress with marker update, i.e. leave the "no marker update" checkbox unchecked)
(2LIS_03_BF Goods Movements from Inventory Management - compress historical data with the "no marker update" checkbox checked; compress deltas with marker update)
(2LIS_03_UM Revaluations - compress with marker update)
These settings are made in the "Collapse" tab of the cube administration.

How can you navigate to see the error IDocs?
Check the IDocs in the source system: go to BD87, enter your user ID and the date, and execute; you will find the red-status IDocs; select the erroneous IDoc and choose manual processing.
You need to reprocess the IDocs which are red; for this you can take the help of your ALE/IDoc or Basis team, or you can push them manually by reprocessing them from the BD87 screen. Also try to find out why these IDocs got stuck there.

How can you decide whether query performance is slow or fast?
You can check that in t-code RSRT: execute the query in RSRT, then go to SE16 and display table RSDDSTAT (BW 3.x) or RSDDSTAT_DM (BI 7.0); there you can view details about the query, such as the time taken to execute it and the timestamps.

Why do we have to construct setup tables?
The R/3 database structure for accounting is much simpler than the logistics structure. Once you post to a ledger, that is done; you can correct it, but that just creates another posting. BI can get information directly out of this (relatively) simple database structure.
In LO, an order can have multiple deliveries to more than one delivery address, and the payer can also be different. When one item (order line) changes, this can be reflected in the order, supply, delivery, invoice, etc. Therefore a special record structure is built for logistics reports, and this structure is now used for BI.
In order to have this special structure filled with your starting position, you must run a setup; from that moment on, R/3 will keep filling this LO database. If you didn't run the setup, BI would only get data from the moment you started filling LO (with the Logistics Cockpit).

How can you eliminate duplicate records in transaction data and master data?
Try checking the system logs through SM21 for the same.

What is the use of the marker in MM?
The marker update is like a checkpoint: it gives a snapshot of the stock on a particular date, i.e. the date the marker was last updated. Because we use non-cumulative key figures, it would take a lot of time to calculate the current stock at report time; to overcome this we use the marker update.
Marker updates do not summarize the data. In inventory management scenarios we have to calculate the opening and closing stock on a daily basis; to facilitate this, we set a marker which adds and subtracts the values of each movement record.
Without the marker update, the movements would simply be added up and would not provide the correct values.
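The marker mechanics can be sketched as a toy model (illustrative Python, not SAP code): compressing with marker update rolls the request's movements into the reference point; compressing historical data with "no marker update" leaves the marker untouched.

```python
class NonCumulativeCube:
    """Toy model of the marker (reference point) of a non-cumulative cube."""
    def __init__(self, marker=0):
        self.marker = marker

    def compress(self, request_deltas, no_marker_update=False):
        if not no_marker_update:
            self.marker += sum(request_deltas)   # deltas roll into the marker

cube = NonCumulativeCube()
cube.compress([100])                              # BX init: marker = 100
cube.compress([40, -10], no_marker_update=True)   # historical data: marker stays 100
cube.compress([25])                               # delta load: marker = 125
print(cube.marker)   # 125
```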

Web templates
You get information on where the web template details are stored from the following tables:
RSZWOBJ - storage of the web objects
RSZWOBJTXT - texts for templates/items/views
RSZWOBJXREF - structure of the BW objects in a template
RSZWTEMPLATE - header table for BW HTML templates
You can check these tables and search for your web template entry. However, to correct a template you have to open it in the WAD and make the corrections there.

What is a dashboard?
A dashboard can be created using the Web Application Designer (WAD) or the Visual Composer (VC). A dashboard is a collection of reports, views, links, etc. in a single view; iGoogle, for example, is a dashboard.

A dashboard is a graphical reporting interface which displays KPIs (Key Performance Indicators) as charts and graphs; it is a performance management tool.

When we look at how all of the organization's measures are performing from a helicopter view, we need a report that quickly shows the trends in a graphical display. These reports are called dashboard reports. We could still report these measures individually, but by keeping all measures on a single page we create a single access point for users to view all the information available to them. This saves a lot of precious time, gives clarity on the decisions that need to be taken, and helps users understand each measure's trend within the business flow.

Dashboards can be built with the Visual Composer and the WAD. To create your dashboard in BW:

(1) Create all BEx Queries with required variants,tune them perfectly.
(2) Differentiate table queries and graph queries.
(3) Choose the graph type required that meet your requirement.
(4) Draw the layout how the Dashboard page looks like.
(5) Create a web template that has navigational block / selection information.
(6) Keep the navigational block fields common across the measures.
(7) Include the relevant web items in the web template.
(8) Deploy the URL/iView to users through the portal/intranet.

The steps to be followed in the creation of Dashboard using WAD are summarized as below:

1) Open a new web template in the WAD.
2) Define the tabular layout as per the requirements so as to embed the necessary web items.
3) Place the appropriate web items in the appropriate tabular grids.
4) Assign queries to the web items (a query assigned to a web item is called a data provider).
5) Take care to ensure that the navigation block's selection parameters are common across all the BEx queries of the affected data providers.
6) Set the properties of the individual web items as per the requirements; they can be modified in the Properties window or in the HTML code.
7) Use the URL generated when this web template is executed in the portal/intranet.

What do we do in the Business Blueprint stage?
SAP has defined a business blueprint phase to help extract pertinent information about your company that is necessary for implementation. These blueprints are in the form of questionnaires designed to probe for information that uncovers how your company does business. As such, they also serve to document the implementation. Each business blueprint document essentially outlines your future business processes and business requirements. The kinds of questions asked are germane to the particular business function, as seen in the following sample questions:
1) What information do you capture on a purchase order?
2) What information is required to complete a purchase order?
AcceleratedSAP question and answer database: the question and answer database (QADB) is a simple although aging tool designed to facilitate the creation and maintenance of your business blueprint. This database stores the questions and the answers and serves as the heart of your blueprint. Customers are provided with a customer input template for each application that collects the data. The question and answer format is standard across applications to facilitate easier use by the project team.
Issues database: another tool used in the blueprinting phase is the issues database. This database stores any open concerns and pending issues that relate to the implementation. Centrally storing this information assists in gathering and then managing issues to resolution, so that important matters do not fall through the cracks. You can then track the issues in the database, assign them to team members, and update the database accordingly.

How do we gather the requirements for an implementation project?
One of the biggest and most important challenges in any implementation is gathering and understanding the end-user and process-team functional requirements. These functional requirements represent the scope of analysis needs and expectations (both now and in the future) of the end user. They typically involve all of the following:
- Business reasons for the project and business questions answered by the implementation
- Critical success factors for the implementation
- Source systems that are involved and the scope of information needed from each
- Intended audience and stakeholders and their analysis needs
- Any major transformation that is needed in order to provide the information
- Security requirements to prevent unauthorized use
This process involves one seemingly simple task: find out exactly what the end users' analysis requirements are, both now and in the future, and build the BW system to those requirements. Although simple in concept, in practice gathering and reaching a clear understanding and agreement on a complete set of BW functional requirements is not always so simple.

How do we decide which cubes have to be created?
It depends on your project requirements. Customized cubes are not mandatory for all projects; only when your business requirement differs from the given scenario (BI Content cubes) do we opt for customized cubes. Normally your BW customization, and whether you create new InfoProviders, depends on your source system. If your source system is something other than R/3, you will have to customize all your objects. If your source system is R/3 and your users use only the standard R/3 business scenarios like SD, MM or FI, then you do not need to create any InfoProviders or enhance anything in the existing BW Business Content. But 99% of the time this is not the case, because projects usually include new business scenarios or new enhancements. For example, in my first project we implemented BW for Solution Manager. There we activated all the Business Content in CRM, but the source system had new scenarios for message escalation, ageing calculation, etc. For that business scenario we could not use the standard Business Content, so we took only the existing InfoObjects and created new InfoObjects that were not in the Business Content. After that we created custom DataSources, InfoProviders and reports.

Who prepares the Technical and Functional Specifications?
Technical specification: here we list all the BW objects (InfoObjects, DataSources, InfoSources and InfoProviders). Then we describe the data flow and the behaviour of the data load (either delta or full), and we can state the duration of the cube activation or creation. Purely technical BW details go in this document; it is not an end-user document.
Functional specification: here we describe the business requirements. That means we state which business areas we are implementing, such as SD, MM and FI, and then give the KPIs and the deliverable report details to the users. This document is shared between the functional consultants and the business users, and it is applicable to end users as well.

Give an example of a functional specification and explain what information we get from it.
Functional specs are the requirements of the business user; technical specs translate these requirements into technical terms. Let's say the functional spec says:
1. The user should be able to enter the key date, fiscal year and fiscal version.
2. The Company variable should default to USA, but if users want to change it, they can check the drop-down list and choose another country.
3. The calculations or formulas for the report will be displayed with a precision of one decimal place.
4. The report should return 12 months of data depending on the fiscal year the user enters, or display quarterly values.
Functional specs are also called software requirements. The technical spec then follows, resolving each of the line items listed above:
1. To give the option of key date, fiscal year and fiscal version, certain InfoObjects must be available in the system. If they are available, should variables be created for them so that they can be used as user-entry variables? To create variables: what is the approach, where do you do it, what are the technical names of the objects you will use, and what will be the technical names of the objects you create as a result of this report?
2. The same explanation goes for the rest: how you set up each variable.
3. What changes to the properties you will make to get the required precision.
4. How you will get the 12 months of data. The technical and display names of the report, who is authorized to run it, etc. are all clearly specified in the technical spec.

What is customization? How do we do it in LO?

Basic LO extraction from SAP R/3 to BW:
1. Go to transaction RSA3 and see if any data is available for your DataSource. If data is there in RSA3, go to transaction LBWG (Delete Setup Data) and delete the data by entering the application name.
2. Go to transaction SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling the Setup Table --> Application-Specific Setup of Statistical Data --> Perform Setup (relevant application).
3. In OLI*** (for example OLI7BW for the statistical setup of old documents: orders), give the name of the run and execute. Now all the available records from R/3 will be loaded into the setup tables.
4. Go to transaction RSA3 and check the data.
5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update.
6. Go to the BW system, create an InfoPackage, and under the Update tab select "Initialize delta process". Then schedule the package. Now all the data available in the setup tables is loaded into the data target.
7. For the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to direct/queued delta. By doing this, records bypass SM13 and go directly to RSA7. In transaction RSA7 you can see a green light; once new records are added, you can see them immediately in RSA7.

Tickets and authorization in SAP Business Warehouse: what are tickets? Give an example.
Tickets are the tracking tool by which the user tracks the work we do. A ticket can be a change request, a data load issue, or anything similar. Tickets are typically classified as critical or moderate; a critical ticket may need to be solved within a day or half a day, depending on the client. After solving it, the ticket is closed by informing the client that the issue is resolved. Tickets are raised during a support project and may concern any issues or problems. If the support person faces an issue, he asks the operator to raise a ticket.
The operator raises a ticket and assigns it to the respective person. "Critical" means the most complicated issues; how this is measured depends on the client. The concept of a ticket varies from contract to contract between companies. Generally, tickets raised by the client are prioritized: high priority, low priority, and so on. A high-priority ticket has to be resolved as soon as possible; a low-priority ticket is considered only after attending to the high-priority tickets. Typical tickets in production support work could be:
1. Loading any missing master data attributes/texts.
2. Creating ad hoc hierarchies.
3. Validating the data in cubes/ODS.
4. Resolving any loads that run into errors.
5. Adding/removing fields in any master data/ODS/cube.
6. DataSource enhancement.
7. Creating ad hoc reports.
How these are handled:
1. Loading missing master data attributes/texts: schedule the InfoPackages for the attributes/texts mentioned by the client.
2. Creating ad hoc hierarchies: create hierarchies in RSA1 for the InfoObject.
3. Validating the data in cubes/ODS: use the validation reports or compare BW data with R/3.
4. Load errors: analyze the error and take suitable action.
5. Adding/removing fields in master data/ODS/cube: depends on the requirement.
6. DataSource enhancement.
7. Ad hoc reports: create new reports based on the client's requirement.

Attribute change run
An attribute change run is used when there is a change in the master data; it realigns the master data. The attribute change run adjusts the master data after it has been loaded, so that the SIDs are changed, generated or adjusted and you have no problems when loading transaction data into the data targets. In detail, the hierarchy/attribute change run, which activates hierarchy and attribute changes and adjusts the corresponding aggregates, is divided into four phases:
1. Finding all affected aggregates.
2. Setting up all affected aggregates again and writing the result into the new aggregate table.
3. Activating the attributes and hierarchies.
4. Renaming the new aggregate table. During renaming it is not possible to execute queries. In some databases, which cannot rename the indexes, the indexes are also created in this phase.

What are the different types of delta updates?
A delta load brings in any new or changed records created after the last upload; this loads less data in less time. Most standard SAP DataSources come delta-enabled, but some are not; in that case you can do a full load to the ODS and then a delta from the ODS to the cube. If you create generic DataSources, you have the option of creating a delta on a calendar day, timestamp or numeric pointer field (this can be a document number, etc.). You can see the delta changes coming into the delta queue through RSA7 on the R/3 side. To do a delta, you first have to initialize the delta on the BW side and then set up the delta. The delta mechanism is the same for both master data and transaction data loads.
There are three delta update modes in LO extraction:
Direct delta: with this update mode, the extraction data is transferred with each document posting directly into the BW delta queue. Each document posting with delta extraction is posted for exactly one LUW in the respective BW delta queue.
Queued delta: with this update mode, the extraction data for the affected application is collected in an extraction queue and can be transferred, as usual with the V3 update, by means of an updating collective run into the BW delta queue. Depending on the application, up to 10,000 delta extractions of documents per LUW are compressed into the BW delta queue for each DataSource.
Non-serialized V3 update: with this update mode, the extraction data for the application is written, as before, into the update tables with the help of a V3 update module. It is kept there until the data is selected and processed by an updating collective run. However, in contrast to the serialized V3 update, the data in the updating collective run is read from the update tables without regard to sequence and transferred to the BW delta queue.
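The generic-DataSource delta on a numeric pointer mentioned above can be sketched conceptually. This is not SAP code; the record layout and field name `doc_number` are hypothetical, and the sketch only illustrates the idea of a stored delta pointer that advances with each extraction.

```python
# Conceptual sketch of a numeric-pointer delta (hypothetical data, not SAP code).

def delta_extract(records, last_pointer):
    """Return records whose pointer exceeds the last stored pointer,
    plus the new pointer value to persist for the next run."""
    new_records = [r for r in records if r["doc_number"] > last_pointer]
    new_pointer = max((r["doc_number"] for r in new_records), default=last_pointer)
    return new_records, new_pointer

source = [
    {"doc_number": 100, "amount": 50},
    {"doc_number": 101, "amount": 75},
    {"doc_number": 102, "amount": 20},
]

# Delta initialization behaves like a full load: pointer starts at 0.
init, pointer = delta_extract(source, 0)

# A new document is posted; the next delta run picks up only that record.
source.append({"doc_number": 103, "amount": 10})
delta, pointer = delta_extract(source, pointer)
```

The same pattern applies to calendar-day or timestamp pointers; only the comparison field changes.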

What is an SAP BW functional consultant responsible for? Key responsibilities include:
- Maintaining project plans.
- Managing all project activities, many of which are executed by resources not directly managed by the project leader (central BW development team, source system developers, business key users).
- Liaising with key users to agree on reporting requirements and report designs.
- Translating requirements into design specifications (report specs, data mapping/translation, functional specs).
- Writing and executing test plans and scripts.
- Coordinating and managing business/user testing.
- Delivering training to key users.
- Coordinating and managing productionization and rollout activities.
- Tracking CIP (continuous improvement) requests; working with users to prioritize, plan and manage CIP.
An SAP BW technical consultant is responsible for:
- SAP BW extraction using standard data extractors and the available development tools for SAP and non-SAP data sources.
- SAP ABAP programming with BW.
- Data modeling: star schema, master data, ODS and cube design in BW.
- Data loading processes and procedures (performance tuning).
- Query and report development using the BEx Analyzer and Query Designer.
- Web report development using the Web Application Designer.

Production support
In production support there are two kinds of jobs you will mostly be doing: 1) looking into data load errors, and 2) solving tickets raised by the users. Data loading involves monitoring process chains and solving errors related to data loads; beyond this you will also do some enhancements to the existing cubes and master data, but only on demand. A user raises a ticket when they face a problem with a query, for example a report showing wrong values or incorrect data, the system responding slowly, or the query runtime being high. Normally the production support activities include:
* Scheduling
* R/3 job monitoring
* BW job monitoring
* Taking corrective action for failed data loads
* Working on tickets with small changes in reports or in AWB objects
The activities in a typical production support role would be:
1. Data loading, either using process chains or manual loads.
2. Resolving urgent user issues (helpline activities).
3. Modifying BW reports as per the users' needs.
4. Creating aggregates in the production system.
5. Regression testing when a version/patch upgrade is done.
6. Creating ad hoc hierarchies.
Daily activities in production include: 1) monitoring data load failures through RSMO; 2) monitoring process chains (daily/weekly/monthly); 3) performing the hierarchy/attribute change run; 4) checking aggregate rollups.

How do you convert a BEx query global structure to a local structure (steps involved)?
You use a local structure when you want to add structure elements that are unique to the specific query. Changing the global structure changes the structure for all the queries that use it; that is the reason to go for a local structure. The navigation:
1. In the BEx Analyzer, from the SAP Business Explorer toolbar, choose the Open Query icon (the icon that looks like a folder).
2. On the SAP BEx Open dialog box, choose Queries, select the desired InfoCube, and choose New.
3. On the Define the Query screen, in the left frame, expand the Structure node.
4. Drag and drop the desired structure into either the Rows or Columns frame.
5. Select the global structure, right-click, and choose Remove Reference. A local structure is created.
Remember that you cannot revert the changes made to the global structure in this way; you would have to delete the local structure and then drag and drop the global structure back into the query definition. (When you try to save a global structure, a dialog box prompts you to confirm changes to all queries; that is how you identify a global structure.)

What is the use of Define Cell in BEx, and where is it useful?
Cells in BEx: when you define selection criteria and formulas for structural components and there are two structural components in a query, generic cell definitions are created at the intersections of the structural components, and these determine the values presented in the cells. Cell-specific definitions allow you to define explicit formulas (alongside the implicit cell definitions) and selection conditions for cells, and in this way to override the implicitly created cell values. This function allows you to design much more detailed queries. In addition, you can define cells that have no direct relationship to the structural components; these cells are not displayed and serve as containers for helper selections or helper formulas.
You need two structures to enable the cell editor in BEx: every query has one structure for key figures, and you have to build another structure with selections or formulas inside. With two structures, their cross product results in a fixed reporting area of n rows * m columns, and the cell at the intersection of any row and column can be defined as a formula in the cell editor. This is useful when you want a particular cell to behave differently from the general behaviour described in your query definition. For example, imagine the following, where % is the formula kfB / kfA * 100:
        kfA   kfB   %
chA      6     4    66%
chB     10     2    20%
chC      8     4    50%
Now suppose you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor you can write a formula specifically for that cell as the sum of the two cells above it, chC/% = chA/% + chB/%, giving:
        kfA   kfB   %
chA      6     4    66%
chB     10     2    20%
chC      8     4    86%
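The cell-override example above can be reproduced as a small sketch. The data and names are the hypothetical ones from the example; the generic `%` column is kfB / kfA * 100 (truncated to whole percent, matching the figures shown), and the chC/% cell is then overridden as chA/% + chB/%.

```python
# Hypothetical illustration of the BEx cell-editor example above.
rows = {
    "chA": {"kfA": 6,  "kfB": 4},
    "chB": {"kfA": 10, "kfB": 2},
    "chC": {"kfA": 8,  "kfB": 4},
}

def pct(row):
    # Generic formula for the % column: kfB / kfA * 100, truncated.
    return int(row["kfB"] * 100 // row["kfA"])

percent = {ch: pct(r) for ch, r in rows.items()}   # chC starts out as 50

# Cell-editor override: chC/% = chA/% + chB/%  ->  66 + 20 = 86
percent["chC"] = percent["chA"] + percent["chB"]
```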

What is 0RECORDMODE?
It is an InfoObject. 0RECORDMODE is used to identify the delta images in BW and is used in a DSO; it is added automatically when you activate a DSO in BW. Similarly, R/3 has a corresponding field (ROCANCEL) that holds the delta image information. Whenever you extract data from R/3 using LO, generic extraction, etc., this field is mapped to 0RECORDMODE in BW, and this is how BW identifies the delta images.

What is the difference between filter & Restricted Key Figures? Examples & Steps in BI?
Filter restriction applies to entire query. RKF is restriction applied on a keyfigure.Suppose for example, you want to analyse data only after 2006...showing sales in 2007,2008 against Materials..You have got a keyfigure called Sales in your cube
Now you will put global restriction at query level by putting Fiscyear > 2006 in the Filter.This will make only data which have fiscyear >2006 available for query to process or show.
Now to meet your requirement. belowMaterial Sales in 2007 Sales in 2008M1 200 300M2 400 700You need to create two RKF's.Sales in 2007 is one RKF which is defined on keyfigure Sales restricted by Fiscyear = 2007Similarly,Sales in 2008 is one RKF which is defined on Keyfigure Sales restricted by Fiscyear = 2008Now i think u understood the differenceFilter will make the restriction on query level..Like in above case putting filter Fiscyear>2006 willmake data from cube for yeaers 2001,2002,2003, 2004,2005 ,2006 unavailable to the query for showing up.So query is only left with data to be shown from 2007 and 2008.Within that can design your RKF to show only 2007 or something like that...
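The distinction can be sketched with hypothetical fact rows: the global filter cuts down everything the query sees, while each RKF restricts only its own key-figure column. The data, field names and `rkf` helper below are invented for illustration.

```python
# Sketch (hypothetical data): global filter vs. restricted key figure.
facts = [
    {"material": "M1", "fiscyear": 2005, "sales": 999},  # excluded by the filter
    {"material": "M1", "fiscyear": 2007, "sales": 200},
    {"material": "M1", "fiscyear": 2008, "sales": 300},
    {"material": "M2", "fiscyear": 2007, "sales": 400},
    {"material": "M2", "fiscyear": 2008, "sales": 700},
]

# Global filter: only rows with fiscal year > 2006 reach the query at all.
visible = [f for f in facts if f["fiscyear"] > 2006]

def rkf(rows, material, year):
    """RKF: the Sales key figure restricted to one fiscal year."""
    return sum(r["sales"] for r in rows
               if r["material"] == material and r["fiscyear"] == year)

# One RKF column per year, computed over the filtered data only.
report = {m: (rkf(visible, m, 2007), rkf(visible, m, 2008)) for m in ("M1", "M2")}
```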

How do you create conditions and exceptions in BI 7.0? (I know how in BW 3.5.)
From a query's name or description you cannot judge whether the query has an exception. There are two ways of finding an exception on a query:
1. Execute the queries one by one; the ones whose background colour shows exception reporting have exceptions.
2. Open the queries in the BEx Query Designer. If you find an Exceptions tab to the right of the Filter and Rows/Columns tabs, the query has an exception.

Explain the FI business flow related to BW (case studies or scenarios).
FI flow: basically there are five major topics/areas in FI.
1. G/L Accounting: related tables are SKA1 and SKB1 (master data); BSIS and BSAS hold the transaction data.
2. Accounts Receivable: related to customers. All the SD-related data, when transferred to FI, is created here. Related tables: BSID and BSAD.
3. Accounts Payable: related to vendors. All the MM-related document data, when transferred to FI, is created here. Related tables: BSIK and BSAK. The data of all six of these tables is also present in the BKPF and BSEG tables; you can link these tables with the help of BELNR and GJAHR, and with dates as well.
4. Special Purpose Ledger, which is rarely used.
5. Asset Management. In CO there is Profit Center Accounting and Cost Center Accounting.
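The BKPF/BSEG link mentioned in point 3 can be sketched as a join on document number (BELNR) and fiscal year (GJAHR). The table contents below are invented for illustration; only the field names BELNR, GJAHR, BUZEI, BUDAT and WRBTR correspond to the real SAP fields.

```python
# Hypothetical sketch: joining FI line items (BSEG) to their document
# header (BKPF) on BELNR + GJAHR, as described above.
bkpf = [  # document headers
    {"BELNR": "100001", "GJAHR": "2008", "BUDAT": "2008-03-01"},
]
bseg = [  # line items (BUZEI = item number, WRBTR = amount)
    {"BELNR": "100001", "GJAHR": "2008", "BUZEI": "001", "WRBTR": 500.0},
    {"BELNR": "100001", "GJAHR": "2008", "BUZEI": "002", "WRBTR": 500.0},
]

# Each line item picks up the posting date from its matching header.
joined = [
    {**header, **item}
    for header in bkpf
    for item in bseg
    if (header["BELNR"], header["GJAHR"]) == (item["BELNR"], item["GJAHR"])
]
```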

More Questions to come.....



This blog is solely for the purpose of getting educated in SAP. It does not encourage distribution of SAP-copyrighted publications and/or materials bearing the SAP mark. If you find any copyrighted materials, please mail me so I can remove them. This blog is not associated with SAP.
