A/B Testing for Fiori Apps in ABAP Cloud

Category: DevOps
Published
Author: Johannes

A/B testing enables data-driven decisions in product development. Instead of guessing which UI variant works better, you test both variants with real users and measure the results.

What is A/B Testing?

A/B testing (also called split testing) compares two variants of a feature by randomly assigning users to variant A or variant B and measuring how each group behaves:

A/B Testing Flow:

                 ┌──────────────┐
      User ─────►│  Experiment  │
                 │  Assignment  │
                 └──────┬───────┘
                        │
            ┌───────────┴───────────┐
            ▼                       ▼
   ┌─────────────────┐     ┌─────────────────┐
   │ 50%: Variant A  │     │ 50%: Variant B  │
   │    (Control)    │     │   (Treatment)   │
   └────────┬────────┘     └────────┬────────┘
            │ Measure: Clicks,      │ Measure: Clicks,
            │ Time, etc.            │ Time, etc.
            └───────────┬───────────┘
                        ▼
                ┌───────────────┐
                │  Statistical  │
                │  Evaluation   │
                └───────┬───────┘
                        ▼
                ┌───────────────┐
                │   Decision    │
                │    A or B?    │
                └───────────────┘

Typical Metrics

Metric          | Description                               | Example
----------------|-------------------------------------------|----------------------
Conversion Rate | Percentage of users completing an action  | 15% button clicks
Time on Task    | Time until task completion                | 45 seconds
Error Rate      | Percentage of incorrect inputs            | 3% validation errors
Engagement      | Interactions per session                  | 8 clicks
Bounce Rate     | Users who leave immediately               | 12%
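
For example, if 1,000 users see a variant and 150 of them click the primary button, that variant's conversion rate is 150 / 1,000 = 15%.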

Architecture for A/B Testing in ABAP Cloud

The implementation is based on feature flags extended with tracking functionality:

A/B Testing Architecture:

  ┌──────────────┐     ┌──────────────┐     ┌──────────────┐
  │  Fiori App   │────►│ RAP Service  │────►│  Experiment  │
  │              │     │              │     │  Assignment  │
  └──────┬───────┘     └──────────────┘     └──────────────┘
         │
         │ User Actions
         ▼
  ┌──────────────┐     ┌──────────────┐     ┌──────────────┐
  │    Event     │────►│   Tracking   │────►│  Analytics   │
  │   Tracking   │     │   Service    │     │  Evaluation  │
  └──────────────┘     └──────────────┘     └──────────────┘

Data Model for Experiments

Experiment Configuration

@EndUserText.label : 'A/B Experiments'
@AbapCatalog.enhancement.category : #NOT_EXTENSIBLE
@AbapCatalog.tableCategory : #TRANSPARENT
@AbapCatalog.deliveryClass : #A
define table zab_experiment {
key client : abap.clnt not null;
key experiment_id : abap.char(40) not null;
experiment_name : abap.char(100);
description : abap.char(255);
hypothesis : abap.string(1000);
status : abap.char(10); // DRAFT, RUNNING, COMPLETED, CANCELLED
start_date : abap.dats;
end_date : abap.dats;
target_sample_size : abap.int4;
control_percent : abap.int2; // e.g. 50 for 50%
created_by : abap.uname;
created_at : timestampl;
}

Variant Definition

@EndUserText.label : 'Experiment Variants'
@AbapCatalog.tableCategory : #TRANSPARENT
define table zab_variant {
key client : abap.clnt not null;
key experiment_id : abap.char(40) not null;
key variant_id : abap.char(10) not null; // A, B, C...
variant_name : abap.char(100);
is_control : abap_boolean; // Control group (A)
allocation_weight : abap.int2; // Weight
variant_config : abap.string(4000); // JSON configuration
}
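
The variant_config column carries whatever JSON payload the consuming app expects. A minimal, purely illustrative example for a button experiment (the field names are hypothetical):

" Hypothetical JSON payload stored in zab_variant-variant_config for variant B
DATA(lv_variant_config) = `{ "buttonText": "Buy now", "buttonType": "Emphasized" }`.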

User Assignment

@EndUserText.label : 'User Experiment Assignment'
@AbapCatalog.tableCategory : #TRANSPARENT
define table zab_assignment {
key client : abap.clnt not null;
key experiment_id : abap.char(40) not null;
key user_id : abap.char(40) not null;
variant_id : abap.char(10);
assigned_at : timestampl;
assignment_hash : abap.char(64); // For deterministic assignment
}

Variant Assignment for Users

The assignment must be deterministic and consistent - a user should always see the same variant.

Experiment Service Class

CLASS zcl_ab_experiment_service DEFINITION
PUBLIC
FINAL
CREATE PUBLIC.
PUBLIC SECTION.
TYPES:
BEGIN OF ty_variant_config,
variant_id TYPE zab_variant-variant_id,
variant_name TYPE zab_variant-variant_name,
config TYPE string,
END OF ty_variant_config.
METHODS get_variant_for_user
IMPORTING
iv_experiment_id TYPE zab_experiment-experiment_id
iv_user_id TYPE sy-uname DEFAULT sy-uname
RETURNING
VALUE(rs_result) TYPE ty_variant_config
RAISING
cx_static_check.
METHODS is_experiment_active
IMPORTING
iv_experiment_id TYPE zab_experiment-experiment_id
RETURNING
VALUE(rv_is_active) TYPE abap_bool.
PRIVATE SECTION.
METHODS calculate_bucket
IMPORTING
iv_experiment_id TYPE zab_experiment-experiment_id
iv_user_id TYPE sy-uname
RETURNING
VALUE(rv_bucket) TYPE i.
METHODS assign_variant
IMPORTING
iv_experiment_id TYPE zab_experiment-experiment_id
iv_user_id TYPE sy-uname
iv_bucket TYPE i
RETURNING
VALUE(rv_variant_id) TYPE zab_variant-variant_id.
ENDCLASS.
CLASS zcl_ab_experiment_service IMPLEMENTATION.
METHOD get_variant_for_user.
" Check if experiment is active
IF is_experiment_active( iv_experiment_id ) = abap_false.
RAISE EXCEPTION TYPE zcx_ab_experiment
EXPORTING
textid = zcx_ab_experiment=>experiment_not_active.
ENDIF.
" Check existing assignment
SELECT SINGLE variant_id FROM zab_assignment
WHERE experiment_id = @iv_experiment_id
AND user_id = @iv_user_id
INTO @DATA(lv_existing_variant).
IF sy-subrc = 0.
" Already assigned - load variant configuration
SELECT SINGLE variant_id, variant_name, variant_config AS config
FROM zab_variant
WHERE experiment_id = @iv_experiment_id
AND variant_id = @lv_existing_variant
INTO CORRESPONDING FIELDS OF @rs_result.
RETURN.
ENDIF.
" Create new assignment
DATA(lv_bucket) = calculate_bucket(
iv_experiment_id = iv_experiment_id
iv_user_id = iv_user_id
).
DATA(lv_variant_id) = assign_variant(
iv_experiment_id = iv_experiment_id
iv_user_id = iv_user_id
iv_bucket = lv_bucket
).
" Load variant configuration
SELECT SINGLE variant_id, variant_name, variant_config AS config
FROM zab_variant
WHERE experiment_id = @iv_experiment_id
AND variant_id = @lv_variant_id
INTO CORRESPONDING FIELDS OF @rs_result.
ENDMETHOD.
METHOD is_experiment_active.
SELECT SINGLE status, start_date, end_date
FROM zab_experiment
WHERE experiment_id = @iv_experiment_id
INTO @DATA(ls_experiment).
IF sy-subrc <> 0.
rv_is_active = abap_false.
RETURN.
ENDIF.
rv_is_active = xsdbool(
ls_experiment-status = 'RUNNING' AND
ls_experiment-start_date <= sy-datum AND
( ls_experiment-end_date >= sy-datum OR
ls_experiment-end_date IS INITIAL )
).
ENDMETHOD.
METHOD calculate_bucket.
" Deterministic hash based on experiment + user
DATA(lv_input) = |{ iv_experiment_id }{ iv_user_id }|.
" Calculate SHA-256 hash (CL_ABAP_MESSAGE_DIGEST)
DATA lv_hash TYPE string.
TRY.
cl_abap_message_digest=>calculate_hash_for_char(
EXPORTING
if_algorithm = 'SHA256'
if_data = lv_input
IMPORTING
ef_hashstring = lv_hash ).
CATCH cx_static_check.
" Hashing unavailable - fall back to bucket 0 (control)
rv_bucket = 0.
RETURN.
ENDTRY.
" Interpret the first 8 hex digits as a number and map it to bucket 0-99
DATA lv_hash_part TYPE x LENGTH 4.
lv_hash_part = to_upper( substring( val = lv_hash off = 0 len = 8 ) ).
DATA(lv_number) = CONV i( lv_hash_part ).
rv_bucket = lv_number MOD 100.
ENDMETHOD.
METHOD assign_variant.
" Load variants with weights
SELECT variant_id, allocation_weight
FROM zab_variant
WHERE experiment_id = @iv_experiment_id
ORDER BY variant_id
INTO TABLE @DATA(lt_variants).
" Map bucket to variant
DATA(lv_cumulative) = 0.
LOOP AT lt_variants INTO DATA(ls_variant).
lv_cumulative = lv_cumulative + ls_variant-allocation_weight.
IF iv_bucket < lv_cumulative.
rv_variant_id = ls_variant-variant_id.
EXIT.
ENDIF.
ENDLOOP.
" Persist assignment
INSERT INTO zab_assignment VALUES @(
VALUE #(
client = sy-mandt
experiment_id = iv_experiment_id
user_id = iv_user_id
variant_id = rv_variant_id
assigned_at = utclong_current( )
)
).
ENDMETHOD.
ENDCLASS.
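
A consumer only needs the assigned variant in order to branch its behavior. A minimal sketch, assuming a hypothetical experiment ID CHECKOUT_BUTTON_TEST:

DATA(lv_experiment_id) = CONV zab_experiment-experiment_id( 'CHECKOUT_BUTTON_TEST' ).
DATA(lo_experiments) = NEW zcl_ab_experiment_service( ).
TRY.
    DATA(ls_variant) = lo_experiments->get_variant_for_user(
      iv_experiment_id = lv_experiment_id ).
    IF ls_variant-variant_id = 'B'.
      " treatment behavior, e.g. render the emphasized checkout button
    ELSE.
      " control behavior (default UI)
    ENDIF.
  CATCH cx_static_check.
    " experiment not active or not found - fall back to the control behavior
ENDTRY.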

Tracking User Actions

Tracking captures all relevant user interactions for later evaluation.

Event Table

@EndUserText.label : 'Experiment Events'
@AbapCatalog.tableCategory : #TRANSPARENT
define table zab_event {
key client : abap.clnt not null;
key event_uuid : sysuuid_x16 not null;
experiment_id : abap.char(40);
variant_id : abap.char(10);
user_id : abap.char(40);
event_type : abap.char(50); // VIEW, CLICK, SUBMIT, ERROR
event_name : abap.char(100); // button_primary, form_submit
event_value : abap.string(1000); // JSON with additional data
page_url : abap.string(500);
session_id : abap.char(64);
timestamp : timestampl;
duration_ms : abap.int4;
}

Tracking Service

CLASS zcl_ab_tracking_service DEFINITION
PUBLIC
FINAL
CREATE PUBLIC.
PUBLIC SECTION.
TYPES:
BEGIN OF ty_event,
experiment_id TYPE zab_event-experiment_id,
event_type TYPE zab_event-event_type,
event_name TYPE zab_event-event_name,
event_value TYPE string,
page_url TYPE zab_event-page_url,
session_id TYPE zab_event-session_id,
duration_ms TYPE zab_event-duration_ms,
END OF ty_event.
METHODS track_event
IMPORTING
is_event TYPE ty_event.
METHODS track_conversion
IMPORTING
iv_experiment_id TYPE zab_experiment-experiment_id
iv_conversion_id TYPE string
iv_value TYPE decfloat34 OPTIONAL.
METHODS track_page_view
IMPORTING
iv_experiment_id TYPE zab_experiment-experiment_id
iv_page_url TYPE string.
PRIVATE SECTION.
DATA mo_experiment_service TYPE REF TO zcl_ab_experiment_service.
METHODS get_user_variant
IMPORTING
iv_experiment_id TYPE zab_experiment-experiment_id
RETURNING
VALUE(rv_variant_id) TYPE zab_variant-variant_id.
ENDCLASS.
CLASS zcl_ab_tracking_service IMPLEMENTATION.
METHOD track_event.
DATA(lv_variant_id) = get_user_variant( is_event-experiment_id ).
" Generate the event key; skip the event if no UUID can be created
TRY.
DATA(lv_event_uuid) = cl_system_uuid=>create_uuid_x16_static( ).
CATCH cx_uuid_error.
RETURN.
ENDTRY.
DATA lv_now TYPE timestampl.
GET TIME STAMP FIELD lv_now.
DATA(ls_event) = VALUE zab_event(
client = sy-mandt
event_uuid = lv_event_uuid
experiment_id = is_event-experiment_id
variant_id = lv_variant_id
user_id = sy-uname
event_type = is_event-event_type
event_name = is_event-event_name
event_value = is_event-event_value
page_url = is_event-page_url
session_id = is_event-session_id
timestamp = lv_now
duration_ms = is_event-duration_ms
).
INSERT zab_event FROM @ls_event.
ENDMETHOD.
METHOD track_conversion.
DATA(lv_event_value) = |\{ "conversion_id": "{ iv_conversion_id }"|.
IF iv_value IS NOT INITIAL.
lv_event_value = |{ lv_event_value }, "value": { iv_value }|.
ENDIF.
lv_event_value = |{ lv_event_value } \}|.
track_event( VALUE #(
experiment_id = iv_experiment_id
event_type = 'CONVERSION'
event_name = iv_conversion_id
event_value = lv_event_value
) ).
ENDMETHOD.
METHOD track_page_view.
track_event( VALUE #(
experiment_id = iv_experiment_id
event_type = 'PAGE_VIEW'
event_name = 'page_view'
page_url = iv_page_url
) ).
ENDMETHOD.
METHOD get_user_variant.
IF mo_experiment_service IS NOT BOUND.
mo_experiment_service = NEW zcl_ab_experiment_service( ).
ENDIF.
TRY.
DATA(ls_variant) = mo_experiment_service->get_variant_for_user(
iv_experiment_id = iv_experiment_id
).
rv_variant_id = ls_variant-variant_id.
CATCH cx_static_check.
rv_variant_id = 'UNKNOWN'.
ENDTRY.
ENDMETHOD.
ENDCLASS.
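
Server-side consumers can call the tracking service directly. A small sketch, again with the hypothetical CHECKOUT_BUTTON_TEST experiment and an illustrative conversion value:

DATA(lv_experiment_id) = CONV zab_experiment-experiment_id( 'CHECKOUT_BUTTON_TEST' ).
DATA(lo_tracking) = NEW zcl_ab_tracking_service( ).
" Log that the user reached the page under test
lo_tracking->track_page_view(
  iv_experiment_id = lv_experiment_id
  iv_page_url      = `/checkout` ).
" Log the conversion, optionally with a monetary value
lo_tracking->track_conversion(
  iv_experiment_id = lv_experiment_id
  iv_conversion_id = `order_submitted`
  iv_value         = CONV decfloat34( '99.90' ) ).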

RAP Integration for Frontend Tracking

" Behavior Definition
unmanaged implementation in class zbp_i_ab_tracking unique;
define behavior for ZI_AB_TRACKING_EVENT alias TrackingEvent
{
static action trackEvent parameter ZA_TRACKING_EVENT_PARAM;
static action trackConversion parameter ZA_CONVERSION_PARAM;
}
" Behavior Pool: the global class is an empty shell
CLASS zbp_i_ab_tracking DEFINITION PUBLIC
ABSTRACT FINAL FOR BEHAVIOR OF zi_ab_tracking_event.
ENDCLASS.
CLASS zbp_i_ab_tracking IMPLEMENTATION.
ENDCLASS.
" Handler class in the Local Types include of the behavior pool
CLASS lhc_trackingevent DEFINITION INHERITING FROM cl_abap_behavior_handler.
PRIVATE SECTION.
METHODS trackevent FOR MODIFY
IMPORTING keys FOR ACTION TrackingEvent~trackEvent.
METHODS trackconversion FOR MODIFY
IMPORTING keys FOR ACTION TrackingEvent~trackConversion.
ENDCLASS.
CLASS lhc_trackingevent IMPLEMENTATION.
METHOD trackevent.
DATA(lo_tracking) = NEW zcl_ab_tracking_service( ).
LOOP AT keys INTO DATA(ls_key).
lo_tracking->track_event( VALUE #(
experiment_id = ls_key-%param-experiment_id
event_type = ls_key-%param-event_type
event_name = ls_key-%param-event_name
event_value = ls_key-%param-event_value
page_url = ls_key-%param-page_url
session_id = ls_key-%param-session_id
) ).
ENDLOOP.
ENDMETHOD.
METHOD trackconversion.
DATA(lo_tracking) = NEW zcl_ab_tracking_service( ).
LOOP AT keys INTO DATA(ls_key).
lo_tracking->track_conversion(
iv_experiment_id = ls_key-%param-experiment_id
iv_conversion_id = ls_key-%param-conversion_id
iv_value = ls_key-%param-value
).
ENDLOOP.
ENDMETHOD.
ENDCLASS.

Collecting and Evaluating Metrics

CDS View for Experiment Metrics

@AbapCatalog.viewEnhancementCategory: [#NONE]
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Experiment Metrics'
@Analytics: { dataCategory: #CUBE }
define view entity ZI_AB_EXPERIMENT_METRICS
as select from zab_event as Event
inner join zab_assignment as Assignment
on Event.experiment_id = Assignment.experiment_id
and Event.user_id = Assignment.user_id
{
key Event.experiment_id,
key Event.variant_id,
key Event.event_type,
key Event.event_name,
@EndUserText.label: 'Event Count'
@Aggregation.default: #SUM
cast( 1 as abap.int4 ) as event_count,
@EndUserText.label: 'Unique Users'
@Aggregation.default: #COUNT_DISTINCT
Event.user_id,
@EndUserText.label: 'Average Duration'
@Aggregation.default: #AVG
Event.duration_ms,
@EndUserText.label: 'Event Date'
cast( Event.timestamp as abap.dats ) as event_date
}
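
The cube can also be consumed directly with ABAP SQL, for example to sanity-check how events are distributed across variants (the experiment ID is again the hypothetical CHECKOUT_BUTTON_TEST):

SELECT experiment_id, variant_id, event_type,
       SUM( event_count ) AS events
  FROM zi_ab_experiment_metrics
  WHERE experiment_id = 'CHECKOUT_BUTTON_TEST'
  GROUP BY experiment_id, variant_id, event_type
  ORDER BY variant_id, event_type
  INTO TABLE @DATA(lt_event_counts).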

Conversion Rate Calculation

@AbapCatalog.viewEnhancementCategory: [#NONE]
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Conversion Rates per Variant'
define view entity ZI_AB_CONVERSION_RATE
as select from zab_assignment as Assignment
left outer join zab_event as Conversion
on Assignment.experiment_id = Conversion.experiment_id
and Assignment.user_id = Conversion.user_id
and Conversion.event_type = 'CONVERSION'
{
key Assignment.experiment_id,
key Assignment.variant_id,
@EndUserText.label: 'Assigned Users'
count( distinct Assignment.user_id ) as assigned_users,
@EndUserText.label: 'Converted Users'
count( distinct Conversion.user_id ) as converted_users,
@EndUserText.label: 'Conversion Rate'
division(
count( distinct Conversion.user_id ) * 100,
count( distinct Assignment.user_id ),
2
) as conversion_rate_percent
}
group by
Assignment.experiment_id,
Assignment.variant_id

Metrics Service Class

CLASS zcl_ab_metrics_service DEFINITION
PUBLIC
FINAL
CREATE PUBLIC.
PUBLIC SECTION.
TYPES:
BEGIN OF ty_variant_metrics,
variant_id TYPE zab_variant-variant_id,
sample_size TYPE i,
conversions TYPE i,
conversion_rate TYPE decfloat34,
avg_duration_ms TYPE decfloat34,
total_events TYPE i,
END OF ty_variant_metrics,
tt_variant_metrics TYPE STANDARD TABLE OF ty_variant_metrics WITH KEY variant_id.
TYPES:
BEGIN OF ty_experiment_result,
experiment_id TYPE zab_experiment-experiment_id,
status TYPE string,
is_significant TYPE abap_bool,
winning_variant TYPE zab_variant-variant_id,
confidence_level TYPE decfloat34,
variants TYPE tt_variant_metrics,
END OF ty_experiment_result.
METHODS get_experiment_metrics
IMPORTING
iv_experiment_id TYPE zab_experiment-experiment_id
RETURNING
VALUE(rs_result) TYPE ty_experiment_result.
PRIVATE SECTION.
METHODS calculate_chi_square
IMPORTING
it_variants TYPE tt_variant_metrics
RETURNING
VALUE(rv_chi_square) TYPE decfloat34.
METHODS get_p_value
IMPORTING
iv_chi_square TYPE decfloat34
iv_degrees_freedom TYPE i
RETURNING
VALUE(rv_p_value) TYPE decfloat34.
ENDCLASS.
CLASS zcl_ab_metrics_service IMPLEMENTATION.
METHOD get_experiment_metrics.
rs_result-experiment_id = iv_experiment_id.
" Load metrics per variant
SELECT variant_id,
assigned_users AS sample_size,
converted_users AS conversions,
conversion_rate_percent AS conversion_rate
FROM zi_ab_conversion_rate
WHERE experiment_id = @iv_experiment_id
INTO TABLE @DATA(lt_rates).
LOOP AT lt_rates INTO DATA(ls_rate).
APPEND VALUE #(
variant_id = ls_rate-variant_id
sample_size = ls_rate-sample_size
conversions = ls_rate-conversions
conversion_rate = ls_rate-conversion_rate
) TO rs_result-variants.
ENDLOOP.
" Calculate statistical significance
IF lines( rs_result-variants ) >= 2.
DATA(lv_chi_square) = calculate_chi_square( rs_result-variants ).
DATA(lv_p_value) = get_p_value(
iv_chi_square = lv_chi_square
iv_degrees_freedom = lines( rs_result-variants ) - 1
).
rs_result-confidence_level = ( 1 - lv_p_value ) * 100.
rs_result-is_significant = xsdbool( lv_p_value < '0.05' ).
" Determine winner
DATA(lv_max_rate) = VALUE decfloat34( ).
LOOP AT rs_result-variants INTO DATA(ls_variant).
IF ls_variant-conversion_rate > lv_max_rate.
lv_max_rate = ls_variant-conversion_rate.
rs_result-winning_variant = ls_variant-variant_id.
ENDIF.
ENDLOOP.
ENDIF.
rs_result-status = COND #(
WHEN rs_result-is_significant = abap_true THEN 'SIGNIFICANT'
ELSE 'COLLECTING_DATA'
).
ENDMETHOD.
METHOD calculate_chi_square.
" Chi-square test for independence
DATA(lv_total_users) = REDUCE i( INIT sum = 0
FOR variant IN it_variants NEXT sum = sum + variant-sample_size ).
DATA(lv_total_conversions) = REDUCE i( INIT sum = 0
FOR variant IN it_variants NEXT sum = sum + variant-conversions ).
" Use decimal floating point arithmetic here - pure integer division
" would round the expected rate to 0 or 1
DATA(lv_expected_rate) = COND decfloat34(
WHEN lv_total_users > 0
THEN CONV decfloat34( lv_total_conversions ) / lv_total_users
ELSE 0
).
rv_chi_square = 0.
LOOP AT it_variants INTO DATA(ls_variant).
DATA(lv_expected_conv) = ls_variant-sample_size * lv_expected_rate.
DATA(lv_expected_non_conv) = ls_variant-sample_size * ( 1 - lv_expected_rate ).
IF lv_expected_conv > 0.
" Squared deviation of observed vs. expected conversions
DATA(lv_diff_conv) = ls_variant-conversions - lv_expected_conv.
rv_chi_square = rv_chi_square +
lv_diff_conv * lv_diff_conv / lv_expected_conv.
ENDIF.
IF lv_expected_non_conv > 0.
" Squared deviation of observed vs. expected non-conversions
DATA(lv_non_conv) = ls_variant-sample_size - ls_variant-conversions.
DATA(lv_diff_non_conv) = lv_non_conv - lv_expected_non_conv.
rv_chi_square = rv_chi_square +
lv_diff_non_conv * lv_diff_non_conv / lv_expected_non_conv.
ENDIF.
ENDLOOP.
ENDMETHOD.
METHOD get_p_value.
" Simplified p-value approximation
" For production use: Use statistical library or external API
" Critical values for Chi-square (df=1)
" p=0.10 -> chi=2.706
" p=0.05 -> chi=3.841
" p=0.01 -> chi=6.635
rv_p_value = COND #(
WHEN iv_chi_square >= '6.635' THEN '0.01'
WHEN iv_chi_square >= '3.841' THEN '0.05'
WHEN iv_chi_square >= '2.706' THEN '0.10'
ELSE '0.50'
).
ENDMETHOD.
ENDCLASS.

Calculating Statistical Significance

Statistical significance indicates whether an observed difference between variants is likely a real effect or merely random variation in the sample.

Basic Concepts

Term                | Meaning
--------------------|--------------------------------------------------------------------------
p-value             | Probability of observing a difference this large if there is no real effect
Significance level  | Threshold for the p-value (typically 0.05 = 5%)
Confidence interval | Range that likely contains the true value
Sample Size         | Number of observations per variant
Effect Size         | Magnitude of the difference
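
A short worked example: variant A converts 100 of 1,000 users, variant B converts 130 of 1,000. The pooled rate is 230 / 2,000 = 11.5%, so 115 conversions (and 885 non-conversions) are expected per variant. The chi-square statistic is (100-115)²/115 + (900-885)²/885 + (130-115)²/115 + (870-885)²/885 ≈ 4.42, which is above the 3.841 threshold for p = 0.05, so the difference is statistically significant at the 5% level.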

Sample Size Calculation

CLASS zcl_ab_sample_calculator DEFINITION
PUBLIC
FINAL
CREATE PUBLIC.
PUBLIC SECTION.
METHODS calculate_required_sample_size
IMPORTING
iv_baseline_rate TYPE decfloat34 " e.g. 0.10 (10%)
iv_min_detectable_effect TYPE decfloat34 " e.g. 0.02 (2%)
iv_significance_level TYPE decfloat34 DEFAULT '0.05'
iv_power TYPE decfloat34 DEFAULT '0.80'
RETURNING
VALUE(rv_sample_size) TYPE i.
METHODS estimate_test_duration
IMPORTING
iv_required_sample TYPE i
iv_daily_users TYPE i
RETURNING
VALUE(rv_days) TYPE i.
ENDCLASS.
CLASS zcl_ab_sample_calculator IMPLEMENTATION.
METHOD calculate_required_sample_size.
" Simplified formula for two proportions
" n = 2 * (z_alpha + z_beta)^2 * p * (1-p) / delta^2
" Z-values for typical parameters
DATA(lv_z_alpha) = COND decfloat34(
WHEN iv_significance_level = '0.05' THEN '1.96'
WHEN iv_significance_level = '0.01' THEN '2.58'
ELSE '1.64'
).
DATA(lv_z_beta) = COND decfloat34(
WHEN iv_power = '0.80' THEN '0.84'
WHEN iv_power = '0.90' THEN '1.28'
ELSE '0.84'
).
DATA(lv_p) = iv_baseline_rate.
DATA(lv_delta) = iv_min_detectable_effect.
DATA(lv_z_sum) = lv_z_alpha + lv_z_beta.
DATA(lv_numerator) = 2 * lv_z_sum * lv_z_sum * lv_p * ( 1 - lv_p ).
DATA(lv_denominator) = lv_delta * lv_delta.
IF lv_denominator > 0.
rv_sample_size = ceil( lv_numerator / lv_denominator ).
ENDIF.
ENDMETHOD.
METHOD estimate_test_duration.
" With 50/50 split we need double sample size
DATA(lv_total_needed) = iv_required_sample * 2.
IF iv_daily_users > 0.
" Decimal division - plain integer division would round instead of rounding up
rv_days = ceil( CONV decfloat34( lv_total_needed ) / iv_daily_users ).
ENDIF.
ENDMETHOD.
ENDCLASS.
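
A minimal usage sketch with illustrative numbers: at a 10% baseline conversion rate, a minimum detectable effect of 2 percentage points, a 5% significance level, and 80% power, the simplified formula gives 2 · (1.96 + 0.84)² · 0.10 · 0.90 / 0.02² ≈ 3,528 users per variant.

DATA(lo_calculator) = NEW zcl_ab_sample_calculator( ).
DATA(lv_sample_size) = lo_calculator->calculate_required_sample_size(
  iv_baseline_rate         = CONV decfloat34( '0.10' )
  iv_min_detectable_effect = CONV decfloat34( '0.02' ) ).
" lv_sample_size is approx. 3528 users per variant
DATA(lv_days) = lo_calculator->estimate_test_duration(
  iv_required_sample = lv_sample_size
  iv_daily_users     = 500 ).
" lv_days is approx. 15 days at 500 users per day ( ceil( 2 * 3528 / 500 ) )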

Result Interpretation

CLASS zcl_ab_result_interpreter DEFINITION
PUBLIC
FINAL
CREATE PUBLIC.
PUBLIC SECTION.
TYPES:
BEGIN OF ty_recommendation,
action TYPE string,
reasoning TYPE string,
confidence TYPE string,
END OF ty_recommendation.
METHODS interpret_result
IMPORTING
is_result TYPE zcl_ab_metrics_service=>ty_experiment_result
RETURNING
VALUE(rs_recommendation) TYPE ty_recommendation.
ENDCLASS.
CLASS zcl_ab_result_interpreter IMPLEMENTATION.
METHOD interpret_result.
DATA(lv_total_sample) = REDUCE i( INIT sum = 0
FOR v IN is_result-variants NEXT sum = sum + v-sample_size ).
" At least 100 samples per variant
DATA(lv_min_sample) = 100 * lines( is_result-variants ).
IF lv_total_sample < lv_min_sample.
rs_recommendation = VALUE #(
action = 'WAIT'
reasoning = |Not enough data. { lv_total_sample } of { lv_min_sample } required samples.|
confidence = 'LOW'
).
RETURN.
ENDIF.
IF is_result-is_significant = abap_false.
rs_recommendation = VALUE #(
action = 'CONTINUE'
reasoning = 'No significant difference detected. Continue test or review hypothesis.'
confidence = 'MEDIUM'
).
RETURN.
ENDIF.
" Significant - recommend winner
READ TABLE is_result-variants INTO DATA(ls_control)
WITH KEY variant_id = 'A'.
READ TABLE is_result-variants INTO DATA(ls_winner)
WITH KEY variant_id = is_result-winning_variant.
DATA(lv_improvement) = COND decfloat34(
WHEN ls_control-conversion_rate > 0
THEN ( ls_winner-conversion_rate - ls_control-conversion_rate ) /
ls_control-conversion_rate * 100
ELSE 0
).
rs_recommendation = VALUE #(
action = |IMPLEMENT_{ is_result-winning_variant }|
reasoning = |Variant { is_result-winning_variant } shows { lv_improvement DECIMALS = 1 }% improvement at { is_result-confidence_level DECIMALS = 1 }% confidence.|
confidence = COND #(
WHEN is_result-confidence_level >= 99 THEN 'VERY_HIGH'
WHEN is_result-confidence_level >= 95 THEN 'HIGH'
ELSE 'MEDIUM'
)
).
ENDMETHOD.
ENDCLASS.
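
The two classes are typically used together. A sketch of the evaluation flow for the hypothetical CHECKOUT_BUTTON_TEST experiment:

DATA(lv_experiment_id) = CONV zab_experiment-experiment_id( 'CHECKOUT_BUTTON_TEST' ).
DATA(lo_metrics)     = NEW zcl_ab_metrics_service( ).
DATA(lo_interpreter) = NEW zcl_ab_result_interpreter( ).
DATA(ls_result) = lo_metrics->get_experiment_metrics(
  iv_experiment_id = lv_experiment_id ).
DATA(ls_recommendation) = lo_interpreter->interpret_result( ls_result ).
" ls_recommendation-action is e.g. 'WAIT', 'CONTINUE' or 'IMPLEMENT_B'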

Integration with SAP Analytics

For more advanced analysis and visualization, the data can be exposed to SAP Analytics Cloud through an analytical CDS view.

Analytical CDS View

@AbapCatalog.viewEnhancementCategory: [#NONE]
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'A/B Test Analytics'
@Analytics: { dataCategory: #CUBE, internalName: #LOCAL }
@ObjectModel.usageType: {
serviceQuality: #D,
sizeCategory: #L,
dataClass: #MIXED
}
define view entity ZI_AB_ANALYTICS
as select from zab_event as Event
inner join zab_experiment as Experiment
on Event.experiment_id = Experiment.experiment_id
inner join zab_variant as Variant
on Event.experiment_id = Variant.experiment_id
and Event.variant_id = Variant.variant_id
{
@Analytics.dimension: true
@EndUserText.label: 'Experiment'
key Event.experiment_id,
@Analytics.dimension: true
@EndUserText.label: 'Variant'
key Event.variant_id,
@Analytics.dimension: true
@EndUserText.label: 'Event Type'
key Event.event_type,
@Analytics.dimension: true
@EndUserText.label: 'Event Date'
cast( Event.timestamp as abap.dats ) as event_date,
@EndUserText.label: 'Experiment Name'
Experiment.experiment_name,
@EndUserText.label: 'Variant Name'
Variant.variant_name,
@EndUserText.label: 'Is Control'
Variant.is_control,
@Analytics.measure: true
@Aggregation.default: #SUM
@EndUserText.label: 'Event Count'
cast( 1 as abap.int4 ) as event_count,
@Analytics.measure: true
@Aggregation.default: #COUNT_DISTINCT
@EndUserText.label: 'Unique Users'
Event.user_id,
@Analytics.measure: true
@Aggregation.default: #AVG
@EndUserText.label: 'Avg Duration'
Event.duration_ms
}

Dashboard Configuration

@Metadata.layer: #CORE
annotate view ZI_AB_ANALYTICS with
{
@UI.chart: [{
title: 'Conversion Rate per Variant',
chartType: #BAR,
dimensions: ['variant_id'],
measures: ['event_count'],
qualifier: 'ConversionChart'
}]
@UI.presentationVariant: [{
sortOrder: [{ by: 'event_date', direction: #DESC }],
visualizations: [{
type: #AS_CHART,
qualifier: 'ConversionChart'
}]
}]
experiment_id;
}

Ethical Considerations

A/B testing should be conducted responsibly:

Guidelines

Aspect            | Guideline
------------------|------------------------------------------------
Transparency      | Inform users about tests (e.g., in privacy policy)
No harm           | Variants must not disadvantage users
Data minimization | Only capture necessary data
Fairness          | Don't test critical features (e.g., prices)
Time limits       | Don't run tests indefinitely
Documentation     | Document hypothesis and results

Implementing an Ethics Check

CLASS zcl_ab_ethics_checker DEFINITION
PUBLIC
FINAL
CREATE PUBLIC.
PUBLIC SECTION.
TYPES:
BEGIN OF ty_ethics_result,
is_approved TYPE abap_bool,
warnings TYPE string_table,
requires_review TYPE abap_bool,
END OF ty_ethics_result,
" Concretely typed variant table so the check can read the variant fields
tt_variants TYPE STANDARD TABLE OF zab_variant WITH DEFAULT KEY.
METHODS check_experiment
IMPORTING
is_experiment TYPE zab_experiment
it_variants TYPE tt_variants
RETURNING
VALUE(rs_result) TYPE ty_ethics_result.
ENDCLASS.
CLASS zcl_ab_ethics_checker IMPLEMENTATION.
METHOD check_experiment.
rs_result-is_approved = abap_true.
" Rule 1: Maximum test duration
DATA(lv_duration) = is_experiment-end_date - is_experiment-start_date.
IF lv_duration > 90.
APPEND 'Test runs longer than 90 days' TO rs_result-warnings.
rs_result-requires_review = abap_true.
ENDIF.
" Rule 2: Sample size not too small
IF is_experiment-target_sample_size < 100.
APPEND 'Sample size below 100 - not statistically meaningful' TO rs_result-warnings.
ENDIF.
" Rule 3: Hypothesis must be documented
IF is_experiment-hypothesis IS INITIAL.
APPEND 'No hypothesis documented' TO rs_result-warnings.
rs_result-is_approved = abap_false.
ENDIF.
" Rule 4: Control variant must exist
DATA(lv_has_control) = abap_false.
LOOP AT it_variants INTO DATA(ls_variant).
IF ls_variant-is_control = abap_true.
lv_has_control = abap_true.
EXIT.
ENDIF.
ENDLOOP.
IF lv_has_control = abap_false.
APPEND 'No control group defined' TO rs_result-warnings.
rs_result-is_approved = abap_false.
ENDIF.
ENDMETHOD.
ENDCLASS.
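
A sketch of how the check could gate activation, assuming an experiment is only switched to RUNNING when it is approved:

DATA(lv_experiment_id) = CONV zab_experiment-experiment_id( 'CHECKOUT_BUTTON_TEST' ).
DATA lt_variants TYPE zcl_ab_ethics_checker=>tt_variants.

SELECT SINGLE * FROM zab_experiment
  WHERE experiment_id = @lv_experiment_id
  INTO @DATA(ls_experiment).
SELECT * FROM zab_variant
  WHERE experiment_id = @lv_experiment_id
  INTO TABLE @lt_variants.

DATA(ls_check) = NEW zcl_ab_ethics_checker( )->check_experiment(
  is_experiment = ls_experiment
  it_variants   = lt_variants ).
IF ls_check-is_approved = abap_false.
  " keep the experiment in DRAFT and show ls_check-warnings to the owner
ENDIF.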

Best Practices

Dos and Don’ts

Do                              | Don't
--------------------------------|-----------------------------------
Formulate a clear hypothesis    | "Let's see what happens"
One metric as the primary goal  | Dozens of metrics simultaneously
Wait for sufficient sample size | Stop early on positive trends
Analyze segments                | Only look at averages
Document results                | Keep learnings to yourself

Checklist Before Starting

  1. Hypothesis formulated and documented?
  2. Primary metric defined?
  3. Sample size calculated?
  4. Test duration planned?
  5. Ethics check completed?
  6. Tracking implemented and tested?
  7. Rollback plan available?

Summary

Component          | Purpose
-------------------|-----------------------------------
Experiment Service | Variant assignment and management
Tracking Service   | Event capture
Metrics Service    | Evaluation and statistics
Ethics Checker     | Responsible testing

A/B testing enables data-based decisions in Fiori development. With the right infrastructure of experiment management, tracking, and statistical evaluation, you can make informed product decisions.