Module B: Business Rules, Benchmarks & Calculation Logic¶
Great! I'm glad Module A is clearer now.
Let's move on to Module B: Business Rules, Benchmarks & Calculation Logic, again using our example question to illustrate its purpose and content.
Example User Question for Pihu (Reminder):
"Compare the overall liking of SLHB5 and SLHB8 from the Signature Latte Cold V2 study. Did SLHB8 meet the benchmark, and what were the main aroma notes for it among millennials?"
How Module B helps answer our example question: To answer "Did SLHB8 meet the benchmark?", Pihu needs to:
- Know what the relevant benchmark is for "overall liking" of a product like SLHB8.
- Understand the target score and the different performance bands (e.g., "Exceeds Expectations," "Meets Expectations").
- Potentially know how to calculate a derived metric if the benchmark was based on something like "Top-2-Box Liking" (though our example focuses on mean score).
Module B: Business Rules, Benchmarks & Calculation Logic¶
Goal: To store TagTaste's specific rules for evaluating product performance, defining how metrics are calculated, and interpreting results against standards.
1) Text-Based Structure with Actual Examples (Relevant to the User Question)¶
We'll focus on the "Standard Liking Benchmark" as it's directly relevant to our example.
Benchmark Rule Entry Form: Standard Liking Benchmark for Food/Beverage Products
- Benchmark ID: BM_LIKING_TAGTASTE_STD_001
- Benchmark Name: TagTaste Standard Liking Benchmark
- Status: Active
- Description: Defines TagTaste's standard performance bands for overall liking scores of food and beverage products, based on a 9-point hedonic scale.
- Applies To:
  - Attribute IDs (select from Module A): [X] ATTR_LIKING_OVERALL_001 (Overall Product Liking)
  - Product Categories (select multiple, optional): [X] Food General, [X] Beverage General (this can be refined if benchmarks vary significantly by broad category, e.g., a separate one for "Confectionery" vs. "Savory Snacks")
  - Collaboration Types (select multiple, optional): [X] CLT (Central Location Test), [X] HUT (Home Use Test)
- Benchmark Type: Threshold-Based
- Target Metric: Mean Score (9-point Hedonic) (this is the value Pihu will compare against the bands)
- Target Value Details:
  - Primary Target Value (TagTaste's key threshold): 6.80
  - Comparison Operator: >=
  - Target Value Interpretation: A mean score of 6.80 or higher generally signifies a product that "Exceeds Expectations" and is market-ready or performing well.
- Bands (TagTaste Standard Sensory Evaluation Scale "Measuring Scale"):
  - Band 1:
    - Band ID: BM_LIKING_TAGTASTE_STD_001_B1
    - Label: Pinnacle of Quality
    - Condition: score >= 7.80
    - Min Score (Inclusive): 7.80
    - Max Score (Inclusive): 9.00 (assuming 9 is the scale maximum)
    - Interpretation: Exceptional performance, significantly preferred.
    - Color Code (for UI): Dark Green (e.g., #006400)
  - Band 2:
    - Band ID: BM_LIKING_TAGTASTE_STD_001_B2
    - Label: Clearly Exceeds Expectations
    - Condition: score >= 7.30 AND score < 7.80
    - Min Score (Inclusive): 7.30
    - Max Score (Exclusive): 7.80
    - Interpretation: Strong performance, well above average liking.
    - Color Code: Medium Green (e.g., #228B22)
  - Band 3:
    - Band ID: BM_LIKING_TAGTASTE_STD_001_B3
    - Label: Exceeds Expectations
    - Condition: score >= 6.80 AND score < 7.30
    - Min Score (Inclusive): 6.80
    - Max Score (Exclusive): 7.30
    - Interpretation: Good performance, meets the primary target for high acceptance.
    - Color Code: Light Green (e.g., #32CD32)
  - Band 4:
    - Band ID: BM_LIKING_TAGTASTE_STD_001_B4
    - Label: Meets Expectations
    - Condition: score >= 6.40 AND score < 6.80
    - Min Score (Inclusive): 6.40
    - Max Score (Exclusive): 6.80
    - Interpretation: Acceptable performance, but may not be a strong differentiator.
    - Color Code: Yellow-Green (e.g., #ADFF2F)
  - Band 5:
    - Band ID: BM_LIKING_TAGTASTE_STD_001_B5
    - Label: Partially Meets Expectations
    - Condition: score >= 5.80 AND score < 6.40
    - Min Score (Inclusive): 5.80
    - Max Score (Exclusive): 6.40
    - Interpretation: Performance is borderline, may require improvements.
    - Color Code: Yellow (e.g., #FFFFE0)
  - Band 6:
    - Band ID: BM_LIKING_TAGTASTE_STD_001_B6
    - Label: NOT YET MARKET WORTHY
    - Condition: score < 5.80
    - Min Score (Inclusive): 0.00 (assuming 0 or 1 is the scale minimum)
    - Max Score (Exclusive): 5.80
    - Interpretation: Performance is below the desired level for market consideration, requires significant attention.
    - Color Code: Red (e.g., #FFB6C1)
- Source Reference: TagTaste Standard Sensory Evaluation Scale ("Measuring Scale")
- Notes for Pihu: When a user asks if a product "met the benchmark," Pihu should retrieve the product's mean overall liking score, compare it to these bands, and state which band it falls into along with the band's label and interpretation. The primary success threshold is >= 6.80.
- Created By: tagtaste_sensory_head
- Created At: 2023-01-01T12:00:00Z
- Last Modified By: pihu_kb_admin
- Last Modified At: 2023-11-10T10:30:00Z
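The "Notes for Pihu" logic above can be sketched in code. This is a minimal illustration, not Pihu's actual implementation: the band boundaries come from the form, while the function name and return shape are assumptions.

```python
# Sketch of the band-lookup logic described in "Notes for Pihu".
# Band boundaries mirror the Standard Liking Benchmark form above;
# the function name and dict shape are illustrative assumptions.

BANDS = [
    # (min_inclusive, max_exclusive or None for the top band, label, interpretation)
    (7.80, None, "Pinnacle of Quality", "Exceptional performance, significantly preferred."),
    (7.30, 7.80, "Clearly Exceeds Expectations", "Strong performance, well above average liking."),
    (6.80, 7.30, "Exceeds Expectations", "Good performance, meets the primary target."),
    (6.40, 6.80, "Meets Expectations", "Acceptable performance."),
    (5.80, 6.40, "Partially Meets Expectations", "Performance is borderline."),
    (0.00, 5.80, "NOT YET MARKET WORTHY", "Performance below desired level."),
]

PRIMARY_TARGET = 6.80  # TagTaste's key success threshold

def classify_liking_score(score: float) -> dict:
    """Return band label, interpretation, and benchmark verdict for a mean score."""
    for lo, hi, label, interpretation in BANDS:
        if score >= lo and (hi is None or score < hi):
            return {
                "label": label,
                "interpretation": interpretation,
                "meets_benchmark": score >= PRIMARY_TARGET,
            }
    raise ValueError(f"Score {score} is outside the expected 0-9 range")
```

With this, a mean score of 7.1 would land in "Exceeds Expectations" and meet the >= 6.80 threshold, while 6.5 would land in "Meets Expectations" and fall short of it.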
2) Actual Data Example in JSON (for Module B - focusing on the Benchmark Rule)¶
This JSON would contain the benchmark rule defined above. It might also contain other rules, like how to calculate a "Weighted Average Score" if that's a standard TagTaste metric, or rules for JAR scale interpretation.
{
"moduleName": "Business Rules, Benchmarks & Calculation Logic",
"version": "1.1",
"lastUpdated": "2023-11-15T11:00:00Z",
"rules": {
"benchmarks": [
{
"benchmarkId": "BM_LIKING_TAGTASTE_STD_001",
"benchmarkName": "TagTaste Standard Liking Benchmark",
"status": "Active",
"description": "Defines TagTaste's standard performance bands for overall liking scores (9-point hedonic scale).",
"appliesTo": {
"attributeIds": ["ATTR_LIKING_OVERALL_001"], // From Module A
"productCategories": ["Food General", "Beverage General"],
"collaborationTypes": ["CLT", "HUT"]
},
"benchmarkType": "Threshold-Based",
"targetMetric": "Mean Score (9-point Hedonic)",
"targetValueDetails": {
"primaryTargetValue": 6.80,
"comparisonOperator": ">=",
"targetValueInterpretation": "A mean score of 6.80 or higher generally signifies 'Exceeds Expectations'."
},
"bands": [
{
"bandId": "BM_LIKING_TAGTASTE_STD_001_B1",
"label": "Pinnacle of Quality",
"condition": "score >= 7.80",
"minScoreInclusive": 7.80,
"maxScoreInclusive": 9.00,
"interpretation": "Exceptional performance, significantly preferred.",
"colorCode": "#006400"
},
{
"bandId": "BM_LIKING_TAGTASTE_STD_001_B2",
"label": "Clearly Exceeds Expectations",
"condition": "score >= 7.30 AND score < 7.80",
"minScoreInclusive": 7.30,
"maxScoreExclusive": 7.80,
"interpretation": "Strong performance, well above average liking.",
"colorCode": "#228B22"
},
{
"bandId": "BM_LIKING_TAGTASTE_STD_001_B3",
"label": "Exceeds Expectations",
"condition": "score >= 6.80 AND score < 7.30",
"minScoreInclusive": 6.80,
"maxScoreExclusive": 7.30,
"interpretation": "Good performance, meets the primary target.",
"colorCode": "#32CD32"
},
{
"bandId": "BM_LIKING_TAGTASTE_STD_001_B4",
"label": "Meets Expectations",
"condition": "score >= 6.40 AND score < 6.80",
"minScoreInclusive": 6.40,
"maxScoreExclusive": 6.80,
"interpretation": "Acceptable performance.",
"colorCode": "#ADFF2F"
},
{
"bandId": "BM_LIKING_TAGTASTE_STD_001_B5",
"label": "Partially Meets Expectations",
"condition": "score >= 5.80 AND score < 6.40",
"minScoreInclusive": 5.80,
"maxScoreExclusive": 6.40,
"interpretation": "Performance is borderline.",
"colorCode": "#FFFFE0"
},
{
"bandId": "BM_LIKING_TAGTASTE_STD_001_B6",
"label": "NOT YET MARKET WORTHY",
"condition": "score < 5.80",
"minScoreInclusive": 0.00, // Or 1, depending on scale start
"maxScoreExclusive": 5.80,
"interpretation": "Performance below desired level.",
"colorCode": "#FFB6C1"
}
],
"sourceReference": "TagTaste Standard Sensory Evaluation Scale",
"notesForPihu": "Primary success threshold is >= 6.80. score, band label, and interpretation.",
"createdBy": "tagtaste_sensory_head",
"createdAt": "2023-01-01T12:00:00Z",
"lastModifiedBy": "pihu_kb_admin",
"lastModifiedAt": "2023-11-10T10:30:00Z"
}
// Potentially add other benchmarks if they exist, e.g., for specific attributes or product types.
],
"derivedMetrics": [
// Example: If "Weighted Average Score" is a standard calculation
{
"metricId": "DM_WEIGHTED_AVG_SCORE_001",
"metricName": "Weighted Average Sensory Score",
"status": "Active",
"description": "Calculates a weighted average score based on the liking scores of different sensory sections (Appearance, Aroma, Taste, etc.) and their predefined weights.",
"appliesToAttributeIds": ["ATTR_LIKING_OVERALL_001"], // This is what it *results* in, or is compared to
"calculationLogic": {
"type": "Formula",
"descriptionOrFormula": "Sum of (SectionLikingScore * SectionWeight) for all relevant sections.",
"inputParameters": [
"AppearanceLikingScore", "AppearanceWeight",
"AromaLikingScore", "AromaWeight",
"TasteLikingScore", "TasteWeight",
"MouthfeelLikingScore", "MouthfeelWeight",
"AromaticsToFlavorLikingScore", "AromaticsToFlavorWeight"
// Add other sections if applicable
],
"outputDescription": "A single weighted score, typically on a 9-point scale.",
"pythonFunctionName": "calculate_weighted_average_score" // For Pihu's Code Gen agent
},
"interpretationGuidelines": "Provides a holistic score considering the relative importance of different sensory aspects. Often compared to Overall Product Liking to check for alignment (OL vs WA).",
"notesForPihu": "Ensure weights sum to 100%. Weights may vary by product category (see weightingSchemes).",
"createdBy": "tagtaste_analyst_lead",
"createdAt": "2023-02-15T09:00:00Z",
"lastModifiedBy": "tagtaste_analyst_lead",
"lastModifiedAt": "2023-11-05T15:00:00Z"
}
],
"weightingSchemes": [
// Example: Weights for a "Hot Beverage/Coffee" category for the Weighted Average Score
{
"weightSetId": "WS_HOTBEV_COFFEE_001",
"weightSetName": "Hot Beverage/Coffee Section Weights for Weighted Average",
"status": "Active",
"description": "Standard weights for calculating weighted average for hot beverages like coffee.",
"contextOfUse": ["Weighted Average Score Calculation for Hot Beverages"],
"productCategory": "Hot Beverage/Coffee", // Link to product category
"items": [
{"itemId": "WS_HB_APP", "itemBeingWeightedDescription": "Appearance Liking", "appliesToAttributeId": "ATTR_LIKING_APPEARANCE_001", "weightValue": 0.15, "normalizationGroup": "HotBevOverall"},
{"itemId": "WS_HB_ARO", "itemBeingWeightedDescription": "Aroma Liking", "appliesToAttributeId": "ATTR_LIKING_AROMA_001", "weightValue": 0.225, "normalizationGroup": "HotBevOverall"},
{"itemId": "WS_HB_MOU", "itemBeingWeightedDescription": "Mouthfeel Liking", "appliesToAttributeId": "ATTR_LIKING_MOUTHFEEL_001", "weightValue": 0.25, "normalizationGroup": "HotBevOverall"},
{"itemId": "WS_HB_TAS", "itemBeingWeightedDescription": "Taste Liking", "appliesToAttributeId": "ATTR_LIKING_TASTE_001", "weightValue": 0.20, "normalizationGroup": "HotBevOverall"},
{"itemId": "WS_HB_ATF", "itemBeingWeightedDescription": "Aromatics To Flavor Liking", "appliesToAttributeId": "ATTR_LIKING_AROMATICS_FLAVOR_001", "weightValue": 0.175, "normalizationGroup": "HotBevOverall"}
],
"notesForPihu": "Use these weights when calculating Weighted Average for products in 'Hot Beverage/Coffee' category.",
"createdBy": "tagtaste_sensory_head",
"createdAt": "2023-03-01T10:00:00Z",
"lastModifiedBy": "pihu_kb_admin",
"lastModifiedAt": "2023-11-01T17:00:00Z"
}
]
// jarInterpretationRules would be structured similarly if needed for the example.
}
}
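The derived-metric entry above names a `calculate_weighted_average_score` function for Pihu's Code Gen agent. Here is a minimal sketch of what that function might look like, using the WS_HOTBEV_COFFEE_001 weights from the JSON; the dict-based signature and rounding behavior are assumptions, not the actual implementation.

```python
# Sketch of calculate_weighted_average_score, as named in the
# derived-metric entry. Weights mirror WS_HOTBEV_COFFEE_001;
# the input/output shapes are illustrative assumptions.

HOTBEV_COFFEE_WEIGHTS = {
    "Appearance": 0.15,
    "Aroma": 0.225,
    "Mouthfeel": 0.25,
    "Taste": 0.20,
    "AromaticsToFlavor": 0.175,
}

def calculate_weighted_average_score(section_scores: dict, weights: dict) -> float:
    """Sum of (SectionLikingScore * SectionWeight) over all weighted sections."""
    missing = set(weights) - set(section_scores)
    if missing:
        raise ValueError(f"Missing section scores: {sorted(missing)}")
    total_weight = sum(weights.values())
    if abs(total_weight - 1.0) > 1e-9:  # per notesForPihu: weights must sum to 100%
        raise ValueError(f"Weights must sum to 1.0, got {total_weight}")
    return round(sum(section_scores[s] * weights[s] for s in weights), 2)
```

For example, section liking scores of 7.1 (Appearance), 7.4 (Aroma), 6.9 (Mouthfeel), 7.2 (Taste), and 7.0 (Aromatics to Flavor) yield a weighted average of 7.12, which could then be compared against the liking benchmark bands (the OL vs WA alignment check mentioned above).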
3) JSON Structure (for Module B - Simplified Comments)¶
{
"moduleName": "Business Rules, Benchmarks & Calculation Logic",
"version": "1.0",
"lastUpdated": "YYYY-MM-DDTHH:MM:SSZ",
"rules": {
"benchmarks": [
{
"benchmarkId": "STRING_UNIQUE_ID",
"benchmarkName": "STRING",
"status": "ENUM_STRING", // "Active", "Deprecated"
"description": "TEXT_AREA_STRING",
"appliesTo": {
"attributeIds": ["STRING_UNIQUE_ID_ATTR_1"], // From Module A
"productCategories": ["STRING_CATEGORY_1"], // Optional
"collaborationTypes": ["STRING_COLLAB_TYPE_1"] // Optional
},
"benchmarkType": "ENUM_STRING", // "Threshold-Based", "Comparative"
"targetMetric": "STRING", // e.g., "Mean Score (9-point Hedonic)"
"targetValueDetails": { // For threshold-based
"primaryTargetValue": "NUMBER_OR_NULL",
"comparisonOperator": "ENUM_STRING_OR_NULL", // e.g., ">="
"targetValueInterpretation": "TEXT_AREA_STRING"
},
"bands": [
{
"bandId": "STRING_UNIQUE_ID_BAND",
"label": "STRING", // e.g., "Exceeds Expectations"
"condition": "STRING_LOGICAL_CONDITION", // e.g., "score >= 6.80 AND score < 7.30"
"minScoreInclusive": "NUMBER_OR_NULL",
"maxScoreExclusive": "NUMBER_OR_NULL", // or maxScoreInclusive
"interpretation": "TEXT_AREA_STRING",
"colorCode": "NULLABLE_STRING_HEX_OR_NAME"
}
],
"sourceReference": "NULLABLE_STRING",
"notesForPihu": "TEXT_AREA_STRING",
"createdBy": "STRING_USER_ID",
"createdAt": "YYYY-MM-DDTHH:MM:SSZ",
"lastModifiedBy": "STRING_USER_ID",
"lastModifiedAt": "YYYY-MM-DDTHH:MM:SSZ"
}
],
"derivedMetrics": [
{
"metricId": "STRING_UNIQUE_ID_METRIC",
"metricName": "STRING",
"status": "ENUM_STRING",
"description": "TEXT_AREA_STRING",
"appliesToAttributeIds": ["STRING_UNIQUE_ID_ATTR_1"], // Attribute(s) this metric relates to
"calculationLogic": {
"type": "ENUM_STRING", // "Formula", "Statistical Test"
"descriptionOrFormula": "TEXT_AREA_STRING",
"inputParameters": ["STRING_INPUT_PARAM_1"], // Describes data needed for calculation
"outputDescription": "STRING", // Describes the result
"pythonFunctionName": "NULLABLE_STRING" // If a specific Pihu function exists
},
"interpretationGuidelines": "TEXT_AREA_STRING",
"notesForPihu": "TEXT_AREA_STRING",
"createdBy": "STRING_USER_ID",
"createdAt": "YYYY-MM-DDTHH:MM:SSZ",
"lastModifiedBy": "STRING_USER_ID",
"lastModifiedAt": "YYYY-MM-DDTHH:MM:SSZ"
}
],
"weightingSchemes": [ // Defines how different attributes contribute to a composite score
{
"weightSetId": "STRING_UNIQUE_ID_WEIGHT",
"weightSetName": "STRING",
"status": "ENUM_STRING",
"description": "TEXT_AREA_STRING",
"contextOfUse": ["STRING_CONTEXT_1"], // e.g., "Overall Score Calculation"
"productCategory": "NULLABLE_STRING", // If weights are category-specific
"items": [
{
"itemId": "STRING_UNIQUE_ID_ITEM",
"itemBeingWeightedDescription": "STRING", // e.g., "Aroma Liking Score"
"appliesToAttributeId": "STRING_UNIQUE_ID_ATTR", // Attribute from Module A being weighted
"weightValue": "NUMBER_DECIMAL", // The weight (e.g., 0.25)
"normalizationGroup": "NULLABLE_STRING" // To ensure weights in a group sum to 1
}
],
"notesForPihu": "TEXT_AREA_STRING",
"createdBy": "STRING_USER_ID",
"createdAt": "YYYY-MM-DDTHH:MM:SSZ",
"lastModifiedBy": "STRING_USER_ID",
"lastModifiedAt": "YYYY-MM-DDTHH:MM:SSZ"
}
]
// jarInterpretationRules can be added similarly if needed
}
}
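Two invariants are implicit in this schema: a benchmark's bands should tile the scale without gaps or overlaps, and weights within a `normalizationGroup` should sum to 1 (see the `notesForPihu` fields above). A small validator the technical team might run when the module is updated could look like this; the field names follow the JSON structure, while the function names, assert-based error handling, and scale bounds are assumptions.

```python
# Hypothetical consistency checks for Module B entries.
# Field names follow the JSON structure above; everything else
# (function names, asserts, default scale bounds) is an assumption.
from collections import defaultdict

def validate_bands(bands: list, scale_min: float = 0.0, scale_max: float = 9.0) -> None:
    """Check that bands cover [scale_min, scale_max] with no gaps or overlaps."""
    ordered = sorted(bands, key=lambda b: b["minScoreInclusive"])
    assert ordered[0]["minScoreInclusive"] == scale_min, "bands must start at scale minimum"
    for lower, upper in zip(ordered, ordered[1:]):
        # each band's exclusive max must equal the next band's inclusive min
        assert lower.get("maxScoreExclusive") == upper["minScoreInclusive"], \
            f"gap or overlap after {lower['bandId']}"
    top = ordered[-1]
    assert top.get("maxScoreInclusive", top.get("maxScoreExclusive")) == scale_max, \
        "top band must end at scale maximum"

def validate_weight_groups(items: list, tol: float = 1e-9) -> None:
    """Check that weightValue entries in each normalizationGroup sum to 1."""
    groups = defaultdict(float)
    for item in items:
        groups[item.get("normalizationGroup")] += item["weightValue"]
    for group, total in groups.items():
        assert abs(total - 1.0) <= tol, f"group {group!r} sums to {total}, expected 1.0"
```

Running these against BM_LIKING_TAGTASTE_STD_001 and WS_HOTBEV_COFFEE_001 as defined above would pass, since the six bands abut exactly at 5.80/6.40/6.80/7.30/7.80 and the five hot-beverage weights sum to 1.0.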
Next Steps for Your Team (for Module B):
- Sensory/Analytical Team:
  - Review the Benchmark Rule Entry Form. Does it accurately capture how TagTaste defines and uses benchmarks?
  - Populate this form for the "Standard Liking Benchmark" using the exact bands and interpretations from your internal documentation (TagTaste Standard Sensory Evaluation Scale "Measuring Scale").
  - Identify any other standard benchmarks TagTaste uses (e.g., for specific attributes, or for product categories with different thresholds).
  - Define any standard "Derived Metrics" (like Top-2-Box, Weighted Averages), including their calculation logic.
  - Document any standard "Weighting Schemes" used for calculating composite scores (like the example weights for Appearance, Aroma, etc.).
- Technical Team:
  - Review the JSON Structure.
  - Consider how Pihu's "Analysis Planning Agent" or "Response Synthesis Agent" would use this module to:
    - Fetch the correct benchmark rules based on the product/attribute in the query.
    - Retrieve calculation logic for derived metrics.
    - Apply performance band labels and interpretations.