Engineering in the Wild

Thoughts on Composable Context

I've been thinking about different approaches to context engines in AI systems.

Loosely, I've been sorting them into two categories: semantic context engines and analytical context engines.

Each has distinct strengths, but the most interesting problems emerge when you combine both approaches.

Let me make this concrete with an example: optimizing pineapple supply chains. :)

The Pineapple Problem

Imagine you're managing pineapple operations across multiple markets. Your semantic context knows everything about pineapples — varieties, growing conditions, harvest seasons, market preferences, regulatory requirements. It can tell you that Costa Rican pineapples are sweeter, that Asian markets prefer smaller fruits, and that certain varieties ship better.

But when a typhoon hits the Philippines and pineapple futures spike 15%, your semantic engine struggles. It knows what happened, but it can't quickly adapt to how this changes your optimal sourcing, pricing, and inventory strategies across dozens of markets.

This is where analytical context becomes essential.

Context Engine Categories

One way to slice it is that context engines fall into two primary categories:

| Aspect | Semantic Context Engines | Analytical Context Engines |
| --- | --- | --- |
| Core Function | Understanding meaning and relationships | Numerical optimization and pattern recognition |
| Data Representation | Knowledge graphs, ontologies, symbolic structures | Tensors, matrices, time series |
| Reasoning Pattern | Logical inference, rule-based deduction | Function approximation, gradient optimization |
| Strengths | Interpretability, domain knowledge, complex relationships | Adaptability, real-time optimization, anomaly detection |
| Typical Use Cases | Entity resolution, question answering, semantic search | Forecasting, control systems, dynamic pricing |
| Response to Change | Manual rule updates, knowledge base modifications | Automatic learning from data patterns |
| Latency Characteristics | Variable (depends on graph complexity) | Predictable (matrix operations) |

Semantic Context Engines excel at understanding meaning: entities, relationships, domain rules, and the logical inferences that connect them.

Analytical Context Engines excel at numerical reasoning: optimization, forecasting, and detecting anomalies in streams of numbers.

The Augmentation Pattern

The magic happens when you use analytical engines to augment semantic ones. Here's how this works in practice:

Semantic Foundation

Your semantic engine maintains the domain knowledge:

Entity: Golden Sweet Pineapple
- Variety: Premium
- Origin: Costa Rica
- Shelf life: 14 days
- Market preference: Asia (high), Europe (medium), US (low)
- Seasonal peak: March-May
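
To make the "knowledge graph" idea tangible, here's a minimal sketch of the same entity as subject-predicate-object triples. The entity ID and predicate names are my own illustration, not a fixed schema:

# A minimal, hypothetical triple representation of the entity above.
# Predicate names are illustrative, not a standard ontology.
triples = [
    ("golden_sweet_pineapple", "variety", "Premium"),
    ("golden_sweet_pineapple", "origin", "Costa Rica"),
    ("golden_sweet_pineapple", "shelf_life_days", 14),
    ("golden_sweet_pineapple", "market_preference:asia", "high"),
    ("golden_sweet_pineapple", "market_preference:europe", "medium"),
    ("golden_sweet_pineapple", "market_preference:us", "low"),
    ("golden_sweet_pineapple", "seasonal_peak", "March-May"),
]

# Query helper: all facts about an entity
def facts_about(entity_id):
    return [(p, o) for s, p, o in triples if s == entity_id]

print(facts_about("golden_sweet_pineapple"))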

Analytical Augmentation

Your analytical engine operates on numerical tensors extracted from this semantic knowledge:

State vector: [price_movement, inventory_levels, demand_forecast, weather_impact, transport_costs]

When market conditions shift, the analytical engine:

  1. Detects the anomaly in the numerical state space
  2. Computes optimal responses using reinforcement learning
  3. Feeds recommendations back to the semantic engine
  4. The semantic engine then translates these into actionable business decisions
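
As a rough sketch of step 1, assuming the state vector layout above and a short history of recent observations, anomaly detection can be as simple as a z-score check. The numbers and the threshold here are illustrative:

import numpy as np

# Hypothetical recent history of state vectors:
# [price_movement, inventory_levels, demand_forecast, weather_impact, transport_costs]
history = np.array([
    [0.01, 0.80, 1.00, 0.10, 0.30],
    [0.02, 0.78, 1.02, 0.12, 0.31],
    [0.00, 0.81, 0.99, 0.11, 0.30],
])

def anomaly_score(state: np.ndarray, history: np.ndarray) -> float:
    """Largest per-feature z-score of the new state against recent history."""
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-9  # avoid division by zero
    return float(np.max(np.abs((state - mean) / std)))

# A typhoon-driven state: futures spike, weather impact jumps
typhoon_state = np.array([0.15, 0.80, 1.00, 0.90, 0.45])
if anomaly_score(typhoon_state, history) > 3.0:  # threshold is illustrative
    print("anomaly detected: trigger re-optimization")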

Concrete Example: Typhoon Response

When the typhoon hits:

The semantic engine understands the implications: Philippine supply is disrupted, futures are up 15%, and alternative origins such as Costa Rica become relatively more attractive.

The analytical engine optimizes the response: it recomputes sourcing mixes, pricing adjustments, and inventory allocations across the affected markets.

The semantic engine provides the context — what these numbers mean in the real world. The analytical engine provides the optimization — how to respond numerically to changing conditions.
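
As a minimal sketch of that hand-off, here's a hypothetical translation step. The action names mirror the analytical output used later in this post; the wording rules are my own illustration:

def to_business_decision(entity_name: str, actions: dict) -> str:
    """Translate numeric actions into a human-readable recommendation."""
    lines = [f"Recommendation for {entity_name}:"]
    if actions["pricing_adjustment"] > 0:
        lines.append(f"- Raise prices by {actions['pricing_adjustment']:.0%}")
    if actions["inventory_allocation"] > 0.5:
        lines.append("- Shift inventory toward higher-demand markets")
    lines.append(f"- Hedge {actions['hedging_ratio']:.0%} of futures exposure")
    return "\n".join(lines)

print(to_business_decision(
    "Golden Sweet Pineapple",
    {"inventory_allocation": 0.7, "pricing_adjustment": 0.08, "hedging_ratio": 0.4},
))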

Architecture for Augmentation

The key insight is that these engines should be loosely coupled but tightly integrated:

Semantic Layer: owns the knowledge graph (entity attributes, relationships, domain rules).

Analytical Layer: owns the numerical state (feature vectors, learned models, optimization routines).

Bridge Functions: translate between the two, extracting numerical features from semantic context and mapping optimized actions back into domain terms.

So What?

The strawman claim I'm making here is: "Most AI systems I see are either too semantic (can't adapt to numerical changes) or too analytical (can't explain their reasoning). But real-world problems require both semantic understanding and analytical intelligence."

In the pineapple example, you need to understand what it means that "Golden Sweet varieties have higher profit margins in Japan" (semantic) AND be able to dynamically optimize your sourcing mix when commodity prices shift (analytical).

Implementation Thoughts

Realizing that context engines should be composable creates an exciting roadmap for development. Instead of building monolithic systems, you can build specialized semantic models that excel at their core domain, pair them with a general-purpose analytical layer, and integrate the pieces through well-defined interfaces.

Here's how this looks in practice:

Semantic Engine Interface

from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class Relationship:
    relation: str
    target_id: str

class SemanticContextEngine:
    def get_entity_attributes(self, entity_id: str) -> Dict[str, Any]:
        """Return semantic attributes for an entity (stubbed values here)"""
        return {
            "variety": "Golden Sweet",
            "origin": "Costa Rica",
            "shelf_life_days": 14,
            "market_preferences": {"asia": 0.8, "europe": 0.6, "us": 0.3}
        }

    def get_relationships(self, entity_id: str) -> List[Relationship]:
        """Return semantic relationships (stubbed values here)"""
        return [
            Relationship("sourced_from", "costa_rica_farm_001"),
            Relationship("competes_with", "thai_pineapple_variety_x"),
            Relationship("preferred_by", "asian_premium_market")
        ]

Analytical Engine Interface

import numpy as np
from typing import Dict

class ActorCriticModel:
    """Stand-in for a real RL policy model; assumed here for illustration."""
    def __init__(self, state_dim: int):
        self.state_dim = state_dim

    def predict(self, state: np.ndarray) -> np.ndarray:
        # Stub: a trained model would map state to action here
        return np.zeros(3)

    def get_confidence(self, state: np.ndarray) -> float:
        # Stub confidence estimate, used by the bridge below
        return 0.5

class AnalyticalContextEngine:
    def __init__(self, state_dim: int):
        self.state_dim = state_dim
        self.model = ActorCriticModel(state_dim)

    def extract_numerical_features(self, semantic_context: Dict) -> np.ndarray:
        """Convert semantic context to a numerical state vector"""
        features = [semantic_context["shelf_life_days"]]
        features.extend(semantic_context["market_preferences"].values())
        # Add market signals, pricing data, etc.
        return np.array(features, dtype=float)

    def optimize_decision(self, state_vector: np.ndarray) -> Dict[str, float]:
        """Return optimal actions based on the numerical state"""
        action = self.model.predict(state_vector)
        return {
            "inventory_allocation": float(action[0]),
            "pricing_adjustment": float(action[1]),
            "hedging_ratio": float(action[2])
        }

Bridge Functions

class ContextBridge:
    def __init__(self, semantic_engine: SemanticContextEngine,
                 analytical_engine: AnalyticalContextEngine):
        self.semantic = semantic_engine
        self.analytical = analytical_engine

    def get_optimized_recommendation(self, entity_id: str) -> Dict:
        # Get semantic context
        semantic_context = self.semantic.get_entity_attributes(entity_id)

        # Convert to numerical features
        state_vector = self.analytical.extract_numerical_features(semantic_context)

        # Get analytical optimization
        analytical_decision = self.analytical.optimize_decision(state_vector)

        # Combine into an actionable recommendation
        return {
            "entity": entity_id,
            "semantic_context": semantic_context,
            "recommended_actions": analytical_decision,
            "confidence": self.analytical.model.get_confidence(state_vector)
        }
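
Wiring the three together is then a few lines. Here state_dim=4 matches the stub feature extractor above (shelf life plus three market-preference scores):

# Compose the sketches above end to end
semantic_engine = SemanticContextEngine()
analytical_engine = AnalyticalContextEngine(state_dim=4)
bridge = ContextBridge(semantic_engine, analytical_engine)

recommendation = bridge.get_optimized_recommendation("golden_sweet_pineapple")
print(recommendation["recommended_actions"])
# With the stub model this prints neutral (zero) actions; a trained
# model would produce real allocation, pricing, and hedging values.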

Typhoon Response Example

def handle_market_disruption(
    disruption_event: Dict,
    semantic_engine: SemanticContextEngine,
    analytical_engine: AnalyticalContextEngine,
    bridge: ContextBridge,
    threshold: float,
):
    # Semantic engine identifies affected entities
    # (query_affected_by, get_current_state, detect_anomaly, and
    # execute_recommendation are assumed extensions of the interfaces above)
    affected_entities = semantic_engine.query_affected_by(disruption_event)

    # Analytical engine detects anomalies and computes responses
    for entity in affected_entities:
        current_state = analytical_engine.get_current_state(entity.id)
        anomaly_score = analytical_engine.detect_anomaly(current_state)

        if anomaly_score > threshold:
            # Get the optimized response
            recommendation = bridge.get_optimized_recommendation(entity.id)

            # Execute through semantic understanding
            semantic_engine.execute_recommendation(recommendation)

Your semantic engine doesn't need to do numerical optimization — it just needs to expose the right features to the analytical engine. Your analytical engine doesn't need to understand domain semantics — it just needs to optimize over the numerical state space.

Yes, more upfront thought has to be invested in the abstractions and boundaries, but the result is a more powerful and maintainable system than a monolithic approach.

Reflections

Building AI systems that work in practice often comes down to finding the right abstractions. The semantic vs. analytical categorization is a useful fiction that helps clarify what each component should be responsible for, and how they should interact.

The pineapple example might seem domain-specific, but the same compositional pattern applies broadly — financial markets, supply chain management, dynamic pricing, resource allocation. In each case, you need both semantic understanding of the domain and analytical optimization of numerical objectives.

The future of context engines is about finding elegant patterns to compose them for our soon-to-be AI overlords.