Building LLM Prompts From Enterprise Data in DataWeave: 2 Traps That Garbled My AI Output
Source: DEV Community
I connected a MuleSoft API to an LLM last quarter for a support ticket classifier. The API call was easy — the MuleSoft AI Connector handles that. Building the prompt payload from enterprise data? That's where I spent two hours debugging escape sequences.

## TL;DR

- DataWeave transforms ticket data into structured LLM prompt payloads (system + user roles).
- `joinBy "\n"` produced a literal backslash-n in my JSON, not actual newlines, so the LLM saw one continuous line.
- No token estimation → the prompt consumes most of the context window → truncated response.
- The pattern builds the system role, user role, model config, and structured response format in 12 lines.

## The Pattern: Enterprise Data to LLM Prompt

```
%dw 2.0
output application/json
var systemPrompt = "You are an enterprise support analyst."
var lines = payload.ticketHistory map (t) ->
    "- [$(upper(t.priority))] $(t.id): $(t.subject)"
var userPrompt = "Analyze tickets for $(payload.customer.name):\n"
    ++ (lines joinBy "\n")
---
{
    model: payload.model,
    max_toke
```
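The backslash-n trap generalizes beyond DataWeave: the difference is between a string containing a real newline character and one containing the two literal characters `\` and `n`. A minimal Python sketch (illustrative only, not the article's code) makes the distinction concrete:

```python
import json

# Prompt lines, similar in spirit to the ticket-history example.
lines = ["- [HIGH] T-1: Login fails", "- [LOW] T-2: Slow dashboard"]

# Real newlines: JSON serializes them as the two-character escape \n,
# but after parsing the LLM sees actual line breaks.
good = "\n".join(lines)

# Literal backslash-n: the string itself contains '\' followed by 'n',
# so even after JSON parsing the LLM sees one continuous line with
# visible "\n" sequences in the middle.
bad = "\\n".join(lines)

print(json.dumps(good))  # contains \n escapes that decode to newlines
print(json.dumps(bad))   # contains \\n escapes that decode to backslash-n
```

Inspecting the serialized JSON is misleading here: both variants show a backslash in the raw JSON text, which is why the bug is easy to miss until the model's output comes back garbled.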
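The second trap, missing token estimation, can be guarded against before the payload ever leaves the flow. Below is a minimal sketch, assuming the common rough heuristic of about 4 characters per token; the helper names `estimate_tokens` and `truncate_history` are hypothetical, not part of the article's pattern or any MuleSoft API:

```python
def estimate_tokens(text: str) -> int:
    # Hypothetical helper: crude estimate using the ~4 chars/token heuristic.
    return max(1, len(text) // 4)

def truncate_history(lines: list[str], budget_tokens: int) -> list[str]:
    # Keep the most recent ticket lines that fit in the token budget,
    # walking backwards so the newest entries survive truncation.
    kept, used = [], 0
    for line in reversed(lines):
        cost = estimate_tokens(line)
        if used + cost > budget_tokens:
            break
        kept.append(line)
        used += cost
    return list(reversed(kept))
```

For a real deployment you would swap the heuristic for the model's actual tokenizer, but even this rough cap prevents the prompt from swallowing the context window and truncating the response.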