// Blog content. Each post has slug, title, excerpt, dek, category, readTime, date, body (array of blocks).
// Block types: { type: 'p', content }, { type: 'h2', content }, { type: 'pull', content }, { type: 'list', items },
//              { type: 'beforeAfter', rows: [{ before, after }] }

const POSTS = [
  {
    slug: "voice-is-the-entry-point",
    title: "Voice is the entry point. The value is everything after.",
    excerpt: "Most managed AI conversations start with a phone call. They shouldn't end there.",
    category: "Positioning",
    readTime: "6 min",
    date: "April 22, 2026",
    author: "NoralAI Team",
    feature: true,
    body: [
      { type: "p", content: "Ask a buyer what they want from managed AI and the answer almost always starts with the channel. We need someone to pick up the phone. Our front desk is drowning. We are losing leads after hours. Voice is the symptom they can see and the cost they can count. It is also the easiest part of the problem to solve." },
      { type: "p", content: "That is the inversion most operators miss. Eighteen months ago, voice quality was the wall. Today the wall has moved. The conversation itself is no longer the hard part. The hard part is what happens in the ninety seconds after the caller hangs up, and the ninety days after that." },
      { type: "pull", content: "Voice is not the problem. The problem is what does not happen after the call." },
      { type: "h2", content: "Voice crossed the line" },
      { type: "p", content: "The infrastructure layer matured in public. Deepgram closed a 130 million dollar Series C in early 2026 at a 1.3 billion dollar valuation, and the round included Twilio, ServiceNow Ventures, and SAP, which is what it looks like when the enterprise stack starts treating voice as a first-class citizen. ElevenLabs raised 180 million in January 2025 at a 3.3 billion dollar valuation, then crossed 11 billion within the year. Vapi, the developer platform layer, took 20 million from Bessemer in late 2024. Bland AI raised a 40 million dollar Series B in early 2025. Retell, founded in 2024, was reportedly running 40 million minutes of AI calls a month by 2026." },
      { type: "p", content: "Read those rounds together and a single message comes through. The model layer, the synthesis layer, the orchestration layer, the developer tooling, the QA tooling. All of it got funded, shipped, and benchmarked in the same eighteen-month window. Voice quality is now a baseline expectation, not a differentiator." },
      { type: "p", content: "Which means the conversation is the easy part. And the easy part is rarely where the money is." },
      { type: "h2", content: "Where the money actually lives" },
      { type: "p", content: "There is a 2011 study most operators have heard cited and almost none have read. James Oldroyd at MIT, working with InsideSales.com, looked at 15,000 leads across 100 companies. Firms that contacted a lead within five minutes were 100 times more likely to make contact and 21 times more likely to qualify the lead than firms that waited 30. Harvard Business Review ran the writeup. Drift later found the average B2B response time was 47 hours. Forty-seven." },
      { type: "p", content: "Pause on what that gap actually represents. The lead came in. Someone, somewhere, had budget and intent. And then a queue, a CRM field, a rep on lunch, a missed Slack ping, and a calendar that did not get checked added up to roughly two days of silence. By the time anyone reached out, the lead had moved on, gone cold, or already booked the competitor." },
      { type: "p", content: "A voice agent that picks up in two rings does not solve that problem. It just moves the bottleneck downstream. The lead got captured. The lead got greeted. And then, if nothing else fires, the lead sits in the same queue waiting for the same human to do the same follow-up that did not happen before." },
      { type: "pull", content: "Picking up faster is not the win. Closing the loop in real time is." },
      { type: "h2", content: "What buyers should actually be specifying" },
      { type: "p", content: "When the conversation about a managed AI deployment never leaves the call itself, how it sounds, how natural it feels, how it handles barge-in, the buyer is underspecifying the product. The agent will sound fine. They all sound fine now. That is what the funding rounds bought." },
      { type: "p", content: "The questions that matter are operational, not acoustic." },
      { type: "list", items: [
        "What systems does the agent write to within sixty seconds of the call ending? CRM, calendar, dispatch, billing, ticketing.",
        "What event in the conversation triggers what action in the stack? Specific phrases, not vibes.",
        "What does the agent escalate, to whom, with what context attached, and within what window?",
        "How does the system know a lead has stalled, and what does it do automatically when it does?",
        "Where does the orchestration logic live, who edits it, and how fast can it change when the business does?",
      ]},
      { type: "p", content: "Those are the questions that separate a chatbot with a microphone from infrastructure that runs a piece of an operation. The first is a feature. The second is a system." },
      { type: "h2", content: "The category is shifting under the buyers" },
      { type: "p", content: "Gartner placed AI agents at the peak of its 2025 Hype Cycle and predicted that 40 percent of enterprise applications will integrate task-specific AI agents by the end of 2026. They also predicted, in the same breath, that more than 40 percent of agentic AI projects will be canceled by the end of 2027 because of unclear business value or inadequate risk controls. Those two numbers are the same story told from two ends. The category is real. Most of the deployments will fail anyway. The ones that survive will be the ones built around outcomes instead of demos." },
      { type: "p", content: "A voice agent that books a call back is a demo. A system that picks up the call, captures the intent, fires a job to the CRM with a transcript, books the appointment on the right rep's calendar, sends the day-before reminder, escalates a stalled deal at fourteen days, and reports its own conversion data weekly is an operation. The first one is what most providers sell. The second is what survives the contract renewal." },
      { type: "h2", content: "The honest framing" },
      { type: "p", content: "Voice is the front door. The front door matters because it is how customers enter, and a broken front door loses revenue. But nobody buys a house for the door. They buy it for what is inside, and how it all connects." },
      { type: "p", content: "Managed AI works the same way. The conversation is the entry point. The execution is the product. Most of the value, most of the operational lift, most of the actual money, sits in the orchestration that fires after the call. Anyone selling voice as the whole offering is selling the easy part and leaving the hard part to the buyer." },
      { type: "p", content: "Pick the operator who treats the call as the beginning of the workflow, not the end of it. That is where the work, and the durable value, actually lives." },
    ],
  },
  {
    slug: "from-conversation-to-execution",
    title: "From conversation to execution",
    excerpt: "An anatomy of what an AI agent actually does after the call ends.",
    category: "Platform",
    readTime: "7 min",
    date: "April 14, 2026",
    author: "NoralAI Team",
    body: [
      { type: "p", content: "Most weeks, somebody asks me what an AI worker actually does that a phone tree doesn't. Here is what I tell them. The phone tree ends when the call ends. An AI worker keeps going." },
      { type: "p", content: "That difference sounds small. It is the entire product. Everything I care about as an operator, the lead that becomes a job, the job that becomes revenue, the revenue that becomes a renewal, lives downstream of a conversation. If the conversation is the only thing the system does, the system has not done very much." },
      { type: "h2", content: "A short, honest history of automating work" },
      { type: "p", content: "It helps to know how we got here, because the shape of the previous decade explains why this one feels different." },
      { type: "p", content: "The first wave was RPA. Blue Prism, founded in 2001, coined the term robotic process automation. Automation Anywhere followed in 2003. UiPath, founded in 2005 in Bucharest as DeskOver, shipped its first RPA product in 2013 and went on to define the category. The pitch was clean. Record what a human does in a software interface. Replay it. Save the headcount. For repetitive, screen-based work, it genuinely worked." },
      { type: "p", content: "Then came the maintenance bill. Forrester research has found that roughly 45 percent of companies running RPA experience bot breakage on a weekly basis or more often, and that maintenance can run a substantial share of the total program cost. The reason is structural. RPA bots are coupled to the screens they were trained on. A button moves. A field gets renamed. A vendor pushes a UI update overnight. The bot breaks, the queue backs up, and someone in IT spends Tuesday rebuilding what worked on Monday. That is the bot rot problem, and it is what most enterprise RPA programs spend the back half of their second year fighting." },
      { type: "p", content: "The second wave was workflow automation. Zapier launched out of Y Combinator in 2012, and Make, n8n, and the rest of the if-this-then-that category followed. The architecture moved from screen-scraping to APIs. That was a significant upgrade. APIs are stable in a way that pixels are not. But the logic was still rigid. A Zap fires when X happens and does Y. There is no judgment in the loop. If the trigger is wrong or the data is messy, the workflow fires anyway, and someone downstream has to clean it up." },
      { type: "p", content: "The third wave is what we are inside of now. Tool-using AI." },
      { type: "h2", content: "Why this wave is structurally different" },
      { type: "p", content: "Anthropic shipped tool use generally available in 2024 and added computer use in public beta on October 22, 2024 with the upgraded Claude 3.5 Sonnet. OpenAI launched Operator on January 23, 2025, an agent that controls a browser to complete real tasks. Cognition introduced Devin in March 2024 as an autonomous AI software engineer that plans, codes, debugs, and ships against real tickets. None of these are perfect. All of them are different in kind from anything RPA or Zapier ever did, because they can read context, choose between tools, recover from errors mid-task, and reason about whether they have actually accomplished the goal." },
      { type: "p", content: "That is the thing that changes the math. An RPA bot does what it was told. A workflow fires what it was wired to fire. An AI worker decides. Imperfectly, with supervision, with guardrails. But it decides." },
      { type: "pull", content: "RPA mimics keystrokes. Workflows fire triggers. AI workers make decisions and call the right tool to act on them." },
      { type: "h2", content: "What actually has to happen after a call" },
      { type: "p", content: "Consider a roofing call. A homeowner sees water on the ceiling, picks up the phone, and says they need someone out fast. From the moment they hang up, the work that has to happen is roughly this." },
      { type: "list", items: [
        "Capture: pull the structured data out of unstructured speech. Address, scope, urgency, homeowner status, decision-maker on the call.",
        "Trigger: write to the CRM as a qualified lead, attach the transcript, set the lead source, set the priority field.",
        "Assign: route to the rep who covers that territory, with the right calendar, with the right vehicle availability, and notify them in the channel they actually check.",
        "Schedule: hold a slot on the rep's calendar that respects existing routes and drive time, and confirm the booking back to the homeowner.",
        "Follow up: send the day-before confirmation. Send the day-of arrival window. Send the post-visit summary with the estimate attached.",
        "Advance: at fourteen days, if no signed contract, re-engage with the financing option the rep flagged on the call.",
      ]},
      { type: "p", content: "Each of those bullets is a different system. The CRM is Salesforce or HubSpot. The calendar is Google or Outlook. The dispatch is whatever the field-service vendor sold them. The notifications run through SMS or Slack or email. The financing offer lives in a third-party portal. Every one of these systems exposes APIs and webhooks. Salesforce and HubSpot have been documenting public APIs for over a decade. The plumbing exists. What did not exist, until very recently, was a single layer that could read the conversation, decide what to do, and call the right plumbing in the right order without a human stitching it together." },
      { type: "h2", content: "Where the system actually lives" },
      { type: "p", content: "When I describe the architecture to someone who has done enterprise software, they usually picture a giant orchestration layer with a thousand nodes. It is much less ornate than that. The conversation runs on a voice stack. The transcript and the structured outputs go to an agent runtime that holds the playbook. The playbook calls the integrations. The integrations write to the systems of record. The systems of record drive the next set of triggers. Everything is observable, every step is logged, every decision is auditable, and the operator who runs the deployment can see what happened on every call without asking anyone." },
      { type: "p", content: "The reason this design wins over RPA is the same reason APIs won over screen-scraping. It is coupled to the contract a system promises, not to the way the system happens to look on a Tuesday. The reason it wins over workflow automation is that the trigger is not a rigid event. It is a model-graded read of the conversation, with fallback paths when the model is unsure. That last part is what most operators underestimate. A real production system spends a meaningful share of its energy on the cases where it should not act, not just on the cases where it should." },
      { type: "h2", content: "The honest part" },
      { type: "p", content: "None of this is magic. The Gartner read on agentic AI is that more than 40 percent of agentic projects will be canceled by 2027 because of unclear value, runaway cost, or weak controls. That is a real risk and operators should price it in. The deployments that survive are the ones with a tight, observable scope. One workflow, one set of systems, one measurable outcome, and a human in the room who actually owns the result." },
      { type: "p", content: "The phone call lasted three minutes. The system runs for ninety days. That is the difference between a chatbot with a microphone and an AI worker that closes the loop. Voice is the entry point. Execution is the product. And the execution is what determines whether the conversation was worth having." },
    ],
  },
  {
    slug: "the-category-is-ai-operations",
    title: "The category isn't AI tools. It's AI operations.",
    excerpt: "On the structural difference between productivity software and operational infrastructure.",
    category: "Industry",
    readTime: "5 min",
    date: "April 03, 2026",
    author: "NoralAI Team",
    body: [
      { type: "p", content: "Two distinct categories of AI are forming, and most buyers are still using one word for both. That sloppiness is going to cost them, because the two categories solve different problems, sit in different budget lines, and produce different kinds of returns. Get the category wrong and the procurement decision is wrong before the demo even starts." },
      { type: "h2", content: "Two categories, not one" },
      { type: "p", content: "The first category is the copilot. It augments a person who is already doing the work. Microsoft 365 Copilot inside Word and Excel. GitHub Copilot inside the editor. Glean inside enterprise search. The user stays in the loop, the tool gets faster, the human gets out more output per hour. The pricing model is per seat. The success metric is productivity." },
      { type: "p", content: "The second category is the operator. It does not augment a worker. It runs the workflow. The user is not in the loop because the loop does not require one. Anthropic shipped computer use in public beta on October 22, 2024. OpenAI launched Operator on January 23, 2025. Cognition introduced Devin in March 2024 as an AI software engineer that plans and ships against real tickets. The pricing model trends toward outcomes, not seats. The success metric is throughput per dollar, or jobs completed, or tickets closed." },
      { type: "p", content: "These are not two flavors of the same product. They are two different products with different customers and different P&L lines." },
      { type: "pull", content: "Copilots compete for the software budget. Operators compete for the labor budget. Those budgets do not look anything like each other." },
      { type: "h2", content: "Service as software" },
      { type: "p", content: "Foundation Capital's Joanne Chen and Jaya Gupta named the shift better than anyone. They call it service as software. Their argument, in plain language. Almost every company has a budget for salaries. Fewer have a real budget for software. Software has historically competed against software. When an AI system can deliver an entire job task end to end, it stops being software and starts being categorized as a personnel cost. Foundation pegs the shift at a 4.6 trillion dollar opportunity. Salesforce, the canonical software juggernaut, generates roughly 35 billion a year. Global spend on sales and marketing salaries is north of 1.1 trillion. The math is not subtle." },
      { type: "p", content: "Andreessen Horowitz has been making the adjacent argument from the vertical SaaS side. Their framing is that AI is unlocking markets that were too small to target before because the labor budget plus the software budget combined creates an addressable market that pure software never could. Veterinary clinics, laundromats, chiropractors, home services. The work is unglamorous. The labor spend is enormous. The technology can finally meet the operation where it lives." },
      { type: "p", content: "Same thesis, two firms, two angles. Both correct." },
      { type: "h2", content: "Why now and not five years ago" },
      { type: "p", content: "Two trends had to collide. First, the model layer matured. Voice synthesis, transcription, reasoning, and tool use all crossed the line from prototype to production in roughly the same eighteen months. Deepgram closed a 130 million dollar Series C at a 1.3 billion dollar valuation in early 2026. ElevenLabs went from 3.3 billion in January 2025 to 11 billion within the year. The plumbing is funded, deployed, and benchmarked." },
      { type: "p", content: "Second, the labor side got tighter. The U.S. Bureau of Labor Statistics reported that healthcare and social assistance employment grew 2.9 percent year-over-year through March 2026, adding 680,500 jobs, and projected the sector to add roughly 2 million more by 2034. Construction wages have continued to climb against persistent skilled-labor shortages, with average hourly earnings in construction up 3.7 percent year-over-year in mid-2025. The Employment Cost Index showed compensation costs for private workers rising 3.4 percent in 2025. The aggregate picture is the same one operators feel in the field. The work has to get done, the people to do it are scarcer and more expensive, and the math has shifted in favor of deploying systems for the repetitive operational tier." },
      { type: "p", content: "That is not a software story. That is an operations story." },
      { type: "h2", content: "What this means for buyers" },
      { type: "p", content: "If the procurement question is which seat license to add for the team, the buyer is shopping copilots. Per-user pricing, productivity uplift, change management around adoption. Important work, but a different category." },
      { type: "p", content: "If the procurement question is whether to make the next operational hire or deploy a system to run that workload, the buyer is shopping operators. Outcome pricing, throughput metrics, integration depth, observable governance, and a vendor who is comfortable being measured on jobs completed rather than seats sold." },
      { type: "p", content: "Confusing the two is the most expensive mistake in this market right now. A copilot will not run an operation. An operator will not show up as line items on a productivity dashboard. They are not substitutes. They are different categories with different success criteria." },
      { type: "h2", content: "The honest version" },
      { type: "p", content: "The category is still forming. Gartner placed AI agents at the peak of inflated expectations on its 2025 Hype Cycle and predicted that more than 40 percent of agentic projects will be canceled by 2027 because of unclear value or weak controls. That is real, and any buyer who pretends otherwise is signing up to fail later. The deployments that survive will not be the ones with the most ambitious scope. They will be the ones with a single, measurable workflow, a clear owner, and an honest read on what the system can and cannot do." },
      { type: "p", content: "The buyers who get the category right early will spend less, deploy faster, and own a real operational moat by the time their competitors finish renaming their RPA programs. The ones who treat operators as just a fancier copilot will spend two years figuring out why the dashboard never moved." },
      { type: "p", content: "AI tools are useful. AI operations is the category that changes the business." },
    ],
  },
  {
    slug: "what-we-deploy-in-week-one",
    title: "What we deploy in week one",
    excerpt: "A look inside the first seven days of a typical NoralAI deployment.",
    category: "Behind the scenes",
    readTime: "4 min",
    date: "March 26, 2026",
    author: "NoralAI Team",
    body: [
      { type: "p", content: "I get asked how a managed AI deployment is even possible in weeks instead of quarters. The honest answer is that the question itself is loaded. It compares a managed AI deployment to the wrong baseline. So before I walk through what week one looks like, let me tell you what we are actually being compared to." },
      { type: "h2", content: "The baseline most buyers are anchored on" },
      { type: "p", content: "Panorama Consulting publishes an annual ERP report that is one of the better public reads on enterprise deployment timelines. Their 2025 report showed average ERP project duration dropping from 15.5 months to 9 months, attributing the compression mostly to SaaS adoption replacing on-premise installs. That is the new average. The old average is still very much alive in regulated industries and large multi-cloud rollouts." },
      { type: "p", content: "CRM is faster than ERP, but it is not fast. Industry write-ups put small Salesforce setups at 2 to 6 weeks, mid-size implementations at 2 to 4 months, and enterprise multi-cloud rollouts at 6 to 18 months. RPA is its own category, and the literature consistently shows that maintenance is a substantial share of total program cost, with Forrester research finding that roughly 45 percent of firms running RPA experience bot breakage on a weekly basis or more often." },
      { type: "p", content: "Against that backdrop, week one of a managed AI deployment is a different shape entirely. The reason is not that the engineering is magic. The reason is that the surface area is narrow on purpose." },
      { type: "h2", content: "The methodology, day by day" },
      { type: "p", content: "What follows is the cadence we run. It is not promised in a contract and the calendar shifts based on the customer's stack and approvals. But the structure holds." },
      { type: "h2", content: "Days 1 to 2: discovery" },
      { type: "p", content: "Map the workflows that matter. Which calls drive revenue. Which conversations get dropped today. Where the manual work piles up after the call ends. The goal is not to enumerate everything. The goal is to find the highest-impact loop to close first and write down what closing it actually means in operational terms. If the team cannot describe the win in a sentence, the scope is wrong." },
      { type: "h2", content: "Days 3 to 4: configuration" },
      { type: "p", content: "Define the agent's role. Write the conversation playbook. Confirm the integration points across CRM, calendar, and telephony. The agent is built against the systems the business already runs on. There is no rip-and-replace, because the rip-and-replace is the thing that turns a four-week project into a four-quarter project." },
      { type: "h2", content: "Days 5 to 6: integration and sandbox" },
      { type: "p", content: "Wire the agent into a sandboxed copy of the stack and run it against test traffic. Happy paths first. Then the calls the team finds hardest, the ones with bad audio, the ones with hostile callers, the ones where the right answer is to escalate to a human and stop talking. Watch what breaks. Fix it." },
      { type: "h2", content: "Day 7: soft launch" },
      { type: "p", content: "Move the agent onto a small slice of real traffic. After-hours overflow is the usual starting place because it is high signal and low risk. Monitor every interaction. Tune in real time." },
      { type: "pull", content: "The point of week one is not speed for its own sake. It is to put a real system in front of real traffic before the scope drifts." },
      { type: "h2", content: "Why this cadence works at all" },
      { type: "p", content: "Three reasons, in order." },
      { type: "p", content: "First, the integration story is fundamentally different than it was a decade ago. Salesforce, HubSpot, Google Workspace, Microsoft 365, and most modern field-service tools expose stable, well-documented APIs and webhooks. The plumbing exists. The work is to call it correctly, not to invent it." },
      { type: "p", content: "Second, the agent runtime is doing work that used to require custom code. Tool use went generally available across the major model providers in 2024. Anthropic shipped computer use in public beta on October 22, 2024. OpenAI launched Operator on January 23, 2025. The model can read context, choose between actions, and recover from errors mid-task. That collapses a meaningful share of the integration logic that an RPA program would have spent six months hand-rolling." },
      { type: "p", content: "Third, and most important, the scope is narrow on purpose. One workflow. One outcome. One observable metric. Gartner's own read is that more than 40 percent of agentic AI projects will be canceled by 2027 because of runaway scope and unclear value. The way you avoid being in that 40 percent is to refuse to be in it from the first sprint. Pick the loop. Close the loop. Then expand." },
      { type: "h2", content: "What this is not" },
      { type: "p", content: "It is not a promise that every deployment lands in seven days. Some take longer. The integrations are messier in some businesses. Procurement and security review can stretch the calendar in either direction. The methodology is the methodology, not a guarantee." },
      { type: "p", content: "What it is, is a deliberate inversion of the assumption that operational software has to take quarters to install. The reason it took quarters was that the work was big and undifferentiated. The work that wins now is small, observable, and tied to a single loop. Quarters were the symptom of vague scope. Weeks are the result of disciplined scope. The technology is part of the story. The discipline is the rest of it." },
    ],
  },
];

window.POSTS = POSTS;
