Scenario-based Tests
Scenario-based testing with simulated users, tool-call mocks, and LLM-as-judge evaluation. Define user personas, goals, and knowledge, and Agentest generates realistic multi-turn conversations.
Install Agentest:

```shell
npm install @agentesting/agentest --save-dev
```

Create a config file:
```ts
// agentest.config.ts
import { defineConfig } from 'agentest'

export default defineConfig({
  agent: {
    name: 'my-agent',
    endpoint: 'http://localhost:3000/api/chat',
  },
})
```

Write a scenario:
```ts
// tests/booking.sim.ts
import { scenario, sequence } from 'agentest'

scenario('user books a morning slot', {
  profile: 'Busy professional who prefers mornings.',
  goal: 'Book a haircut for next Tuesday morning.',
  mocks: {
    tools: {
      check_availability: (args) => ({
        available: true,
        slots: ['09:00', '09:45', '10:30'],
      }),
      create_booking: sequence([
        { success: true, bookingId: 'BK-001' },
      ]),
    },
  },
  assertions: {
    toolCalls: {
      matchMode: 'contains',
      expected: [
        { name: 'check_availability', argMatchMode: 'ignore' },
        { name: 'create_booking', argMatchMode: 'ignore' },
      ],
    },
  },
})
```

Run:
```shell
# If your agent runs on localhost, allow private endpoints:
AGENTEST_ALLOW_PRIVATE_ENDPOINTS=1 npx agentest run
```

Testing agents is not like testing regular APIs. Traditional API tests send a request and assert on the response. Agent tests need to handle multi-turn conversations where the agent decides which tools to call, in what order, with what arguments — and the "correct" output is subjective.
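The `sequence` mock used for `create_booking` above hands back one canned response per tool call. A minimal self-contained sketch of that idea (an illustration only, not Agentest's actual implementation; the repeat-last-response behavior is an assumption):

```typescript
// Sketch of a sequence() mock: each call to the returned function
// yields the next queued response; once the queue is exhausted,
// the last response repeats (assumed behavior, for illustration).
function sequence<T>(responses: T[]): () => T {
  let i = 0
  return () => {
    const r = responses[Math.min(i, responses.length - 1)]
    i++
    return r
  }
}

const createBooking = sequence([
  { success: true, bookingId: 'BK-001' },
  { success: false, bookingId: null as string | null },
])

console.log(createBooking()) // first queued response
console.log(createBooking()) // second queued response
console.log(createBooking()) // queue exhausted: last response repeats
```

Sequencing mocks this way lets a scenario exercise retry paths, e.g. a booking that fails once and succeeds on the second attempt.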
Agentest solves this by combining simulated users (defined by a persona, goal, and knowledge), deterministic tool-call mocks, and LLM-as-judge evaluation of the resulting conversation.
Agentest complements eval platforms and observability tools — it doesn't replace them. Use Agentest to run your agent through test scenarios in CI. Use LangSmith/Langfuse to observe your agent in production.
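The `matchMode: 'contains'` assertion in the scenario above checks that the expected tool calls appear in the transcript in the expected relative order, with other calls allowed in between. A self-contained sketch of that matching logic (an illustration of the idea, not Agentest's implementation):

```typescript
interface ToolCall {
  name: string
  args?: Record<string, unknown>
}

// 'contains' matching: every expected call must appear among the
// actual calls in the same relative order; extra calls are ignored.
function containsInOrder(actual: ToolCall[], expected: ToolCall[]): boolean {
  let i = 0
  for (const call of actual) {
    if (i < expected.length && call.name === expected[i].name) i++
  }
  return i === expected.length
}

const transcript: ToolCall[] = [
  { name: 'get_user_profile' }, // extra call: allowed in 'contains' mode
  { name: 'check_availability' },
  { name: 'create_booking' },
]

containsInOrder(transcript, [
  { name: 'check_availability' },
  { name: 'create_booking' },
]) // → true
```

A stricter exact-match mode would instead require `actual` and `expected` to be the same length and compare them position by position.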