
How to get AI to write tests automatically

◷ 5 min read 3/3/2026


Article -> plan in AI

Paste this article URL into any AI and get an implementation plan for your project.

Read this article: https://vibecode.morecil.ru/en/kak-pisat-kod-s-ii/kak-pisat-testi-cherez-ii/
Work in my current project context. Create an implementation plan for this stack:
1) what to change
2) which files to edit
3) risks and typical mistakes
4) how to verify everything works
If there are options, provide "quick" and "production-ready" variants.
How to use
  1. Copy this prompt and send it to your AI chat.
  2. Attach your project or open the repository folder in the AI tool.
  3. Ask for file-level changes, risks, and a quick verification checklist.

Without tests, one small prompt can break the whole project a week from now. With tests, you calmly refactor, add features, and ship to production with 99.9% confidence.

I use this system in every project (Telegram bots, Next.js applications, autonomous agents). AI doesn’t just “write a couple of asserts,” it creates a complete test structure, runs them, fixes errors, and reports coverage.

Why conventional prompts give weak tests

Regular request: Write tests for this function

The result: 2–3 happy-path tests, no mocks, no edge cases, no error checks.

The right approach:

  1. First you ask for the test architecture.
  2. Then you get the AI to write unit + integration + snapshot tests.
  3. Use meta-prompts that incorporate the best practices of 2026.
  4. You tell it to run the tests and fix failures (Cursor and Windsurf can do this themselves).

How it works technically (in simple words)

AI sees:

  • code
  • your requirements (through the prompt)
  • project context (through Custom Instructions or project in Cursor)

It analyzes:

  • what the function does
  • what its input/output data is
  • which external services it calls (Claude API, Supabase, Telegram)
  • possible errors

Then generates:

  • a .test.ts file next to the source file
  • mocks for external services
  • tests with 80–100% coverage
  • a run command
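
A typical result, assuming a Vitest + TypeScript project (all names here are illustrative):

```
src/
  services/subscriptionService.ts          # source
  __tests__/subscriptionService.test.ts    # generated tests next to the source
  __mocks__/supabase.ts                    # mocks for external APIs
package.json                               # "test": "vitest run --coverage"
```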

7 Rules for Generating Tests with AI

Rule 1. TDD approach: tests first

Prompt template (most powerful):

prompt.txt
Work strictly on TDD:
1. Write ALL tests for this function/class.
2. Tests should cover:
   - happy path
   - edge cases (empty strings, null, maximum values)
   - errors and exceptions
   - integration with external services (with mocks)

Use Vitest + @testing-library + vi.fn() for mocks.

Structure:
- __tests__/filename.test.ts
- __mocks__/ for external APIs

First, show only the tests (without implementing the function). Then I'll say "implement".

Rule 2. Full coverage + external services

Prompt:

prompt.txt
Create tests with 100% branch coverage.

Required:
- Mock all external calls (Claude API, Supabase, Telegram Bot API, n8n, etc.)
- Use vi.mock()/jest.mock()
- Add tests for retry logic and timeouts
- Check that errors are logged correctly
- Add snapshot tests for UI components (if Next.js)

Keep the mocks in a separate __mocks__/ folder.

Rule 3. Integration tests (not just units)

prompt.txt
Now create a separate file integration.test.ts

Requirements:
- Spin up a real database (Supabase test instance or Docker Postgres)
- Test the full flow: from Telegram message to response
- Use testcontainers or supabase-test-utils
- Add beforeAll/afterAll to clean up data

Rule 4. Automatic start and fix

prompt.txt
1. Write the tests.
2. Run `npm run test` (or `vitest`).
3. If tests fail, automatically fix the code or the tests until everything passes.
4. At the end, show the coverage report.
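
The coverage report can also be enforced from config, so a run fails when thresholds drop. A minimal sketch, assuming Vitest 1.x (the `thresholds` API); the numbers are examples:

```ts
// vitest.config.ts — minimal sketch; adjust thresholds to your project
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',               // built-in V8 coverage
      reporter: ['text', 'html'],   // console summary + browsable report
      thresholds: { lines: 85, branches: 85 },
    },
  },
});
```

With this in place, `npm run test -- --coverage` prints the report and exits non-zero if coverage falls below the thresholds.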

Rule 5. Generating tests for the entire project at once

prompt.txt
Go through the project and create the missing .test.ts files for each feature and component.

Priority:
1. handlers/
2. services/
3. repositories/
4. utils/

First, show a list of files that will be created.

Rule 6. CI/CD-ready tests

prompt.txt
Prepare the project for GitHub Actions:

- Add a workflow: .github/workflows/test.yml
- Tests must run in CI (no real Telegram token)
- Add a coverage threshold of 85%
- Fail the build if coverage drops below it
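
A minimal workflow matching this prompt might look like the following sketch (action versions and the Node version are assumptions; adjust to your project):

```yaml
# .github/workflows/test.yml
name: test
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # No real Telegram token in CI — tests must rely on mocks
      - run: npm run test -- --coverage
```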

Rule 7. Test documentation (so you understand what is being tested)

prompt.txt
For each test, add a detailed describe/it block with a description:
- what is being tested
- under what conditions
- the expected outcome
- why the test matters

Universal meta-prompt

Copy it and use it every time:

prompt.txt

The goal: create a complete test suite for the provided code.

Apply in order:
1. TDD (tests first)
2. 100% branch coverage
3. Full mocking of external services
4. Unit + integration tests
5. Snapshot tests where needed
6. Automatic run + error fixing
7. GitHub Actions workflow
8. Coverage report + recommendations for improvement

Upon completion:
- Show the __tests__ folder structure
- Give the run command
- Show an example coverage report
- Add // TODO: comments for complex cases

Real example: from function to full tests in 3 minutes

Take a typical feature from a Telegram bot:

ts
// services/subscriptionService.ts
export async function activateSubscription(userId: string, plan: string) { ... }

What does AI do on a meta-prompt:

  1. Creates __tests__/services/subscriptionService.test.ts
  2. Creates mocks for Supabase and payments
  3. Writes 12 tests (happy path, expired plan, payment failed, rate limit, etc.)
  4. Runs vitest and fixes 2 minor errors
  5. Reports 98% coverage

Checklist before considering the project ready

  • Every function has a .test.ts file
  • Coverage ≥ 85%
  • All external APIs are mocked
  • Integration tests for critical paths
  • GitHub Actions launches tests on every PR
  • Tests run locally in <10 seconds
  • There are snapshot tests for UI