Congratulations on the launch! Is it possible to replay the tests against another URL? My use case is that I have a nodejs backend that I want to rewrite in python. I wonder if I could use your tool to record the API requests to the current server and use them to replay against my rewritten server to check if the responses are the same.
Another useful thing would be if I could create the tests from saved requests exported from my browser's network tab. In this case your tool would work regardless of the backend language.
Thanks! Good question. Tusk Drift isn't quite designed for these use cases.
Currently, Drift is language-specific: you'd need the SDK installed in your backend while recording tests. This is because Drift captures not just the HTTP request/response pairs but also all underlying dependency calls (DB queries, Redis operations, etc.) so that it can mock them during replay.
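To make that concrete, here's a rough sketch of the kind of data a recording might carry. The shape and field names are hypothetical (not Drift's actual trace schema); the point is that the dependency calls are stored alongside the inbound request/response so they can be served back as mocks during replay.

```ts
// Hypothetical shape of a recorded trace; field names are illustrative,
// not Tusk Drift's actual schema.
interface RecordedCall {
  library: "pg" | "ioredis" | "jsonwebtoken" | "http"; // which instrumentation captured it
  input: unknown;  // e.g. SQL text + params, Redis command, JWT payload
  output: unknown; // the result observed during recording
}

interface RecordedTrace {
  request: { method: string; path: string; headers: Record<string, string>; body?: unknown };
  response: { status: number; body: unknown };
  dependencyCalls: RecordedCall[]; // replayed as mocks instead of hitting real services
}
```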
A use case we do support is refactors within the same language. You'd record traces in your current implementation, refactor your code, then replay those traces to catch regressions.
For cross-language rewrites or browser-exported requests, you might want to look at tools that focus purely on HTTP-level recording/replay like Postman Collections. Hope this helps!
How do you keep replayed tests trustworthy over time as dependencies and schemas evolve? (i.e. without turning into brittle snapshot tests)
Also, how do you normalize non-determinism (like time/IDs etc.), expire/refresh recordings, and classify diffs as "intentional change" vs "regression"?
Good questions. I'll respond one by one:
1. With our Cloud offering, Tusk Drift detects schema changes, then automatically re-records traces from new live traffic to replace the stale traces in the test suite. If using Drift purely locally though, you'd need to manually re-record traces for affected endpoints by hitting them in record mode to capture the updated behavior.
2. Our CLI tool includes built-in dynamic field rules that handle common non-deterministic values (standard UUID, timestamp, and date formats) during response comparison. You can also configure custom matching rules in your `.tusk/config.yaml` for application-specific non-deterministic data; a sketch of the general technique follows this list.
3. Our classification workflow correlates deviations with your actual code changes in the PR/MR (including context from your PR/MR title and body). Classification is "fine-tuned" over time for each service based on past feedback on test results.
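For item 2, the underlying technique (shown here independently of Drift's actual config keys or rule names) is to mask known-dynamic fields in both the recorded and replayed responses before comparing them. A minimal sketch:

```ts
// Minimal sketch of dynamic-field normalization before response comparison.
// The patterns here are illustrative, not Tusk Drift's built-in rules.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
const ISO_DATE_RE = /^\d{4}-\d{2}-\d{2}(T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})?)?$/;

function normalize(value: unknown): unknown {
  if (typeof value === "string") {
    if (UUID_RE.test(value)) return "<uuid>";
    if (ISO_DATE_RE.test(value)) return "<timestamp>";
    return value;
  }
  if (Array.isArray(value)) return value.map(normalize);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(
        ([k, v]) => [k, normalize(v)] as [string, unknown]
      )
    );
  }
  return value;
}

// Two responses "match" if they are equal after dynamic fields are masked.
function responsesMatch(recorded: unknown, replayed: unknown): boolean {
  return JSON.stringify(normalize(recorded)) === JSON.stringify(normalize(replayed));
}
```

Custom rules would extend this idea by targeting specific JSON paths or application-specific patterns.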
Cool work, thanks. A bit like https://github.com/kevin1024/vcrpy in Python, if you weren't aware, OP.
I enjoy vcrpy and use it a lot, but it doesn't seem to be that similar.
vcrpy is closer to an automock: you write tests that hit external services, and vcrpy records those calls and replays them on subsequent runs. You still write the tests yourself.
Here you don't write tests at all; you just use the app, and the tests are created automatically.
Similar ideas, but at a different layer.
Thanks for sharing this. :)
Cool. Definitely a pain point worth attacking. Bookmarked, plan to explore when time allows.
Sounds good, Chris. Would love to hear your thoughts once you've played around with it.
How do you handle expiring data, like JWTs?
We instrument JWT libraries directly (jsonwebtoken, jwks-rsa). Both `jwt.sign()` and `jwt.verify()` are captured during recording, and during replay you get back the originally recorded results. So if a token was valid during recording, it stays valid during replay, even if it would be expired "now". The test runs in the temporal context of when it was recorded.
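A hand-rolled illustration of that record/replay idea (this is not the SDK's actual instrumentation; `instrumentedVerify` and the in-memory map are made up for the sketch):

```ts
import jwt from "jsonwebtoken";

type Mode = "record" | "replay";

// Keyed by token so a replayed verify returns whatever it returned at record time.
const recordedVerifications = new Map<string, unknown>();

function instrumentedVerify(mode: Mode, token: string, secret: string): unknown {
  if (mode === "replay") {
    // Return the recorded result; no clock check runs, so a token that has since
    // expired still verifies exactly as it did when the trace was captured.
    if (recordedVerifications.has(token)) return recordedVerifications.get(token);
    throw new Error("No recorded verification for this token");
  }
  const result = jwt.verify(token, secret); // real verification at record time
  recordedVerifications.set(token, result);
  return result;
}
```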
What does this do that I can't do with mitmproxy?
Fair shout. Our instrumentations (https://github.com/Use-Tusk/drift-node-sdk?tab=readme-ov-fil...) hook directly into pg, mysql2, ioredis, firestore, etc., at the library level.
We capture the actual DB queries, Redis cache hits, and JWT generation, not just the HTTP calls (which is all you'd see with mitmproxy). That lets us replay the full request chain without needing a live database or cache, so each test runs idempotently.
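As a rough picture of what hooking in at the library level means, here's a sketch that monkey-patches `pg`'s `Client.prototype.query` to capture real results while recording and serve them back during replay. This shows the general technique only, not drift-node-sdk's actual implementation:

```ts
import { Client, QueryResult } from "pg";

const recordedQueries = new Map<string, QueryResult>();
const realQuery = Client.prototype.query;

function patchPg(mode: "record" | "replay"): void {
  (Client.prototype as any).query = async function (
    this: Client,
    sql: string,
    params?: unknown[]
  ): Promise<QueryResult> {
    const key = JSON.stringify([sql, params]);
    if (mode === "replay") {
      const recorded = recordedQueries.get(key);
      if (!recorded) throw new Error(`No recorded result for query: ${sql}`);
      return recorded; // served from the trace; no live database required
    }
    const result = await (realQuery as any).call(this, sql, params);
    recordedQueries.set(key, result); // capture the real result while recording
    return result;
  };
}
```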
You would need to add your own validation (determining deviations) into your mitm proxy. This is a testing framework that seems to want to streamline multiple parts of API testing. It's not reinventing the wheel, but it doesn't claim to either.
Looks like a nice tool, will check it out later when I get a chance.
Also, yes, appreciate you calling this out. The deviation classification after replay plus automated RCA (root cause analysis) for unintended deviations is another differentiator. Let me know if you have feedback when you get time to explore.