The Moderation API is required for all merchants running AI image or video
generation products on Creem. Every user-supplied prompt that will be routed
to an image or video generation model must be screened through this endpoint
before generation happens.
Why moderation matters
Running generative AI products carries real risk — for your users, for your business, and for the payments infrastructure underneath it. The Moderation API exists to protect your platform from unforeseen consequences: policy-violating content, high-risk chargebacks, processor escalations, and reputational damage that can follow a single bad generation. We are proud to offer a safe and inclusive environment for a wide range of AI products, and we want you to thrive on Creem. Correctly integrating the Moderation API is not just a compliance checkbox — it is directly for your own benefit. Merchants who route generative AI traffic without upstream screening risk:

- Violation of Creem’s Terms of Service
- Permanent removal from the platform
- Frozen funds held against chargeback and risk exposure
- Loss of processor relationships that are difficult to recover
How it works
The Moderation API evaluates a text prompt against Creem’s content policies and returns a decision. Your application calls the Moderation API before sending the prompt to your image or video model, and uses the decision to route traffic:

Your backend calls the Moderation API
Before anything else, your server sends the prompt to POST /v1/moderation/prompt.

The endpoint
Screen a prompt
POST /v1/moderation/prompt — full request and response schema.

Request
| Field | Type | Required | Description |
|---|---|---|---|
| prompt | string | Yes | The text prompt to evaluate against content policies. |
| external_id | string | No | Optional identifier to associate with this request (e.g. your internal user or generation ID). Useful for auditing. |
Response
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier for the moderation result. |
| object | string | Always moderation_result. |
| prompt | string | The prompt that was screened. |
| external_id | string | Echoes the external_id you supplied, if any. |
| decision | enum | One of allow, flag, or deny. |
| usage | object | Usage information for the call. Contains units (number). |
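Putting the two schemas together, a round trip might look like the following. The field values are illustrative only; the shape follows the tables above.

```
// Request body
{ "prompt": "a watercolor painting of a lighthouse", "external_id": "gen_123" }

// Response body
{
  "id": "modr_abc123",
  "object": "moderation_result",
  "prompt": "a watercolor painting of a lighthouse",
  "external_id": "gen_123",
  "decision": "allow",
  "usage": { "units": 1 }
}
```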
Decision values
| Decision | Meaning | Recommended action |
|---|---|---|
| allow | The prompt passed screening. | Forward to your model and generate normally. |
| flag | The prompt is not explicitly denied, but it is closely monitored by Creem. | We recommend blocking it as well. |
| deny | The prompt violates Creem’s content policies. | Do not forward to your model. Surface an error to the user. |
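The table above reduces to a single rule: only an explicit allow generates. A minimal sketch (the type and function names are ours, not part of the API):

```typescript
type ModerationDecision = "allow" | "flag" | "deny";

// Block everything except an explicit "allow". "flag" and "deny" are both
// blocked, and any unknown future decision value is blocked as well, so the
// integration fails closed if new values are introduced.
function shouldBlock(decision: ModerationDecision | string): boolean {
  return decision !== "allow";
}
```

Checking `decision !== "allow"` rather than `decision === "deny"` is what makes the note below about future fields and values safe to rely on.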
The endpoint is currently marked experimental in the OpenAPI spec. The
decision values above are stable, but additional fields may be added in the
future. Design your integration to ignore unknown fields rather than fail on
them.
Integration guide
1. Add moderation to your generation handler
The rule is simple: no prompt reaches your model without a decision from the Moderation API first. Put the call at the very top of your generation handler, before any queueing, billing, or model invocation.

2. Choose an environment
Moderation is available in both environments:

| Environment | Base URL | API key prefix |
|---|---|---|
| Sandbox | https://test-api.creem.io | creem_test_xxxxx |
| Production | https://api.creem.io | creem_xxxxx |
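Since the key prefix identifies the environment, the base URL can be derived from the key you hold. A small sketch (the function name is ours):

```typescript
// Pick the Moderation API base URL from the key in use.
// Sandbox keys start with "creem_test_", production keys with "creem_".
function moderationBaseUrl(apiKey: string): string {
  return apiKey.startsWith("creem_test_")
    ? "https://test-api.creem.io"
    : "https://api.creem.io";
}
```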
Use the sandbox to safely test prompts that return deny and flag before going live.
3. Treat flag as a block
A flag decision means the prompt was not explicitly denied, but Creem is
closely monitoring it. We recommend treating flag exactly like deny —
block the request and surface an error to the user. This keeps your
integration simple and keeps your account on the safe side of the line.
4. Fail closed, not open
If the Moderation API call fails (network error, timeout, unexpected 5xx), do not fall back to generating anyway. Treat failures as a temporary block and return an error to the user. Failing open defeats the entire purpose of the control and will be treated as a policy violation.

5. Screen the prompt, not the output
Moderation is designed for pre-generation prompt screening. The correct place to call it is before your model runs, on the raw user input. Do not skip screening because you plan to inspect the output afterwards — by the time the output exists, you have already spent compute on disallowed content and the risk event has already occurred.

End-to-end example
A minimal Next.js API route that gates an image generation model behind the Moderation API:

app/api/generate/route.ts
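A sketch of that route follows. The `x-api-key` header, the `CREEM_API_URL`/`CREEM_API_KEY` environment variables, and the `generateImage` helper are assumptions for illustration — substitute your own auth setup and model client, and check the API reference for the exact request format.

```typescript
// app/api/generate/route.ts — sketch only; adapt names to your stack.
const CREEM_API_URL = process.env.CREEM_API_URL ?? "https://test-api.creem.io";
const CREEM_API_KEY = process.env.CREEM_API_KEY ?? "";

type ModerationResult = { id: string; decision: "allow" | "flag" | "deny" };

async function moderatePrompt(prompt: string): Promise<ModerationResult> {
  const res = await fetch(`${CREEM_API_URL}/v1/moderation/prompt`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": CREEM_API_KEY, // assumed auth header; verify in the API reference
    },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`moderation call failed: ${res.status}`);
  return res.json();
}

export async function POST(req: Request): Promise<Response> {
  const { prompt } = await req.json();

  let result: ModerationResult;
  try {
    result = await moderatePrompt(prompt);
  } catch {
    // Fail closed: if moderation is unreachable, do not generate.
    return Response.json(
      { error: "Moderation unavailable, please try again later." },
      { status: 503 },
    );
  }

  // Treat flag exactly like deny: only an explicit allow generates.
  if (result.decision !== "allow") {
    return Response.json({ error: "This prompt is not allowed." }, { status: 422 });
  }

  const image = await generateImage(prompt);
  return Response.json({ image });
}

// Placeholder so the sketch is self-contained; replace with your model client.
async function generateImage(prompt: string): Promise<string> {
  return `generated:${prompt}`;
}
```

Note that moderation runs before the model call, that a failed moderation request returns 503 rather than generating, and that flag and deny take the same branch.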
Checklist before going live
Every generation path calls the Moderation API
Audit every code path that can reach your image or video model. There
should be no way to generate without a preceding moderation call.
`deny` always blocks generation
Confirm with a test prompt that a deny decision returns an error to the user and never reaches your model.

`flag` also blocks generation
Confirm a flag decision is treated the same as deny and never reaches your model.

Pricing
The Moderation API is currently free to use while it is in its experimental phase. You don’t need to worry about cost today — focus on integrating it correctly. When the product is officially released, the most likely launch price will be $30 USD per 100,000 units. Creem does not aim to make a profit margin on this product. We built the Moderation API because we want our merchants to succeed and because we are committed to a safe environment for everyone on the platform — pricing it is purely about covering the cost of running it.

When official pricing rolls out, we will not retroactively charge for
usage that occurred during the experimental period. Anything you screen
today is on us. You can count on Creem’s values of honest and fair pricing
— no surprise bills, no retroactive invoices.
Related
Screen a prompt — API Reference
Full request and response schema for
POST /v1/moderation/prompt.

Account Reviews
How Creem reviews merchants and what keeps your account in good standing.