When you're working with APIs or automating web-related tasks, sooner or later you'll need to send data instead of just fetching it. That's where a POST request in Python comes in. It's the basic move for things like logging into a service, submitting a web form, or sending JSON to an API endpoint.
Using the requests library keeps things clean and human-friendly. No browser automation, no Selenium gymnastics, no pretending to click buttons. You just send a POST request in Python, wait for the response, and continue on. It's readable, dependable, and more or less the default way most developers handle HTTP in Python these days.
In this guide, we'll go step-by-step through how to use Python requests POST properly, including different payload types (form data, JSON, etc.), how to work with headers, and when a Python requests session is useful. We'll also look at where tools like ScrapingBee can help if the website decides it doesn't feel like cooperating with normal HTTP requests.

Quick answer (TL;DR): Send a POST request in Python now
To send a POST request in Python, you typically use requests.post. The main difference is what kind of data you're sending:
- Form data → use data=.... This sends application/x-www-form-urlencoded (similar to a regular web form).
- JSON data → use json=.... Requests sets Content-Type: application/json for you and handles encoding.
- File uploads → use files=... with an open file handle. This sends multipart/form-data.
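Here's a minimal side-by-side sketch of the three styles, posting to httpbingo.org (the echo service used throughout this guide); the CSV file name is just a placeholder:

import requests

# Form data → application/x-www-form-urlencoded
r = requests.post("https://httpbingo.org/post", data={"q": "python"})
print(r.json()["form"])

# JSON → dict serialized for you, Content-Type: application/json set automatically
r = requests.post("https://httpbingo.org/post", json={"q": "python"})
print(r.json()["json"])

# File upload → multipart/form-data ("report.csv" is a placeholder path)
with open("report.csv", "rb") as f:
    r = requests.post("https://httpbingo.org/post", files={"file": f})
print(list(r.json().get("files", {}).keys()))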
If you're making several requests to the same site (logging in and doing actions), use a Python requests session. It keeps cookies and headers, so you don't have to manually pass them around. This is how you do login once and reuse the session across multiple requests.
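A rough sketch of that pattern (the login URL and field names below are placeholders, not a real service):

import requests

s = requests.Session()

# Log in once; any cookies the server sets are stored on the session
s.post("https://example.com/login", data={"username": "alice", "password": "s3cr3t"}, timeout=10)

# Follow-up requests reuse those cookies (and the open connection) automatically
r = s.post("https://example.com/api/items", json={"name": "new item"}, timeout=10)
print(r.status_code)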
When a site blocks typical scripts or needs proxy rotation, you can point your Python POST request through ScrapingBee. You still send a POST request normally, but the request hits ScrapingBee's endpoint. ScrapingBee forwards it to the target site, handling proxies, CAPTCHAs, retries, and optional JS rendering.
ScrapingBee POST Example
import requests

resp = requests.post(
    "https://app.scrapingbee.com/api/v1",
    params={
        "api_key": "SCRAPINGBEE_API_KEY",  # your ScrapingBee key (store in env var in real use)
        "url": "https://httpbingo.org/post"  # the site you want to POST to
    },
    json={"hello": "world"}  # this is your POST body
)
print(resp.json())
This is a straightforward Python requests POST example: the payload goes in json= (or data= if you're sending form data), and ScrapingBee deals with the hard parts behind the curtain.
For a bigger picture on scraping workflows, see Python Web Scraping 101.
Building a JSON POST request with Requests
Before we start sending data all over the place, it helps to be clear about what GET vs POST actually means:
- GET is for retrieving data. It should not change anything on the server. Think of it like walking into a bar and asking to see the menu. You're just looking.
- POST is for sending data to be processed. This usually creates or updates something. That's the moment you tell the bartender: "Yeah, I'll have a beer and fries." Now work happens, something changes.
So when you're logging in, submitting a form, creating a user, or sending JSON to an API, you're making a Python POST request.
If the data you want to send is JSON, Requests makes it very straightforward:
requests.post(url, json=payload)
Using the json= parameter tells Requests to convert your Python dict into JSON, set the Content-Type: application/json header automatically, and handle the encoding details behind the scenes.
This is why Python Requests POST JSON has become the default pattern: it's clear, predictable, and avoids the boilerplate of manually building headers and calling json.dumps() yourself.
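You can verify that behavior yourself by inspecting the prepared request that was actually sent; a quick sketch against the httpbingo echo service:

import requests

payload = {"user": "alice", "active": True}
r = requests.post("https://httpbingo.org/post", json=payload)

# Requests set the header and serialized the dict for us
print(r.request.headers["Content-Type"])  # application/json
print(r.request.body)                     # something like b'{"user": "alice", "active": true}'
print(r.json()["json"])                   # {'user': 'alice', 'active': True}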
Step 0: Set things up
You'll need Python 3 installed. Just make sure it's available:
# Make sure Python 3 is available
python3 --version
# or
python --version
We'll use uv to manage the project and dependencies. It's quick and avoids the usual virtualenv overhead. Create a fresh directory and install requests:
# New project folder
uv init python-requests-post-demo
cd python-requests-post-demo
# Add deps
uv add requests
Quick sanity check that everything runs:
uv run python main.py
We'll also be using ScrapingBee to route some of our HTTP calls later. When dealing with real websites (especially ones that don't want automated traffic), ScrapingBee handles the annoying parts:
- Rotating proxies
- Geo-targeting by country
- Automatic retries and some CAPTCHA handling help
- Optional JavaScript rendering
- Session persistence, custom headers, cookies, etc.
You can grab a free ScrapingBee account at app.scrapingbee.com/account/register. The free tier gives 1,000 credits, which is more than enough to experiment with a few Python POST request examples.
Setup done, so we're good to move forward!
Step 1: Set the request method to POST
To make a POST request in Python, we use requests.post(). It behaves just like requests.get(); the difference is simply intent: POST is used when you're sending something to the server. We'll cover how to attach data in the next steps; for now, let's just focus on the call itself.
A minimal Python POST request example:
# main.py
import requests
def main():
    resp = requests.post("https://httpbingo.org/post")
    print(resp.text[:300])

# From now on I'll omit this part for brevity:
if __name__ == "__main__":
    main()
This sends a POST request without any data. Some endpoints work this way: you trigger some action by simply calling an endpoint.
If the site you're interacting with rate-limits requests, blocks direct traffic, or needs proxy rotation, you can run the same request through ScrapingBee. The pattern is still Python send POST request, only the request goes to ScrapingBee and you tell it which site to forward to:
import requests
def main():
    resp = requests.post(
        "https://app.scrapingbee.com/api/v1",
        params={
            # It's recommended to store API keys safely
            "api_key": "SCRAPINGBEE_API_KEY",
            "url": "https://httpbingo.org/post"
        }
    )
    print(resp.text[:300])
What's happening here:
- We are still using requests.post, the same Python request POST flow.
- The actual target URL goes inside the url parameter.
- The ScrapingBee API key is passed in params.
- No request body yet; that comes next (JSON, form data, headers, etc.).
ScrapingBee just sits in the middle, handling proxies, retries, and anti-bot checks, while your code stays simple and readable.
Step 2: Set the POST data
Most of the time, a POST request includes some data like login fields, form inputs, or other information the server needs to process. With Requests, you send standard form data using the data= parameter:
import requests
# I'll omit the main() function wrapper from now on for brevity.
# You can place this code at the top level of main.py,
# or inside the main() function within that file:
payload = {
"username": "alice",
"password": "swordfish"
}
resp = requests.post("https://httpbingo.org/post", data=payload)
print(resp.text[:300])
This sends the payload as application/x-www-form-urlencoded, which is the same format a browser uses when you submit a form.
If you want to route the same request through ScrapingBee (for example, when the target site is picky about who it accepts requests from), the structure stays the same. ScrapingBee just forwards your request and handles things like proxy rotation and retries:
import requests
payload = {
"username": "alice",
"password": "swordfish"
}
resp = requests.post(
"https://app.scrapingbee.com/api/v1",
params={
"api_key": "SCRAPINGBEE_API_KEY",
"url": "https://httpbingo.org/post"
},
data=payload
)
print(resp.text[:300])
The key idea: whether you're using a plain Python POST request or sending it through ScrapingBee, the shape of your code doesn't really change.
Step 3: Set the POST headers
Sometimes you'll need to include custom headers in your POST request: things like User-Agent, auth tokens, or content-specific settings. In Requests, headers are simply a dictionary passed to headers=:
import requests
headers = {
"User-Agent": "MyApp/1.0"
}
resp = requests.post(
"https://httpbingo.org/post",
headers=headers,
json={"hello": "world"}
)
print(resp.text[:300])
This is still a regular Python requests POST example, nothing too special. The headers go along with the request, the JSON body gets encoded properly, and the server receives everything it needs.
If you're routing this through ScrapingBee, you can still send custom headers but you need to explicitly allow forwarding. ScrapingBee only forwards headers if:
- forward_headers=true is set in params
- Header names are prefixed with Spb- (ScrapingBee strips the prefix before sending them to the target site)
Here's how that looks:
import requests
headers = {
"Spb-User-Agent": "MyApp/1.0" # will be forwarded as User-Agent
}
resp = requests.post(
"https://app.scrapingbee.com/api/v1",
params={
"api_key": "SCRAPINGBEE_API_KEY",
"url": "https://httpbingo.org/post",
"forward_headers": "true"
},
headers=headers,
json={"hello": "world"}
)
print(resp.text[:300])
What's going on:
- forward_headers=true tells ScrapingBee to forward headers to the target site.
- Spb-User-Agent becomes User-Agent once forwarded.
- We're still sending JSON the same way as any Python requests POST JSON pattern.
Step 4: POST JSON data
Most modern APIs expect JSON instead of traditional form fields. With Requests, the easiest way to send JSON is to use the json= parameter. Just pass it a Python dict and Requests will:
- Convert it to JSON
- Add Content-Type: application/json
- Handle the encoding for you
import requests
resp = requests.post(
"https://httpbingo.org/post",
json={"key": "value"}
)
print(resp.json())
No json.dumps() needed, no manual headers. This is why Python requests POST JSON is such a common approach: it keeps things clean.
If you want to send the same JSON through ScrapingBee, the logic stays the same. The request still uses json=, you just change the endpoint and tell ScrapingBee where to forward the request:
import requests
resp = requests.post(
"https://app.scrapingbee.com/api/v1",
params={
"api_key": "SCRAPINGBEE_API_KEY",
"url": "https://httpbingo.org/post"
},
json={"key": "value"}
)
print(resp.json())
The json= parameter handles the body just like before.
Learn more about using proxies with Python and Requests in our tutorial.
Step 5: Sending form data with Python requests
Not every endpoint speaks JSON. Some expect classic HTML form data, the same format your browser uses when you submit a login form. In Requests, you send that using the data= parameter:
import requests
payload = {"username": "alice", "password": "s3cr3t"}
r = requests.post("https://httpbingo.org/post", data=payload)
print(r.status_code)
print(r.request.headers.get("Content-Type")) # application/x-www-form-urlencoded
print(r.json()["form"]) # {'username': 'alice', 'password': 's3cr3t'}
This is the typical POST request Python form flow: data= tells Requests to encode the fields in the standard form format servers expect.
If you're logging in and then performing additional actions afterward, it's better to use a session so cookies and state are preserved:
import requests
s = requests.Session()
form = {"username": "alice", "password": "s3cr3t"}
r = s.post("https://example.com/login", data=form, timeout=10)
r.raise_for_status()
Same idea if you want to route the form submission through ScrapingBee. The request looks almost identical:
import requests
form = {"username": "alice", "password": "s3cr3t"}
resp = requests.post(
"https://app.scrapingbee.com/api/v1",
params={
"api_key": "SCRAPINGBEE_API_KEY",
"url": "https://httpbingo.org/post"
},
data=form
)
print(resp.text[:300])
This is still the same Python requests POST form data pattern.
Step 6: Uploading files with POST
Sometimes you need to send real files (images, PDFs, CSVs, logs) along with your POST request. In HTTP, this is handled using multipart/form-data. Using Requests, you send files via the files= parameter, and Requests handles all the multipart encoding details for you.
Uploading a single file
Let's make this practical: we'll generate a tiny PNG image with Pillow, then upload it with a normal POST.
Install Pillow:
uv add pillow
Generate an image and upload it:
# main.py
from pathlib import Path

from PIL import Image, ImageDraw, ImageFont
import requests

def make_demo_image(path: Path) -> None:
    w, h = 480, 240
    img = Image.new("RGB", (w, h), (32, 32, 48))
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    msg = "Hello from Pillow"
    # banner
    draw.rectangle([(0, 0), (w, 48)], fill=(70, 130, 180))
    draw.text((12, 12), "Demo Upload", fill=(255, 255, 255), font=font)
    x0, y0, x1, y1 = draw.textbbox((0, 0), msg, font=font)
    tw, th = (x1 - x0), (y1 - y0)
    draw.text(((w - tw) / 2, (h - th) / 2), msg, fill=(230, 230, 230), font=font)
    img.save(path, format="PNG")

def upload_file(url: str, path: Path) -> requests.Response:
    with path.open("rb") as f:
        files = {"file": (path.name, f, "image/png")}
        r = requests.post(url, files=files, timeout=15)
        r.raise_for_status()
        return r

def main():
    out = Path("demo_upload.png")
    make_demo_image(out)
    # Plain upload to httpbin (echoes back what you sent)
    resp = upload_file("https://httpbingo.org/post", out)
    # Show what the server saw
    data = resp.json()
    print("Status:", resp.status_code)
    print("Server received file keys:", list(data.get("files", {}).keys()))
    print("Content-Type sent:", resp.request.headers.get("Content-Type"))

if __name__ == "__main__":
    main()
Requests takes care of:
- Setting the correct multipart boundaries
- Handling file streaming
- Formatting the request so the server sees the file properly
No need to manually set headers; sending files= is all it takes. This is the standard Python requests POST file pattern.
File and extra fields
Sometimes you're not just uploading a file, you also want to send extra form fields along with it (title, tags, visibility flags, etc.). Good news: Requests lets you mix data= and files= in the same POST.
We'll reuse the make_demo_image() function from the previous section. Just call it to generate a file, then attach both form fields and the file in one go:
import requests
from pathlib import Path
# Create (or re-create) our demo image
img_path = Path("demo_upload.png")
make_demo_image(img_path)
data = {"album": "summer-2025", "public": "true"}
with img_path.open("rb") as f:
    files = {"file": (img_path.name, f, "image/png")}
    resp = requests.post(
        # try httpbingo (a modern httpbin) to avoid flaky mirrors
        "https://httpbingo.org/post",
        data=data,
        files=files,
        timeout=15,
    )

# sanity checks before parsing JSON
print("status:", resp.status_code)
print("content-type:", resp.headers.get("Content-Type"))

ctype = resp.headers.get("Content-Type", "")
if "application/json" not in ctype:
    # show a snippet so you can see what you actually got
    print("non-JSON response preview:\n", resp.text[:500])
else:
    payload = resp.json()
    print("form:", payload.get("form"))
    print("files:", payload.get("files"))
What's happening here:
- data= holds normal form fields (key/value strings).
- files= holds one or more uploaded files.
- Requests automatically creates a multipart/form-data POST body that contains both sections in the right format.
This is a standard and common Python requests POST form data and file pattern. Just drop in whatever additional fields the endpoint expects and keep the structure the same.
Uploading files through ScrapingBee
Same idea as before. You already have make_demo_image() from the previous snippet, so we'll reuse it. The only difference here is that the POST is sent to ScrapingBee, and ScrapingBee forwards it to the real destination. Your data= and files= usage stays exactly the same.
import requests
from pathlib import Path
# We assume make_demo_image(img_path) already exists.
img_path = Path("demo_upload.png")
make_demo_image(img_path)
SB_ENDPOINT = "https://app.scrapingbee.com/api/v1"
params = {
"api_key": "SCRAPINGBEE_API_KEY", # store securely in real use
"url": "https://httpbingo.org/post", # ScrapingBee will forward to this
}
data = {"album": "summer-2025", "public": "true"}
with img_path.open("rb") as f:
    files = {"file": (img_path.name, f, "image/png")}
    resp = requests.post(
        SB_ENDPOINT,
        params=params,
        data=data,
        files=files,
        timeout=30,
    )

print("status:", resp.status_code)
print("content-type:", resp.headers.get("Content-Type"))

# Try JSON parsing if the target returned JSON
if resp.headers.get("Content-Type", "").startswith("application/json"):
    print(resp.json())
else:
    # Just preview the response body
    print(resp.text[:300])
Key points:
- Your Python code doesn't change: data= for form fields, files= for uploads.
- ScrapingBee handles proxy rotation, retries, and optional JS rendering.
- Using a stable echo service like https://httpbingo.org/post makes testing predictable.
This keeps the workflow simple: you write a normal Python requests POST file upload, ScrapingBee just delivers it reliably on the other end.
Reading JSON responses
Most APIs respond in JSON. And since we usually care about the structured data rather than the raw text blob, we want a clean way to convert that JSON into normal Python objects.
When you send a POST request, you can always inspect the raw response first:
import requests
resp = requests.post("https://httpbingo.org/post", json={"key": "value"})
print(resp.text[:300])  # raw server response as a string
But handling JSON as plain text is annoying: you end up doing manual parsing or debugging weird string-formatting issues. Instead, use .json() to turn the response directly into a Python dictionary:
data = resp.json()
print(data["json"]) # {'key': 'value'}
That's it: .json() returns Python types (dicts, lists, strings, numbers) that you can work with immediately. If the response isn't valid JSON, .json() will raise an exception which usually means the server returned an HTML error page, a redirect, or some kind of challenge page.
If you're unsure what you got, print a slice of the text:
print(resp.text[:300])
In other words: get comfortable with .json(). It's the normal way to interact with API responses in Python, and it fits naturally into the Python requests POST JSON workflow you'll use everywhere.
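If you want to guard against that case explicitly, one option (just a sketch, not the only way) is to catch the decoding error and fall back to a text preview:

import requests

resp = requests.post("https://httpbingo.org/post", json={"key": "value"})

try:
    data = resp.json()
except ValueError:
    # Newer Requests versions raise requests.exceptions.JSONDecodeError,
    # which subclasses ValueError, so this catch works across versions.
    print("Not JSON, preview:", resp.text[:300])
else:
    print(data["json"])  # {'key': 'value'}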
Reading JSON through ScrapingBee
The nice part: nothing changes. When you send a POST request through ScrapingBee, the response you get back is still a normal Requests response object. That means:
- .text works
- .json() works
- .status_code works
- resp.raise_for_status() works
So the JSON handling is identical:
import requests
resp = requests.post(
"https://app.scrapingbee.com/api/v1",
params={
"api_key": "SCRAPINGBEE_API_KEY",
"url": "https://httpbingo.org/post",
},
json={"key": "value"},
timeout=30,
)
data = resp.json()
print(data["json"]) # {'key': 'value'}
Handling POST errors and status codes
POST requests fail more often than we'd like. Maybe the server expected JSON and you sent form data. Maybe you forgot an auth header. Maybe the site is actively blocking automated traffic. When things go wrong, you want your code to fail loudly, not keep going with half-broken responses.
Here are some common HTTP status codes you'll hit when making a POST request in Python:
| Status | Meaning | What It Usually Means |
|---|---|---|
| 400 | Bad Request | The server didn't like your payload. Check field names, JSON formatting, data types. |
| 401 | Unauthorized | You're missing credentials (token, cookie, session). Fix auth headers or API keys. |
| 403 | Forbidden | The server recognized you but still won't let you in. Could be bot detection so proxies/headers/sessions might help. |
| 500 | Internal Server Error | Server-side issue. Retrying later often works (not your fault... well, usually). |
A good default habit:
resp.raise_for_status()
If something went wrong, this stops your code right away instead of letting bad data trickle downstream.
Error handling with plain requests
Let's take a look at the following example:
import requests
resp = requests.post("https://httpbingo.org/status/401", json={"key": "value"})
try:
    resp.raise_for_status()
except requests.HTTPError as e:
    print("Request failed:", e, "| Status:", resp.status_code)
Key points:
- We send a POST request like normal.
- raise_for_status() will throw an exception if the status code is 4xx or 5xx.
- Wrapping it in try/except lets us handle the error cleanly instead of crashing the script.
- e includes the HTTP error message, and resp.status_code lets you inspect what went wrong.
This gives you a clean, readable exception with the status code baked in.
Error handling with ScrapingBee
Same approach. ScrapingBee still returns a normal requests.Response object, so the workflow doesn't change:
import requests
resp = requests.post(
"https://app.scrapingbee.com/api/v1",
params={
"api_key": "SCRAPINGBEE_API_KEY",
"url": "https://httpbingo.org/status/403",
},
json={"key": "value"},
timeout=20,
)
if not resp.ok:
    print("ScrapingBee request failed:", resp.status_code)
    resp.raise_for_status()
The golden rule for POST responses:
- Send the request.
- Check resp.ok or resp.status_code.
- Call raise_for_status() before touching .json().
This keeps your logic predictable and debugging straightforward, especially when the server or network gets moody.
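If you find yourself repeating those three steps, you can fold them into a tiny helper. This is just a convenience sketch (post_json is our own name, not part of Requests):

import requests

def post_json(url, payload, **kwargs):
    """POST JSON, fail loudly on HTTP errors, and return the parsed body."""
    resp = requests.post(url, json=payload, timeout=kwargs.pop("timeout", 10), **kwargs)
    resp.raise_for_status()  # 4xx/5xx -> requests.HTTPError
    return resp.json()       # raises if the body isn't valid JSON

data = post_json("https://httpbingo.org/post", {"key": "value"})
print(data["json"])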
Making POST requests within a session
If you're going to send more than one request to the same site, use a requests.Session. It's one of those small upgrades that makes code cleaner and faster without extra complexity.
Why bother?
- Connection reuse: The session keeps the TCP connection open. That means less latency and fewer HTTPS handshakes.
- Cookies remembered: If the server sets a session cookie during login, the session automatically sends it on future requests.
- Shared defaults: Set headers, params, or timeouts once, so every request inherits them.
- Cleaner code: No repeating the same boilerplate everywhere.
Plain requests session (baseline)
Check the following example:
import requests
s = requests.Session()
# Defaults shared by all requests made with this session
s.headers.update({"User-Agent": "requests-session-demo/1.0"})
s.params = {"lang": "en"} # appended to the query string of every request
# Example login-like POST (httpbin just echoes back what we send)
login = s.post(
"https://httpbingo.org/post",
data={"username": "alice", "password": "s3cr3t"}
)
login.raise_for_status()
# Follow-up request, reusing cookies, connection, and defaults
r = s.get("https://httpbingo.org/anything")
r.raise_for_status()
# Show that the default header was carried forward
print(r.json()["headers"]["User-Agent"]) # -> requests-session-demo/1.0
What's happening:
- We create s = requests.Session().
- We define defaults once (headers.update, s.params).
- On s.post(...), if the server sets cookies (like a login token), the session automatically stores them.
- On s.get(...), the session reuses the same connection and sends the stored cookies and headers without extra work.
This is the core benefit: a session gives you continuity.
ScrapingBee session with retries, defaults, and per-request overrides
This is the same requests.Session() pattern as before; we just point the requests at ScrapingBee and add retry logic. The idea is:
- Set defaults once (API key, target URL, headers).
- Define retry logic so flaky networks or rate limits don't break the script.
- Override things per request when needed.
Here's an example:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
SB_ENDPOINT = "https://app.scrapingbee.com/api/v1"
s = requests.Session()
# Defaults shared by all POSTs through this session
s.params = {
"api_key": "YOUR_API_KEY",
"url": "https://httpbingo.org/post",
"forward_headers": "true", # allows forwarding headers to the destination site
}
# Headers used on every request
# (Headers starting with "Spb-" are forwarded to the *target* site)
s.headers.update({
"Spb-Authorization": "Bearer example-token",
"Accept": "application/json",
})
# Add retry logic so transient errors auto-retry instead of crashing
retry = Retry(
total=3,
backoff_factor=0.5,
status_forcelist=[429, 500, 502, 503, 504],
respect_retry_after_header=True,
allowed_methods={"POST"},
)
s.mount("https://", HTTPAdapter(max_retries=retry))
# First POST (uses default url + headers + params)
r1 = s.post(SB_ENDPOINT, json={"action": "ping"}, timeout=10)
r1.raise_for_status()
print("first:", r1.json().get("json"))
# Override the target URL and add a one-off header
r2 = s.post(
SB_ENDPOINT,
# clone and override the target URL
params={**s.params, "url": "https://httpbingo.org/anything"},
json={"another": "payload"},
headers={"Spb-Accept-Language": "en-US"}, # forwarded to the target
timeout=10,
)
r2.raise_for_status()
print("second:", r2.status_code)
Key takeaways:
- The session keeps connections warm and remembers cookies.
- s.params and s.headers act as defaults you don't have to repeat.
- The retry adapter makes things more robust against temporary errors or rate limits.
- For one-off behavior, override params={...} or headers={...} on that specific request.
- By default, the retry mechanism in urllib3 (and therefore Requests) only retries idempotent HTTP methods (like GET and HEAD). POST is not retried by default, because POST usually represents an action that changes something on the server; retrying it blindly could create duplicates or repeat side effects (like multiple form submissions). If you do want to retry POST, you must explicitly enable it as shown in the code above: allowed_methods={"POST"}.
This pattern fits well when you're doing a Python requests session POST workflow repeatedly: log in once, send several follow-up requests, keep things efficient and tidy.
Add automatic retries with Tenacity
Earlier, we added retry logic using Retry and HTTPAdapter. That works great when you want retries applied inside a requests.Session, but it's a bit configuration-heavy. If you want something simpler and more readable, the Tenacity library is a nice alternative. It retries a function automatically whenever it raises an exception.
Install it:
uv add tenacity
Tenacity is a solid choice because:
- It works with any code, not just Requests
- No custom adapter setup required
- Retries only the calls you choose
- The retry behavior is visible and explicit
ScrapingBee session + Tenacity retries
Below, we keep the session defaults (headers, params), but use Tenacity to retry a specific POST call when it fails.
import requests
from tenacity import retry, wait_fixed, stop_after_attempt
SB_ENDPOINT = "https://app.scrapingbee.com/api/v1"
# Create a session so cookies, headers, and TCP connections are reused
s = requests.Session()
# Defaults shared by all calls made through this session
s.params = {
"api_key": "YOUR_API_KEY",
"url": "https://httpbingo.org/post",
"forward_headers": "true", # Needed if forwarding headers to the target site
}
# Default headers for every request (Spb- headers get forwarded to the target site)
s.headers.update({
"Spb-Authorization": "Bearer example-token",
"Accept": "application/json",
})
# This function will automatically retry if `resp.raise_for_status()` errors
@retry(wait=wait_fixed(1), stop=stop_after_attempt(3))
def post_with_retry(payload, **extra):
    resp = s.post(SB_ENDPOINT, json=payload, timeout=10, **extra)
    resp.raise_for_status()
    return resp
# First POST (uses default target URL and defaults)
resp1 = post_with_retry({"action": "ping"})
print("first:", resp1.json().get("json"))
# Override the target URL for this specific request
resp2 = post_with_retry(
{"another": "payload"},
params={**s.params, "url": "https://httpbingo.org/anything"},
headers={"Spb-Accept-Language": "en-US"}, # forwarded to target
)
print("second:", resp2.status_code)
What this pattern gives you:
- A session for cookie persistence and connection reuse
- Clean, readable retry behavior (no networking internals to configure)
- The ability to override params/headers per request
- A workflow that still looks like normal python requests post usage
Which retry method should you choose?
| Situation | Recommended Retry Style |
|---|---|
| You want simple, readable retries around specific calls | Tenacity |
| You want retries applied automatically to every request from a session | Retry + HTTPAdapter |
| You're writing scraping code that must survive 429/403/500 storms | Either works, so choose based on how you want the code to read |
Compression and performance
If you're sending big POST payloads (log batches, analytics events, product catalogs, embeddings, etc.), the size of your request body actually matters. Smaller bodies transfer faster, use less bandwidth, and are less likely to trigger rate limits or WAF heuristics.
Many APIs accept compressed request bodies; the only rule is that you must tell the server how you compressed them. Requests won't auto-compress for you, but doing it yourself takes just a couple of lines.
Gzip-compressing a POST body
If you're sending a big chunk of JSON, most of it is just repeated text. Gzip squishes that down so the request is smaller and gets to the server faster. Less bandwidth, quicker requests, fewer "payload too large" headaches. You just have to say how you compressed it with a Content-Encoding header:
import requests, gzip, json
payload = {"key": "value"} # Could be large in real scenarios
raw = json.dumps(payload).encode("utf-8")
compressed = gzip.compress(raw)
resp = requests.post(
"https://httpbingo.org/post",
data=compressed,
headers={
"Content-Encoding": "gzip",
"Content-Type": "application/json",
},
)
print(resp.status_code)
Key points:
- We serialize to JSON ourselves (json.dumps()) because json= expects to send uncompressed data.
- We compress the encoded bytes, not the Python dict.
- Content-Encoding: gzip tells the server how to decode the body before parsing it.
This pattern is useful when POST bodies regularly get large — hundreds of KB to several MB.
Brotli-compressing a POST body
Brotli is another compression option. It usually squeezes JSON even smaller than gzip, which is great when you're sending big payloads a lot. The trade-off: Brotli uses a bit more CPU to compress, so it's slightly slower on the client side.
Install it if needed:
uv add brotli
Then:
import requests, brotli, json
payload = {"key": "value"}
raw = json.dumps(payload).encode("utf-8")
compressed = brotli.compress(raw)
resp = requests.post(
"https://httpbingo.org/post",
data=compressed,
headers={"Content-Encoding": "br"} # "br" = Brotli
)
print(resp.status_code)
If you care about minimizing bandwidth, Brotli is your guy. If you care more about raw speed, gzip is usually enough.
Compressed POST through ScrapingBee
ScrapingBee doesn't unpack or re-compress your request body; it just forwards it. So if you send a gzipped POST, it'll arrive gzipped on the target site, exactly as intended.
The only detail to watch: if the target site needs to see the Content-Encoding header, you must forward it (which means using forward_headers=true and prefixing the header with Spb-).
import requests, gzip, json
payload = json.dumps({"search": "python"}).encode("utf-8")
compressed = gzip.compress(payload)
resp = requests.post(
"https://app.scrapingbee.com/api/v1",
params={
"api_key": "SCRAPINGBEE_API_KEY",
"url": "https://httpbingo.org/post",
"forward_headers": "true", # allow header forwarding
},
data=compressed,
headers={
"Spb-Content-Encoding": "gzip", # becomes Content-Encoding: gzip on the target
"Spb-Content-Type": "application/json",
},
timeout=20,
)
print(resp.status_code)
print(resp.json().get("data")) # confirms the body was transmitted intact
Why this matters in practice:
- If you're pushing large JSON batches, compression cuts request size dramatically.
- Smaller payloads mean faster requests and fewer rate-limit triggers.
- ScrapingBee forwards the request as-is, so you don't need special integration code.
When should you compress?
| Scenario | Compression Worth It? | Reason |
|---|---|---|
| Small JSON payloads (under ~5KB) | Probably not | Compression overhead is bigger than the savings |
| Uploading large analytics or event batches | Yes | Reduces bandwidth and lowers rate-limit pressure |
| APIs with strict payload size limits | Definitely | Compression gives you room to send more data safely |
| Lots of tiny real-time requests | No | Compression adds latency with no real benefit |
A note on performance and concurrency
requests is synchronous. Each .post() call waits for the response before moving on. If you're just sending a few POSTs, this is totally fine. But if you need to send hundreds or thousands of POST requests, one after another, the waiting becomes the bottleneck. Your script will spend most of its time idle.
If you need high throughput, switch to an async client:
| Option | Why use it |
|---|---|
| httpx.AsyncClient | Modern, requests-like API, great default choice for async |
| aiohttp | Lower-level control, very fast, trusted in production scraping pipelines |
The trade-off is simple:
- requests → easy, blocking, great for simple workflows.
- httpx.AsyncClient / aiohttp → faster, concurrent, great for large-scale tasks.
Here's a small example with asyncio:
import asyncio, httpx
async def main():
    async with httpx.AsyncClient(timeout=10) as client:
        r = await client.post("https://httpbingo.org/post", json={"k": "v"})
        r.raise_for_status()
        print(r.json()["json"])

asyncio.run(main())
So:
- If you're sending a login POST + a few follow-up calls → stick with requests.
- If you're firing 1000+ POSTs to APIs or scraping endpoints → async will save you quite a bit of run time.
Ready to scrape smarter with Python?
You've now got the core workflow down: sending POST requests with Requests, sending JSON or form data, handling files, using sessions, retrying, even compressing payloads. That's enough to talk to most APIs and basic endpoints.
But when you start working with real-world websites (logins, rate limits, bot checks, JavaScript-rendered pages) you'll quickly hit friction. Handling proxies, retries, CAPTCHAs, and session state manually turns into a whole separate engineering problem.
ScrapingBee solves that: you keep writing normal requests.post() calls, and ScrapingBee takes care of the proxy rotation, JS rendering, and anti-bot defenses in the background. No DIY proxy pool, no infrastructure hassle.
Try it free (plenty of credits to test) at app.scrapingbee.com/account/register.
Conclusion
We covered the whole journey: sending JSON, form fields, files, sessions, retries, compression, all with requests.post() at the center. That's the strength of Requests: it stays simple even as your use case grows. You write the intent, not boilerplate.
A useful rule of thumb: start with Requests first. Many workflows don't need a browser, automation framework, or anything heavy; a clean Python requests POST flow often does the job. And when you do run into rate limits, fingerprinting, login walls, or sites that require JavaScript to render, you don't throw all this away. You keep your same code and route it through ScrapingBee. Same .post(), just more reach.
If you know you'll be working with sites that push back, you'll eventually want proxies in the mix. You can go deeper here: Using Proxies with Python Requests.
Python POST requests FAQs
What is the difference between data and json in Python Requests POST?
- data= sends your payload as form-encoded fields by default when given a dict. Think classic login forms: the server expects application/x-www-form-urlencoded.
- json= automatically serializes your dict to JSON and adds Content-Type: application/json for you.
So:
# Form data (like a web login)
requests.post(url, data={"username": "alice", "password": "s3cr3t"})
# JSON API payload (modern REST/GraphQL-style endpoints)
requests.post(url, json={"username": "alice", "password": "s3cr3t"})
Use data= when dealing with web forms. Use json= when talking to APIs.
How do I send form data with Python Requests?
Use data= and pass a dict. Requests will encode it as application/x-www-form-urlencoded:
requests.post("https://example.com/login", data={"user": "alice", "pw": "secret"})
If you need to send files and fields, use data= (fields) and files= (file parts) together. Note that if you pass a dict to data= and also use files=, Requests builds multipart/form-data (not urlencoded).
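For example, here's a quick sketch of mixing the two (the field names and report.pdf path are made up for illustration):

import requests

fields = {"title": "Quarterly report", "visibility": "private"}

with open("report.pdf", "rb") as f:
    files = {"file": ("report.pdf", f, "application/pdf")}
    r = requests.post("https://httpbingo.org/post", data=fields, files=files, timeout=15)

print(r.request.headers.get("Content-Type"))  # multipart/form-data; boundary=...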
How can I upload a file with Python Requests POST?
Use files= with an open file handle:
with open("photo.jpg","rb") as f:
requests.post(url, files={"file": ("photo.jpg", f, "image/jpeg")})
This automatically sends multipart/form-data.
How do I handle errors in Python Requests POST?
Use raise_for_status() for clean failure paths:
try:
    r = requests.post(url, json=payload, timeout=10)
    r.raise_for_status()
except requests.HTTPError as e:
    print(e, r.status_code)
Fail early, debug faster. Check requests docs on error codes to learn more.
Can I use Python Requests POST with sessions and authentication?
Yep. requests.Session() persists cookies/headers across calls. Add auth tokens or basic auth once:
s = requests.Session()
s.headers["Authorization"] = "Bearer TOKEN"
r = s.post(url, json=payload)
No copy-pasting headers everywhere.
How do I send compressed data in a Python POST request?
Compress first, then set the Content-Encoding header:
import gzip, json, requests
payload = gzip.compress(json.dumps({"k":"v"}).encode())
requests.post(url, data=payload, headers={"Content-Encoding":"gzip"})
Useful when sending large batches or analytics uploads.
What is the best way to make multiple POST requests concurrently in Python?
requests is synchronous, so for real concurrency use:
- httpx.AsyncClient — simple async APIs
- aiohttp — fine control and high throughput
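As a rough sketch of the httpx approach, here's how you might fire several POSTs concurrently with asyncio.gather (the payloads and count are arbitrary):

import asyncio
import httpx

async def send_all(payloads):
    async with httpx.AsyncClient(timeout=10) as client:
        # Launch all POSTs at once and wait for them together
        tasks = [client.post("https://httpbingo.org/post", json=p) for p in payloads]
        responses = await asyncio.gather(*tasks)
    for r in responses:
        r.raise_for_status()
    return [r.json()["json"] for r in responses]

results = asyncio.run(send_all([{"n": i} for i in range(10)]))
print(results)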
Can I use ScrapingBee to send POST requests with Python?
Yes. Same requests.post() flow, but routed through ScrapingBee:
- You send your POST (data/json/files)
- ScrapingBee handles proxies, retries, optional JS rendering
- Your code doesn't change much
Check the ScrapingBee docs to learn more.


