The Requests library is Python's most popular tool for making HTTP requests. Whether you need to consume APIs, scrape web pages, download files, or integrate systems, Requests is the ideal choice thanks to its simplicity and power.

In this complete guide, you'll go from installation to advanced techniques like authentication, persistent sessions, and error handling. By the end, you'll be able to handle any type of HTTP request in Python.

📦 Installation and Setup

Requests isn't included in Python's standard library, so you need to install it via pip. Installation is extremely straightforward:

# Installing via pip
pip install requests

# Or using pip3 on Linux/Mac systems
pip3 install requests

To verify the installation was successful, you can import the module and check its version:

import requests

print(requests.__version__)

The library is maintained by the Python community and has excellent official documentation. You can check the complete documentation at requests.readthedocs.io.

🌐 Making Your First GET Request

The GET request is the most basic and common method for retrieving data from a server. With Requests, making a GET request is incredibly simple:

import requests

# Simple GET request
response = requests.get("https://api.github.com")

# Checking the response status
print(f"Status Code: {response.status_code}")
print(f"Reason: {response.reason}")

# Viewing the content (text)
print(response.text[:500])

The response object contains various information about the server's response:

  • status_code: HTTP status code (200 = OK, 404 = Not Found, etc.)
  • reason: Descriptive phrase for the status ("OK", "Not Found", etc.)
  • text: Response content in text format
  • json(): Content parsed as JSON
  • headers: Response headers
  • cookies: Cookies received
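You can explore these attributes without touching the network by building a Response object by hand (a synthetic example purely for illustration; in real code Requests fills these fields in from the server's reply):

```python
import io
from requests.models import Response

# Build a synthetic Response to illustrate the attributes above
response = Response()
response.status_code = 200
response.reason = "OK"
response.headers["Content-Type"] = "application/json"
response.raw = io.BytesIO(b'{"message": "hello"}')

print(response.status_code)              # 200
print(response.reason)                   # OK
print(response.headers["Content-Type"])  # application/json
print(response.json())                   # parsed into a Python dict
```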

Checking Status Safely

It's good practice to verify the request was successful before processing the data. Requests makes this easy with the ok attribute:

import requests

response = requests.get("https://api.github.com")

if response.ok:
    print("Request successful!")
    data = response.json()
    print(data)
else:
    print(f"Error: {response.status_code} - {response.reason}")

📋 Working with Query Parameters

Many APIs use query string parameters to filter or paginate results. Requests lets you pass these parameters in an organized way:

import requests

# Query parameters passed as a dictionary
params = {
    "page": 1,
    "per_page": 10,
    "sort": "created",
    "direction": "desc"
}

response = requests.get(
    "https://api.github.com/repos/python/cpython/releases",
    params=params
)

print(f"Generated URL: {response.url}")
print(f"Status: {response.status_code}")

# Viewing the returned JSON data
releases = response.json()
for release in releases[:3]:
    print(f"- {release['tag_name']}: {release['name']}")

This approach is much cleaner than manually concatenating strings, as Requests automatically handles URL encoding.
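You can see the encoding Requests performs without sending anything by preparing a URL manually. PreparedRequest is part of Requests' public API; the URL below is just an example:

```python
from requests.models import PreparedRequest

# Prepare a URL (without sending it) to inspect the encoding
req = PreparedRequest()
req.prepare_url("https://api.example.com/search", {"q": "hello world", "page": 2})

print(req.url)
```

The space in "hello world" is encoded automatically (as `q=hello+world`), and every parameter value is safely escaped.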

📤 Sending Data with POST

The POST method is used to send data to the server, whether to create new resources, submit forms, or send JSON payloads:

import requests

# Data to send (Python dictionary)
data = {
    "username": "test_user",
    "email": "[email protected]",
    "password": "secure_password123"
}

# Sending POST with form data
response = requests.post(
    "https://httpbin.org/post",
    data=data
)

print(f"Status: {response.status_code}")
print(response.json())

Sending JSON in the Request Body

Many modern APIs expect to receive data in JSON format. You can use the json parameter for this:

import requests

# JSON payload
payload = {
    "title": "New Post",
    "body": "Post content here",
    "userId": 1
}

response = requests.post(
    "https://jsonplaceholder.typicode.com/posts",
    json=payload
)

# The response automatically comes back as JSON
result = response.json()
print(f"Created ID: {result['id']}")
print(f"Title: {result['title']}")

Requests automatically sets the Content-Type header to application/json when you use the json parameter.
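You can verify this header behavior offline by preparing (not sending) a request; the URL here is only an example:

```python
import requests

# Prepare a request to inspect what the json= parameter generates
req = requests.Request("POST", "https://api.example.com/posts", json={"title": "Hi"})
prepared = req.prepare()

print(prepared.headers["Content-Type"])  # application/json
print(prepared.body)                     # the serialized JSON payload
```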

🎯 Custom Headers

In some situations, you need to send custom headers in the request, such as authentication tokens, application identification, or format preferences:

import requests

# Defining custom headers
headers = {
    "Authorization": "Bearer your_token_here",
    "Accept": "application/json",
    "User-Agent": "MyApp/1.0",
    "Accept-Language": "en-US,en;q=0.9"
}

response = requests.get(
    "https://api.example.com/protected-data",
    headers=headers
)

print(f"Status: {response.status_code}")
print(response.json())

🔐 Authentication with Requests

Requests natively supports various authentication methods. The most common are Basic Auth and Bearer Token:

Basic Authentication

import requests
from requests.auth import HTTPBasicAuth

# Basic authentication (username and password)
auth = HTTPBasicAuth("user", "password")

response = requests.get(
    "https://httpbin.org/basic-auth/user/password",
    auth=auth
)

print(f"Status: {response.status_code}")
print(response.text)

Bearer Token (JWT)

import requests

# Bearer Token for APIs using JWT
headers = {
    "Authorization": "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
}

response = requests.get(
    "https://api.example.com/user/profile",
    headers=headers
)

print(response.json())

For more complex authentication, you can implement your own authentication class by inheriting from requests.auth.AuthBase. Official documentation on authentication is available at docs.python-requests.org.
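A minimal sketch of such a class follows; the header name X-API-Token is just an example, not a standard:

```python
import requests
from requests.auth import AuthBase

class TokenAuth(AuthBase):
    """Attaches a custom token header to every outgoing request."""

    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        # r is the PreparedRequest about to be sent
        r.headers["X-API-Token"] = self.token
        return r

# Preparing (not sending) a request shows the header being applied
req = requests.Request("GET", "https://api.example.com/data", auth=TokenAuth("secret"))
prepared = req.prepare()
print(prepared.headers["X-API-Token"])  # secret
```

In real code you'd simply pass `auth=TokenAuth("secret")` to requests.get or to a session.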

🍪 Working with Cookies and Sessions

Cookies are essential for maintaining state between requests, such as user sessions. Requests makes working with cookies easy:

import requests

# Checking response cookies
response = requests.get("https://httpbin.org/cookies")

print("Received cookies:")
print(response.json())

# Sending custom cookies
cookies = {
    "session_id": "abc123xyz",
    "user_preference": "dark_mode"
}

response = requests.get(
    "https://httpbin.org/cookies",
    cookies=cookies
)

print(response.json())

Using Persistent Sessions

When you need to make multiple requests while maintaining the same cookies and settings, using a Session is much more efficient:

import requests

# Create a session
session = requests.Session()

# The session automatically keeps cookies between requests
session.headers.update({"User-Agent": "MyBot/1.0"})

# First request - login
login_data = {"username": "admin", "password": "123456"}
response = session.post("https://example.com/login", data=login_data)
print(f"Login: {response.status_code}")

# Second request - already authenticated
response = session.get("https://example.com/dashboard")
print(f"Dashboard: {response.status_code}")

# Third request - continue using the session
response = session.get("https://example.com/profile")
print(f"Profile: {response.status_code}")

Sessions also reuse the underlying TCP connections, which improves performance when you make many requests to the same host. For more details about sessions, check the advanced documentation.
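Sessions can also be used as context managers, which releases pooled connections when you're done. The snippet below (no network needed) shows how session-level headers merge with per-request headers; the URL is just an example:

```python
import requests

with requests.Session() as session:
    # Session-level defaults apply to every request made through it
    session.headers.update({"Accept": "application/json"})

    # Per-request headers are merged on top of the session defaults
    req = requests.Request("GET", "https://api.example.com", headers={"X-Debug": "1"})
    prepared = session.prepare_request(req)

    print(prepared.headers["Accept"])   # application/json
    print(prepared.headers["X-Debug"])  # 1
```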

⏱️ Timeouts and Error Handling

By default, Requests waits indefinitely for a response. It's crucial to set timeouts to prevent your application from hanging:

import requests
from requests.exceptions import Timeout, ConnectionError

# 5 second timeout (connect and read)
try:
    response = requests.get(
        "https://api.example.com/data",
        timeout=5
    )
    print(response.json())
except Timeout:
    print("Timeout exceeded!")
except ConnectionError:
    print("Connection error!")
except requests.exceptions.RequestException as e:
    print(f"General error: {e}")

Available Exceptions

Requests raises several exceptions you can catch:

  • requests.exceptions.RequestException: Base class for all exceptions
  • requests.exceptions.ConnectionError: Network connection error
  • requests.exceptions.Timeout: Timeout exceeded
  • requests.exceptions.HTTPError: HTTP response with error (4xx or 5xx)
  • requests.exceptions.TooManyRedirects: Too many redirects
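Because they all inherit from RequestException, a single except clause can serve as a catch-all, which you can verify directly:

```python
import requests

# Every Requests exception shares the same base class
for exc in (
    requests.exceptions.ConnectionError,
    requests.exceptions.Timeout,
    requests.exceptions.HTTPError,
    requests.exceptions.TooManyRedirects,
):
    print(exc.__name__, issubclass(exc, requests.exceptions.RequestException))
```

Every line prints True, so `except requests.exceptions.RequestException` catches any of them.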

Automatically Checking for HTTP Errors

You can also have Requests raise an exception automatically when the response carries an HTTP error status by calling raise_for_status():

import requests
from requests.exceptions import HTTPError

# Raise an exception for error statuses
response = requests.get("https://httpbin.org/status/404")
response.raise_for_status()  # Raises HTTPError for 4xx/5xx
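You can see raise_for_status in action without a network call by building a Response by hand (a synthetic example for illustration; real code gets the object from requests.get):

```python
import requests
from requests.models import Response

# Hand-built 404 response - real code receives this from the server
response = Response()
response.status_code = 404
response.reason = "Not Found"

try:
    response.raise_for_status()
except requests.exceptions.HTTPError as e:
    print(f"Caught: {e}")
```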

📥 Downloading Files and Streaming Data

For large files, you don't want to load everything into memory at once. Requests allows streaming data:

import requests

# Streaming download - doesn't load everything into memory
url = "https://example.com/large-file.zip"

response = requests.get(url, stream=True)

# Get total size (if available)
total_size = response.headers.get("content-length")
print(f"Total size: {total_size} bytes")

# Download in chunks
with open("file.zip", "wb") as f:
    for chunk in response.iter_content(chunk_size=8192):
        if chunk:
            f.write(chunk)

print("Download complete!")

The streaming technique is essential for large files or for processing data in real time. For images and other binary files, access the raw bytes via response.content, or stream them in chunks as above.
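The chunking behavior of iter_content can be observed offline by attaching an in-memory stream to a synthetic Response (for illustration only; real responses come from requests.get with stream=True):

```python
import io
from requests.models import Response

# Synthetic response backed by an in-memory stream
response = Response()
response.status_code = 200
response.raw = io.BytesIO(b"hello world, streamed in chunks")

chunks = list(response.iter_content(chunk_size=8))
print(chunks)            # four chunks of at most 8 bytes each
print(b"".join(chunks))  # the original payload, reassembled
```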

🔄 Retries and Retry Strategies

In production applications, you often need to retry requests that temporarily failed:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Configure a retry strategy
session = requests.Session()

retry_strategy = Retry(
    total=3,
    backoff_factor=1,
    status_forcelist=[429, 500, 502, 503, 504],
    allowed_methods=["GET", "POST"]
)

adapter = HTTPAdapter(max_retries=retry_strategy)
session.mount("http://", adapter)
session.mount("https://", adapter)

# Now requests will automatically retry
response = session.get("https://api.example.com/data")
print(f"Status: {response.status_code}")

This configuration retries the request up to 3 times, with exponential backoff between attempts, only for server errors (5xx) or rate limiting (429). Note that POST appears in allowed_methods above: only include it when the endpoint is idempotent, since retrying a non-idempotent POST can duplicate the operation.
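The sleep schedule can be computed by hand from urllib3's documented backoff formula, backoff_factor * 2**(retry_number - 1); with a factor of 1:

```python
backoff_factor = 1

# urllib3's documented backoff formula between consecutive retries
delays = [backoff_factor * 2 ** (n - 1) for n in range(1, 4)]
print(delays)  # [1, 2, 4]
```

So three retries wait roughly 1 s, 2 s, and 4 s (exact behavior can vary slightly between urllib3 versions).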

📊 Default Timeouts with Sessions

For applications that make many requests, it's tempting to set a single default timeout on the session. Be aware that requests.Session does not support this: assigning session.timeout has no effect, because Requests silently ignores that attribute. You must pass timeout explicitly on each call:

import requests

session = requests.Session()

# Note: session.timeout = 10 would be silently ignored -
# Requests has no session-wide timeout setting

# Pass the timeout explicitly on each request
response = session.get("https://api.example.com", timeout=30)

print(response.status_code)
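If you want a timeout applied automatically to every request on a session, one common pattern is a small Session subclass (TimeoutSession is a hypothetical helper name, not part of Requests):

```python
import requests

class TimeoutSession(requests.Session):
    """Session that injects a default timeout into every request."""

    def __init__(self, timeout=10):
        super().__init__()
        self.default_timeout = timeout

    def request(self, method, url, **kwargs):
        # Only applies the default when the caller didn't pass one
        kwargs.setdefault("timeout", self.default_timeout)
        return super().request(method, url, **kwargs)

session = TimeoutSession(timeout=5)
# session.get(...) now uses timeout=5 unless overridden per call
```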

🛡️ SSL Verification

By default, Requests verifies SSL certificates. In development environments, you might need to disable this verification:

import requests

# Disable SSL verification (DO NOT use in production!)
response = requests.get(
    "https://example.com",
    verify=False
)

# To use a custom CA bundle instead of the system default
response = requests.get("https://example.com", verify="/path/to/cert.pem")

Important warning: Never disable SSL verification in production, as it exposes your application to man-in-the-middle attacks.

🔗 Integration with Web Scraping

Requests is often used together with BeautifulSoup for web scraping. After getting the HTML with Requests, you can parse it with BeautifulSoup:

import requests
from bs4 import BeautifulSoup

# Get the HTML page
response = requests.get("https://example.com/articles")
response.encoding = "utf-8"

# Parse the HTML
soup = BeautifulSoup(response.text, "html.parser")

# Extract article titles
for article in soup.select("article"):
    title = article.select_one("h2").text
    link = article.select_one("a")["href"]
    print(f"{title} -> {link}")

If you want to learn more about web scraping, we have a complete web scraping guide with Python that explores more advanced techniques.

✅ Best Practices and Performance

To use Requests efficiently in production, follow these practices:

  • Use Session: Reuse TCP connections with sessions for better performance
  • Set Timeouts: Always set timeouts to prevent hanging
  • Handle Errors: Catch exceptions appropriately
  • Use Retry: Implement retry with backoff for transient failures
  • Reuse Connections: Configure connection pooling for many requests
  • Log Requests: Add logging for debugging in production
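The logging item above is easy to wire up with Python's standard logging module; urllib3, the transport layer underneath Requests, logs connection activity at DEBUG level:

```python
import logging

# Route urllib3's connection logs (used internally by Requests)
# to the console for debugging
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)
```

With this in place, each request emits lines such as "Starting new HTTPS connection", which helps trace issues in production.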

📚 Additional Resources

To deepen your knowledge of Requests and HTTP in general, the official documentation at requests.readthedocs.io is the best starting point.

🚀 Conclusion

The Requests library is essential for any Python developer who needs to work with APIs, perform web scraping, or integrate systems. With its intuitive and powerful API, you can perform everything from simple requests to advanced configurations of authentication, sessions, and retry.

Now that you have a solid grasp of Requests, explore our complete Python guide for beginners to continue your learning journey.