Integration Guide & API Documentation
ProxyShare works as a standard HTTP forward proxy. Point any HTTP client at our gateway and every request exits through a different residential IP. No SDK, no library, no vendor lock-in.
Connection Details
| Setting | Value |
| --- | --- |
| Proxy Host | p.proxyshare.io |
| Port | 8080 |
| Authentication | username:password (provided in dashboard) |
| Protocols | HTTP, HTTPS (via CONNECT) |
| Rotation | New IP per request (default) |
Quick Start
The fastest way to verify your setup. Replace USER and PASS with your dashboard credentials.
```bash
curl -x http://USER:PASS@p.proxyshare.io:8080 https://httpbin.org/ip
```
You should see a JSON response with a residential IP address that differs from your own.
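Because the credentials are embedded in the proxy URL, any reserved characters in your username or password (such as `@` or `:`) must be percent-encoded first, or URL parsing in your client will break. A minimal Python sketch (the `proxy_url` helper is illustrative, not part of any SDK):

```python
from urllib.parse import quote

def proxy_url(user: str, password: str,
              host: str = "p.proxyshare.io", port: int = 8080) -> str:
    # Percent-encode credentials so characters like '@' or ':'
    # cannot be mistaken for URL delimiters
    return f"http://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}"

print(proxy_url("USER", "p@ss:word"))
# http://USER:p%40ss%3Aword@p.proxyshare.io:8080
```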
Python (requests)
The requests library is the most common HTTP client in Python. Pass the proxy as a dictionary and every call will automatically use a new residential IP.
```python
import requests

proxy_url = "http://USER:PASS@p.proxyshare.io:8080"
proxies = {
    "http": proxy_url,
    "https": proxy_url,
}

response = requests.get(
    "https://httpbin.org/ip",
    proxies=proxies,
    timeout=30,
)
print(response.json())

# Scraping multiple pages — each request gets a fresh IP
urls = [
    "https://example.com/page/1",
    "https://example.com/page/2",
    "https://example.com/page/3",
]
for url in urls:
    resp = requests.get(url, proxies=proxies, timeout=30)
    print(f"{url} — status {resp.status_code}")
```
Node.js
Two popular approaches: axios with https-proxy-agent, or node-fetch with the same proxy agent.
axios + https-proxy-agent
```javascript
import axios from "axios";
import { HttpsProxyAgent } from "https-proxy-agent";

const agent = new HttpsProxyAgent("http://USER:PASS@p.proxyshare.io:8080");

const response = await axios.get("https://httpbin.org/ip", {
  httpsAgent: agent,
  httpAgent: agent,
  // Disable axios's own proxy handling so it doesn't conflict with the agent
  proxy: false,
  timeout: 30000,
});
console.log(response.data);
```
node-fetch + https-proxy-agent
```javascript
import fetch from "node-fetch";
import { HttpsProxyAgent } from "https-proxy-agent";

const agent = new HttpsProxyAgent("http://USER:PASS@p.proxyshare.io:8080");

const response = await fetch("https://httpbin.org/ip", {
  agent,
  // node-fetch v3 dropped the `timeout` option; use an AbortSignal instead
  signal: AbortSignal.timeout(30000),
});
const data = await response.json();
console.log(data);
```
Scrapy
Scrapy has built-in proxy support via the HttpProxyMiddleware, which is enabled by default. It reads the proxy from each request's `meta['proxy']` key, falling back to the standard `http_proxy`/`https_proxy` environment variables (note: `HTTP_PROXY` is not a Scrapy setting). The simplest project-wide setup exports the environment variables from your settings module:

```python
# settings.py
# HttpProxyMiddleware is enabled by default; it reads request.meta["proxy"]
# or, failing that, the standard http_proxy / https_proxy environment variables.
import os

os.environ["http_proxy"] = "http://USER:PASS@p.proxyshare.io:8080"
os.environ["https_proxy"] = "http://USER:PASS@p.proxyshare.io:8080"

# Recommended settings for proxy usage
CONCURRENT_REQUESTS = 32
DOWNLOAD_TIMEOUT = 30
RETRY_TIMES = 3
RETRY_HTTP_CODES = [500, 502, 503, 504, 408, 429]
```
Alternatively, set the proxy per request in your spider by yielding requests with `meta={'proxy': 'http://USER:PASS@p.proxyshare.io:8080'}`.
Puppeteer
For JavaScript-rendered pages, use Puppeteer with the --proxy-server launch argument. Authenticate via page.authenticate().
```javascript
import puppeteer from "puppeteer";

const browser = await puppeteer.launch({
  args: ["--proxy-server=http://p.proxyshare.io:8080"],
});
const page = await browser.newPage();
await page.authenticate({
  username: "USER",
  password: "PASS",
});

await page.goto("https://httpbin.org/ip", {
  waitUntil: "domcontentloaded",
  timeout: 30000,
});
const body = await page.evaluate(() => document.body.innerText);
console.log(body);

await browser.close();
```
Playwright
Playwright accepts a proxy option at browser launch (or per browser context), making it straightforward to route all traffic through ProxyShare.
```javascript
import { chromium } from "playwright";

const browser = await chromium.launch({
  proxy: {
    server: "http://p.proxyshare.io:8080",
    username: "USER",
    password: "PASS",
  },
});
const page = await browser.newPage();

await page.goto("https://httpbin.org/ip", { timeout: 30000 });
const content = await page.textContent("body");
console.log(content);

await browser.close();
```
Error Handling & Best Practices
Residential proxies route through real devices, so occasional timeouts or connection resets are normal. Build resilience into your client code.
- Set timeouts. Always configure a 30-second timeout. If a request hangs, retry it — the next attempt will use a different IP.
- Retry on failure. Implement 2-3 retries for connection errors, 502s, and 503s. Each retry automatically gets a fresh IP.
- Respect rate limits. Even with rotating IPs, sending thousands of concurrent requests to a single domain can trigger site-wide throttling. Start with 10-20 concurrent requests and scale up gradually.
- Handle 407 Proxy Auth Required. This means your credentials are incorrect or expired. Check your dashboard for current credentials.
- Monitor bandwidth usage. Track your consumption via the dashboard to avoid unexpected overages. Each response body counts toward your bandwidth.
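The rate-limit advice can be enforced client-side with a semaphore. A minimal asyncio sketch (the `gather_limited` helper and the stub `job` coroutine are illustrative; swap in your real proxied request coroutine):

```python
import asyncio

async def gather_limited(coros, limit=10):
    # The semaphore caps how many requests are in flight at once,
    # so the target site isn't hit with thousands of concurrent calls
    sem = asyncio.Semaphore(limit)

    async def run(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(run(c) for c in coros))

async def demo():
    async def job(i):
        await asyncio.sleep(0.01)  # stand-in for a proxied HTTP request
        return i
    return await gather_limited([job(i) for i in range(5)], limit=2)

print(asyncio.run(demo()))  # [0, 1, 2, 3, 4]
```

Start with `limit=10` to `20` per the guidance above, then scale up while watching error rates.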
In Python, the timeout and retry advice maps onto a requests Session like this:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(
    total=3,
    backoff_factor=1,
    status_forcelist=[500, 502, 503, 504, 429],
)
session.mount("http://", HTTPAdapter(max_retries=retries))
session.mount("https://", HTTPAdapter(max_retries=retries))
session.proxies = {
    "http": "http://USER:PASS@p.proxyshare.io:8080",
    "https": "http://USER:PASS@p.proxyshare.io:8080",
}

response = session.get("https://example.com", timeout=30)
print(response.status_code)
```
Ready to integrate?
Get your credentials and start routing traffic through residential IPs in minutes.
View Plans