Scrape Google Maps, Twitter, and LinkedIn with a Single API Call
Web scraping is a developer's Swiss Army knife—but managing multiple scraping libraries, handling rate limits, and parsing different HTML structures across platforms is a nightmare. You need Google Maps business data, Twitter threads, and LinkedIn profiles. Each requires different selectors, headers, and anti-bot strategies. What if you could handle all three with a single API call?
The Problem: Multi-Platform Scraping Complexity
Traditional scraping setups require:
- Multiple libraries: Selenium for JavaScript-heavy sites, BeautifulSoup for static HTML, Playwright for LinkedIn
- Proxy management: Rotating proxies to avoid IP bans
- Maintenance overhead: Sites change their DOM; your selectors break weekly
- Infrastructure costs: Running browser automation servers isn't cheap
- Rate limiting: Manual backoff strategies and retry logic
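That last bullet alone is a small project. A typical hand-rolled retry wrapper with exponential backoff and jitter looks something like this (illustrative generic code, not part of any particular library):

```python
import random
import time

def with_retries(fn, max_retries=5, base_delay=1.0, cap=30.0):
    """Call fn(); on failure, sleep with exponential backoff plus jitter, then retry."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            delay = min(cap, base_delay * 2 ** attempt)  # 1s, 2s, 4s, ... capped
            time.sleep(delay + random.uniform(0, delay * 0.1))  # add jitter
```

Multiply that by every platform you scrape and the maintenance burden adds up fast.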
AiPayGent's unified scraping endpoint abstracts this complexity. One API, multiple sources, built-in anti-detection, and Claude AI powering intelligent data extraction.
Getting Started: Free Tier & Credits
You get 10 free API calls per day without authentication. For production use, grab an API key at https://api.aipaygent.xyz and prepay in USDC on Base via the /buy-credits endpoint; authenticated calls then draw down your prepaid balance on a pay-per-use basis.
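Because the Authorization header is optional on the free tier, a small helper (a sketch; it only mirrors the headers used in the examples below) keeps the two modes explicit:

```python
def scrape_headers(api_key=None):
    """Build request headers for /scrape.

    Omit api_key to use the 10 free unauthenticated calls per day;
    pass your prepaid key for pay-per-use billing.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return headers
```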
Scraping Google Maps with AiPayGent
Let's extract restaurant data from a Google Maps search result:
curl -X POST https://api.aipaygent.xyz/scrape \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://www.google.com/maps/search/coffee+shops+near+san+francisco",
    "platform": "google_maps",
    "extract": {
      "fields": ["name", "rating", "reviews_count", "address", "phone", "website"]
    }
  }'
Python example:
import requests

api_url = "https://api.aipaygent.xyz/scrape"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"  # Optional; first 10 calls/day free
}
payload = {
    "url": "https://www.google.com/maps/search/coffee+shops+near+san+francisco",
    "platform": "google_maps",
    "extract": {
        "fields": ["name", "rating", "reviews_count", "address", "phone", "website"]
    }
}

response = requests.post(api_url, json=payload, headers=headers)
results = response.json()

for business in results.get("results", []):
    print(f"{business['name']} - {business['rating']}⭐ ({business['reviews_count']} reviews)")
    print(f"  {business['address']}")
    print()
Example response:
{
  "status": "success",
  "platform": "google_maps",
  "results": [
    {
      "name": "Ritual Coffee Roasters",
      "rating": 4.7,
      "reviews_count": 2341,
      "address": "1026 Valencia St, San Francisco, CA 94110",
      "phone": "(415) 641-1011",
      "website": "www.ritualcoffee.com"
    },
    {
      "name": "Blue Bottle Coffee",
      "rating": 4.5,
      "reviews_count": 5892,
      "address": "66 Mint St, San Francisco, CA 94103",
      "phone": "(510) 653-3394",
      "website": "www.bluebottlecoffee.com"
    }
  ],
  "credits_used": 1
}
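Responses carry a top-level status field, so it is worth checking it before touching results. A small guard, built only on the fields shown in the sample response:

```python
def parse_scrape_response(body: dict) -> list:
    """Return the results list from a /scrape response, raising on failure.

    Relies only on the documented fields: "status" and "results".
    """
    if body.get("status") != "success":
        raise RuntimeError(f"scrape failed: {body}")
    return body.get("results", [])
```

Pair it with a retry wrapper and non-success responses become retryable errors instead of silent empty loops.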
Scraping Twitter (X) Threads
Extract structured data from a Twitter thread:
curl -X POST https://api.aipaygent.xyz/scrape \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "url": "https://twitter.com/username/status/1234567890",
    "platform": "twitter",
    "extract": {
      "fields": ["author", "text", "likes", "retweets", "replies", "timestamp", "thread"]
    }
  }'
Python:
payload = {
    "url": "https://twitter.com/username/status/1234567890",
    "platform": "twitter",
    "extract": {
        "fields": ["author", "text", "likes", "retweets", "replies", "timestamp", "thread"]
    }
}

response = requests.post(api_url, json=payload, headers=headers)
tweet_data = response.json()

print(f"Tweet by @{tweet_data['author']} on {tweet_data['timestamp']}")
print(f"{tweet_data['text']}")
print(f"❤️ {tweet_data['likes']} | 🔄 {tweet_data['retweets']} | 💬 {tweet_data['replies']}")
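If the thread field comes back as an ordered list of tweet objects (an assumption; the Twitter response schema isn't shown here, so verify against a live response), reassembling the whole thread into readable text is a short helper:

```python
def flatten_thread(tweet: dict) -> str:
    """Join a root tweet and its thread into one readable string.

    Assumes 'thread' is an ordered list of tweet objects with a 'text' key;
    check the actual API response before relying on this shape.
    """
    parts = [tweet.get("text", "")]
    parts += [t.get("text", "") for t in tweet.get("thread", [])]
    return "\n\n".join(p for p in parts if p)
```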
Scraping LinkedIn Profiles
Extract professional information:
curl -X POST https://api.aipaygent.xyz/scrape \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "url": "https://www.linkedin.com/in/username",
    "platform": "linkedin",
    "extract": {
      "fields": ["name", "headline", "bio", "location", "experience", "education", "skills"]
    }
  }'
Response structure:
{
  "status": "success",
  "platform": "linkedin",
  "profile": {
    "name": "Jane Developer",
    "headline": "Full-Stack Engineer at TechCorp",
    "bio": "Building scalable systems...",
    "location": "San Francisco, CA",
    "experience": [
      {
        "title": "Senior Engineer",
        "company": "TechCorp",
        "duration": "2020 - Present"
      }
    ],
    "education": [...],
    "skills": ["Python", "React", "AWS", ...]
  },
  "credits_used": 2
}
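Note that LinkedIn results arrive under a profile key rather than results. The nested object maps cleanly onto a text summary; a sketch built against the response structure shown above:

```python
def summarize_profile(body: dict) -> str:
    """Format a /scrape LinkedIn response (the 'profile' object) as plain text."""
    p = body.get("profile", {})
    lines = [f"{p.get('name')} | {p.get('headline')} ({p.get('location')})"]
    for role in p.get("experience", []):
        lines.append(f"  {role.get('title')} at {role.get('company')}, {role.get('duration')}")
    if p.get("skills"):
        lines.append("  Skills: " + ", ".join(p["skills"]))
    return "\n".join(lines)
```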
Scaling & Cost Management
After your 10 free daily calls, purchase credits via /buy-credits. Costs vary by platform: Google Maps and Twitter calls typically cost 1 credit each, LinkedIn profiles 2. Monitor spend via the credits_used field returned with every response.
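Since every response reports credits_used, a tiny accumulator is enough to watch spend across a batch of calls:

```python
class CreditTracker:
    """Sum credits_used across /scrape responses to track spend per batch."""

    def __init__(self):
        self.total = 0

    def record(self, response_body: dict) -> int:
        """Add one response's credits_used and return the running total."""
        self.total += response_body.get("credits_used", 0)
        return self.total
```

Log the running total per job and you can alert before a large crawl drains your prepaid balance.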
Next Steps
Explore 140+ endpoints in the API Discover portal and review the full OpenAPI specification for advanced options like JavaScript rendering, proxy rotation, and custom extraction schemas.