Revenue Share

Publish your web scrapers and earn 10% revenue share

Publish your Playwright, Puppeteer, or Python scrapers to the CoreClaw marketplace and turn your code into recurring revenue. Join 100+ developers building data extraction tools on our growing scraper marketplace.

  • 100K+ monthly successful data record deliveries
  • 100+ active developers
  • 10% revenue share on every successful record
  • 24-hour average review time
Your 10% Revenue Share
$245.00

From successful data records

Script Name | User Usage | Commission
Google Search | 45,000 results | +$40.00
YouTube Channel | 30,000 results | +$27.00
TikTok Profile | 58,000 results | +$52.00
Total This Month | 133,000 results | +$119.00
Note: This table shows one month of data from our top-performing scrapers.
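The figures in the table can be cross-checked with a short script. The per-record prices behind each commission are not published on this page, so the check only confirms that the three scraper rows sum to the monthly totals shown:

```python
# Cross-check the example commission table: the three scraper rows
# should sum to the "Total This Month" figures shown above.
rows = {
    "Google Search": (45_000, 40.00),
    "YouTube Channel": (30_000, 27.00),
    "TikTok Profile": (58_000, 52.00),
}

total_records = sum(records for records, _ in rows.values())
total_commission = sum(commission for _, commission in rows.values())

print(total_records)     # 133000
print(total_commission)  # 119.0
```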
How you earn

How to earn 10% revenue share from your scrapers

You build it, we operate it, and you earn a 10% revenue share on every successful data record users receive.

Level | Type | Monthly Successful Records | Your 10% Revenue Share
Starter | Your first scraper | 10K–20K records | ~$9–$18/month
Growing | Popular niche scraper | 30K–50K records | ~$27–$45/month
Top Performer | High-demand scraper | 100K–150K records | ~$89–$134/month

Formula: Users' actual spend (successful records × per-record price) × 10% = your revenue share
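As a sketch, the formula translates directly into code. The $0.009 per-record price used in the example is an assumption inferred from the tier table above, not a published rate:

```python
def revenue_share(successful_records: int, per_record_price: float,
                  share_rate: float = 0.10) -> float:
    """Your payout: users' actual spend (records x price) x 10% share."""
    user_spend = successful_records * per_record_price
    return user_spend * share_rate

# Example: a "Growing"-sized scraper at an assumed $0.009/record price
print(round(revenue_share(30_000, 0.009), 2))  # 27.0
```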

Why publish

Focus on development; revenue runs on autopilot

You own the core logic and user experience; we handle execution, marketing, and payouts.

You Focus On

Focus your time on the product itself while we continuously monitor the quality of your results.

  • Data extraction logic
  • Data parsing & cleaning
  • Feature iterations
  • Documentation
  • Community support

We Handle

Infrastructure, compliance, and operations are all managed by CoreClaw.

  • Proxy rotation (residential IPs)
  • Anti-bot bypass (CAPTCHA solving, fingerprint masking)
  • Cloud scaling (auto-scale to handle demand)
  • Billing & payouts (global withdrawal)
  • Marketing distribution (SEO, homepage features)

CoreClaw

Infrastructure Included:

  • Serverless environment with auto-scaling
  • Auto-retry & error handling
  • Webhook & API delivery
  • Built-in proxy pool (10M+ IPs)
  • Real-time logs & analytics
  • 99.9% uptime SLA
Developer experience

Build with tools you already know

Python (Scrapy, BeautifulSoup, Selenium, Requests, Playwright)
Node.js (Playwright, Puppeteer, Cheerio, Axios)
Go (Colly, Rod)
Docker (bring your own container)
Python
#!/usr/bin/env python3
import requests
import json
from typing import Dict, Any

# API URL
API_URL = "https://openapi.coreclaw.com/api/v1/scraper/run"

# Your API KEY
API_KEY = "<YOUR_API_KEY>"

# Request timeout (seconds)
TIMEOUT = 30

def run_scraper(params: Dict[str, Any], api_key: str) -> Dict[str, Any]:
    headers = {
        "api-key": api_key,
        "Content-Type": "application/json"
    }

    try:
        # Send POST request
        response = requests.post(
            API_URL,
            headers=headers,
            json=params,
            timeout=TIMEOUT
        )

        # Check HTTP status code
        if response.status_code != 200:
            return {
                "success": False,
                "run_slug": None,
                "error": f"HTTP error: {response.status_code} - {response.text}"
            }

        # Parse response
        result = response.json()

        # Check business error code
        if result.get("code") != 0:
            return {
                "success": False,
                "run_slug": None,
                "error": f"Business error: {result.get('message', 'Unknown error')} (code: {result.get('code')})"
            }

        # Return success result
        return {
            "success": True,
            "run_slug": result.get("data", {}).get("run_slug"),
            "error": None
        }

    except requests.exceptions.Timeout:
        return {
            "success": False,
            "run_slug": None,
            "error": f"Request timeout after {TIMEOUT} seconds"
        }
    except requests.exceptions.RequestException as e:
        return {
            "success": False,
            "run_slug": None,
            "error": f"Request error: {str(e)}"
        }
    except json.JSONDecodeError as e:
        return {
            "success": False,
            "run_slug": None,
            "error": f"JSON decode error: {str(e)}"
        }

def main():
    # Build request parameters
    request_params = {
        "scraper_slug": "01KG3VTPJMC1AYPTBPACKQKJBV",
        "version": "v1.0.7",
        "input": {
            "parameters": {
                "system": {
                    "proxy_region": "",
                    "cpus": 0.125,
                    "memory": 512,
                    "execute_limit_time_seconds": 1800,
                    "max_total_charge": 0,
                    "max_total_traffic": 0
                },
                "custom": {
                    "url": [
                        {
                            "url": "https://www.facebook.com/share/p/1K6xfHFkrK/",
                            "comments_sort": "All comments",
                            "limit_records": "",
                            "get_all_replies": ""
                        }
                    ]
                }
            }
        },
        "callback_url": "https://your-domain.com/callback"
    }

    # Send request
    print("Sending request to API...")
    result = run_scraper(request_params, API_KEY)

    # Handle result
    if result["success"]:
        print("Scraper run successful!")
        print(f"Run ID: {result['run_slug']}")
        print("You can use this ID to query run status and results")
    else:
        print("Request failed!")
        print(f"Error message: {result['error']}")

if __name__ == "__main__":
    main()
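The `callback_url` in the request above implies that CoreClaw POSTs run results back to your server when a run finishes. A minimal receiver might look like the sketch below; the `run_slug`/`status` payload fields are assumptions, so check the API docs for the actual callback schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_callback(body: bytes) -> dict:
    """Extract the fields we care about from a callback payload.
    The "run_slug" and "status" keys are assumed, not documented here."""
    payload = json.loads(body)
    return {
        "run_slug": payload.get("run_slug"),
        "status": payload.get("status"),
    }

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the POST body and acknowledge with 200 so the
        # platform does not retry the delivery.
        length = int(self.headers.get("Content-Length", 0))
        info = parse_callback(self.rfile.read(length))
        print(f"Run {info['run_slug']} finished with status {info['status']}")
        self.send_response(200)
        self.end_headers()

# To listen on the port your callback_url points at:
# HTTPServer(("0.0.0.0", 8000), CallbackHandler).serve_forever()
```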
Success Stories

Developers earning 10% revenue share

A developer-focused revenue-share platform that turns your code into recurring revenue.

Diana — Reverse Crawler Engineer (E-commerce monitor)
Tool: Amazon Price & Review Tracker
Monthly output: 25,000 records
Monthly earnings: ~$22

"Built it over a weekend. Now earning enough to cover my coffee expenses every month."

Alex — Data Engineer (Lead generation pro)
Tool: Google Maps Business Extractor
Monthly output: 50,000 records
Monthly earnings: ~$45

"Turned a side project into a sustainable business that covers my phone bill."

2-person team — Social media suite
Tool: Facebook + YouTube Monitoring Kit
Monthly output: 850,000 records
Monthly earnings: ~$760

"CoreClaw brings the customers. I just maintain the code."

From code to cash flow

From code to revenue share: an all-in-one monetization solution.

1

Build

Develop locally using our SDK and familiar tools (Playwright, Puppeteer, Python). Test with real proxies and anti-bot protection, and focus on delivering high-quality successful data records.

Playwright
Puppeteer
Python
Google Maps Lead Collection
Extract business names, phones, websites, ratings, reviews.
2

Submit for review

Most reviews are completed within 24 hours. Code verification includes a functionality check, data quality validation, and success-rate optimization.

Code verification
// CoreClaw SDK Code
import { Worker } from 'cafe-scraper';

Worker.main(async () => {
  const input = await Worker.getInput();
  const browser = await Worker.launch();
  const page = await browser.newPage();
  // ...
});
3

Start earning

Your tool goes live, users use it, and you earn 10% of all credits spent on successful data records.

FAQ

Frequently Asked Questions

How is my 10% revenue share calculated?

You earn 10% of the gross revenue generated by your scrapers in our marketplace. Revenue is calculated on successful data records delivered, not on run attempts: if a user runs your scraper 100 times but only gets 50 successful records, you earn 10% on those 50 records.