
Crunchbase API: Ultimate Guide + Code + Pricing (2026)

Which Crunchbase API endpoints solve this problem?

If you searched for Crunchbase API, here’s the first thing to get straight: Crunchbase Basic is not the full Crunchbase API. Basic gives you 3 endpoints; full API access requires an Enterprise or Applications license. That distinction matters a lot, because plenty of developers assume a normal Crunchbase subscription unlocks the full developer surface area. It does not. (If you want to get moving immediately, this guide includes a public GitHub repo you can clone and adapt.)

“Crunchbase Pro doesn't allow exports anymore, I paid $99 for nothing... Any solutions?”
— u/gorjuce in r/SaaS

I’ve bought enough data tools to know this pattern by heart. The pain usually starts when the product page, the plan name, and the actual API rights are three different conversations. Crunchbase is useful. But you need to know what it is before you wire it into anything.

💻 Full code on GitHub: crunchbase-api-vs-ninjapear-scripts-2026
Runnable sample scripts for CRM enrichment, prospecting, investor research, account prep, monitoring, and customer-facing product patterns.
git clone https://github.com/NinjaPear-Shares/crunchbase-api-vs-ninjapear-scripts-2026.git

What the Crunchbase API actually is

Crunchbase says the API is a read-only RESTful service that lets approved developers access Crunchbase data programmatically. Good. That already tells you the important bit: this is a retrieval layer, not a system you write back into.

At a high level, the sanctioned public API breaks down into three motions:

  1. Autocomplete, when you have messy text and need a canonical company.
  2. Entity lookup, when you know the entity and want the record.
  3. Search, when you want a filtered set of companies.

Then there is the richer layer: cards and broader entity families in the full API.

So the normal engineering flow looks like this:

  • resolve a company name with Autocomplete
  • fetch the company with Organization Entity Lookup
  • optionally pull related cards like funding rounds, investors, or acquisitions
  • use Search when you need a list instead of one company

That is the center of gravity.
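That flow can be sketched in a few lines of Python. This is a sketch against the v4 response shapes shown later in this guide; the `first_permalink` helper and the error handling are my additions, not part of the API.

```python
import requests

API = "https://api.crunchbase.com/api/v4"

def first_permalink(payload):
    """Pick the top autocomplete match; None when nothing matched."""
    entities = payload.get("entities", [])
    return entities[0]["identifier"]["permalink"] if entities else None

def resolve_and_fetch(raw_name, key, card_ids=None):
    """Autocomplete a messy name, then look up the organization record."""
    headers = {"X-cb-user-key": key}
    r = requests.get(f"{API}/autocompletes",
                     params={"query": raw_name}, headers=headers, timeout=30)
    r.raise_for_status()
    permalink = first_permalink(r.json())
    if permalink is None:
        return None
    params = {"card_ids": ",".join(card_ids)} if card_ids else {}
    r = requests.get(f"{API}/entities/organizations/{permalink}",
                     params=params, headers=headers, timeout=30)
    r.raise_for_status()
    return r.json()
```

Usage is one call: `resolve_and_fetch("Open AI", "YOUR_KEY", card_ids=["raised_funding_rounds"])`.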

Crunchbase API endpoints, broken down

This is the section I always want and rarely get.

If you are an engineer, you do not need another paragraph saying “Crunchbase offers rich company intelligence.” You need to know which endpoint family gives you which data shape.

Autocomplete endpoint

Endpoint: GET /api/v4/autocompletes

This is the helper endpoint. You use it when the input is messy:

  • a user typed “strip” when they meant Stripe
  • your CRM has “Open AI” instead of OpenAI
  • someone pasted a partial company name

What it is for:

  • turning fuzzy text into canonical entities
  • getting valid permalinks to use in lookup calls
  • reducing bad downstream requests

Suggested first request:

curl 'https://api.crunchbase.com/api/v4/autocompletes?query=stripe' \
  -H 'X-cb-user-key: YOUR_KEY'

What it realistically returns for you to work with:

  • suggested entity matches
  • canonical identifiers
  • a permalink you can feed into Organization Entity Lookup

This is not the glamorous part of the API. It is the part that keeps the rest of your workflow from turning into garbage.
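One unglamorous step that pays off before you even call Autocomplete: normalize the name you send. The suffix-stripping below is my own heuristic, not documented Crunchbase behavior, but it cuts down on CRM noise like “Stripe, Inc.” vs “Stripe”.

```python
import re
import requests

# Trailing legal suffixes worth stripping before querying (my assumption).
SUFFIXES = re.compile(r"\b(inc|llc|ltd|corp|co)\.?$", re.IGNORECASE)

def clean_company_name(raw):
    """Strip trailing legal suffixes, punctuation, and extra whitespace."""
    name = raw.strip().rstrip(".,")
    name = SUFFIXES.sub("", name).strip().rstrip(".,")
    return re.sub(r"\s+", " ", name)

def autocomplete(query, key):
    """Send the cleaned query to the Autocomplete endpoint."""
    r = requests.get("https://api.crunchbase.com/api/v4/autocompletes",
                     params={"query": clean_company_name(query)},
                     headers={"X-cb-user-key": key}, timeout=30)
    r.raise_for_status()
    return r.json().get("entities", [])
```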

Organization Entity Lookup

Endpoint: GET /api/v4/entities/organizations/{permalink}

This is the workhorse.

Once you know the organization permalink, this endpoint gives you the company record itself. Public docs and examples show properties shaped around:

  • identifier
  • permalink
  • uuid
  • short_description
  • website
  • category references
  • location references
  • founded_on
  • rank fields like rank_org_company

Trimmed sample response shape based on Crunchbase docs/examples:

{
  "properties": {
    "identifier": {
      "value": "Crunchbase",
      "permalink": "crunchbase",
      "entity_def_id": "organization"
    },
    "short_description": "Crunchbase is a prospecting platform...",
    "rank_org_company": 1234
  }
}

Suggested first request:

curl 'https://api.crunchbase.com/api/v4/entities/organizations/crunchbase' \
  -H 'X-cb-user-key: YOUR_KEY'

Use it for:

  • CRM enrichment
  • company profile pages
  • pre-call research
  • startup research
  • turning a company name into a structured record your code can actually use

Organization Search

Endpoint: POST /api/v4/searches/organizations

This is the list builder.

Use it when you do not know the exact company yet and instead want a filtered set of organizations.

Crunchbase docs show this endpoint family supporting JSON bodies with things like:

  • field_ids
  • query
  • filters on categories
  • location filters
  • funding-related filters
  • employee range enums like num_employees_enum

The docs also say the default page size is 50, and the max is 1000. That is a meaningful implementation detail if you are doing backfills or bulk list work.

Suggested first request:

import requests

r = requests.post(
  'https://api.crunchbase.com/api/v4/searches/organizations',
  headers={'X-cb-user-key': 'YOUR_KEY'},
  json={
    'field_ids': ['identifier', 'short_description']
  }
)
print(r.json())

Trimmed sample response shape based on docs/examples:

{
  "entities": [
    {
      "identifier": {
        "value": "Stripe",
        "permalink": "stripe"
      },
      "short_description": "Stripe is a financial infrastructure platform."
    },
    {
      "identifier": {
        "value": "Adyen",
        "permalink": "adyen"
      },
      "short_description": "Adyen is a payments technology company."
    }
  ]
}

Use it for:

  • prospect list seeding
  • market mapping
  • startup discovery
  • filtering for companies that match a thesis before you do one-by-one lookups

This is where Crunchbase becomes more than “company page in JSON.”
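When you page through larger result sets, public v4 examples show an after_id cursor rather than classic page numbers. The sketch below assumes each returned entity carries a top-level uuid, which the examples suggest; verify against your own responses before building on it.

```python
import requests

SEARCH_URL = "https://api.crunchbase.com/api/v4/searches/organizations"

def next_cursor(entities):
    """uuid of the last entity on a page, or None when the page is empty."""
    return entities[-1]["uuid"] if entities else None

def search_pages(body, key, max_pages=5):
    """Yield successive pages of search results via the after_id cursor."""
    body = dict(body)  # don't mutate the caller's query
    for _ in range(max_pages):
        r = requests.post(SEARCH_URL, headers={"X-cb-user-key": key},
                          json=body, timeout=30)
        r.raise_for_status()
        entities = r.json().get("entities", [])
        if not entities:
            return
        yield entities
        body["after_id"] = next_cursor(entities)
```

Usage: `for page in search_pages({"field_ids": ["identifier"], "limit": 50}, "YOUR_KEY"): ...`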

Cards and related entities

Once you have an organization, the broader API can return cards for related data. Public examples and docs reference cards around:

  • raised funding rounds
  • investor participation
  • acquisitions
  • founder or leadership context

Example:

curl 'https://api.crunchbase.com/api/v4/entities/organizations/openai?card_ids=raised_funding_rounds' \
  -H 'X-cb-user-key: YOUR_KEY'

Trimmed sample response shape based on docs/examples:

{
  "cards": {
    "raised_funding_rounds": [
      {
        "identifier": {"value": "Series C"},
        "announced_on": "2023-01-01",
        "money_raised": {"value_usd": 1000000000}
      }
    ]
  }
}

This card layer is the reason Crunchbase still matters to investors, corp dev teams, and anyone doing private-market research at scale.

What Crunchbase Basic actually exposes

Now the line that matters.

Crunchbase’s Basic API docs say Basic gives you access to a limited set of Organization data fields through exactly 3 endpoints:

  1. Organization Search
  2. Organization Entity Lookup
  3. Autocomplete

That is enough to test lookup and search workflows.

It is not the same thing as broad Crunchbase API access.

If you remember only one sentence from this whole guide, remember that one.

What data points you can realistically pull

Let me translate the endpoint families into actual engineering jobs.

Company identity data

Primarily from:

  • Autocomplete
  • Organization Entity Lookup

Useful fields include:

  • company name
  • permalink
  • UUID
  • website / domain
  • short description
  • rank fields

This is your anchor data. Every other workflow starts here.

Firmographic and company context

Primarily from:

  • Organization Entity Lookup
  • Organization Search

Useful fields and filters include:

  • location
  • categories
  • founded date
  • employee range enums
  • organization profile context
  • funding-related filters in search

This is what you use for segmentation, market mapping, and basic CRM enrichment.

Funding and investor data

Primarily from:

  • Organization Entity Lookup with funding-related cards
  • broader full-API entity relationships when licensed

Useful data includes:

  • funding rounds
  • announced dates
  • amount raised
  • investor relationships
  • acquisitions context
  • private-market history

This is the actual draw for most serious Crunchbase API buyers.

People and leadership context

Primarily from:

  • people-related cards and entity relationships in fuller licensed access

Useful for:

  • founder context
  • executive context
  • understanding who sits around the company record

But let me be precise here: this is not the same thing as a GTM-ready people enrichment or contact data stack. It is context, not contactability.

Search and discovery support

Primarily from:

  • Organization Search
  • Autocomplete

Use these when your question is not “give me company X” but “show me companies that fit this shape.”

How access works, before you waste time coding

This section belongs here, after the endpoint breakdown, because now the access model actually makes sense.

You first need to know what the API surface looks like. Then you need to know which slice you can touch.

Crunchbase Basic

Crunchbase support docs say Basic users can provision an API key and get started.

That sounds more expansive than it really is, because the Basic API docs also say Basic is limited to the 3 endpoints above.

So yes, Basic is real API access.

No, it is not broad API access.

Crunchbase Basic API page showing the 3-endpoint limit

Full API access

Crunchbase says this directly on the Using the API page:

“Access to the Full API Requires an Enterprise or Applications License.”

That line matters more than most comparison pages admit.

If full access starts with a sales conversation, that is not just a pricing detail. It is a product choice.

It means Crunchbase has decided it would rather control the commercial motion than optimize for pure self-serve developer adoption.

That might be fine for your team. It might also annoy the hell out of you.

Crunchbase docs showing full API license requirement and 200 calls/min rate limit

The $99 question

This is where people get burned.

From the public docs I reviewed, I cannot say that a $99 Crunchbase plan gives you the full API. The documentation points the other way. Full API access requires Enterprise or Applications licensing.

So if your internal question is:

“Can I just pay for a normal Crunchbase plan and start building on the full API?”

The honest answer from public evidence is: do not assume that.

“What happened to Crunchbase? ... Heat points?? Growth score?? I want my visitors per month and average monthly visitors...”
— u/Practical-Drawing-90 in r/startups, from a thread titled “New crunchbase is trash (i will not promote)”

Different complaint, same buyer emotion. People hate discovering the real boundary after they have already paid.

Auth is the easy part

To be fair to Crunchbase, authentication is not the pain point.

curl 'https://api.crunchbase.com/api/v4/entities/organizations/crunchbase' \
  -H 'X-cb-user-key: YOUR_KEY'

You can pass the key either as:

  • user_key in the URL
  • X-cb-user-key in the header

That part is clean.
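Both styles in Python, for completeness. The `auth_bits` helper is just a convenience of mine for switching between them.

```python
import requests

URL = "https://api.crunchbase.com/api/v4/entities/organizations/crunchbase"

def auth_bits(key, in_header=True):
    """Return (headers, params) for either documented auth style."""
    if in_header:
        return {"X-cb-user-key": key}, {}
    return {}, {"user_key": key}

def fetch(key, in_header=True):
    # Header style keeps the key out of access logs and shared URLs.
    headers, params = auth_bits(key, in_header)
    return requests.get(URL, headers=headers, params=params, timeout=30)
```

Prefer the header style in shared codebases: query-string keys tend to leak into logs and pasted URLs.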

Crunchbase API pricing

This section is short because the public information is short.

What is public

From public docs and support pages, we can verify:

  • Basic access exists
  • Basic users can provision an API key
  • full API requires Enterprise or Applications licensing
  • documented rate limit is 200 calls/minute
  • API customers get live company data at scale with unlimited exports
  • Business plan customers get 5K data exports per month

That contrast matters. It tells you Crunchbase treats API customers and UI customers as different commercial tracks.

Crunchbase support doc showing Basic key provisioning and API positioning

What is not public

There is no clean public self-serve full API price in the source material here.

So if you need full API breadth, the public path points you toward a licensing conversation.

I am blunt about this because I have lived it before. Opaque pricing is not a small annoyance. It changes how fast a team can test, budget, and ship.

Why this matters by buyer type

Founder or builder: this slows cheap experimentation.

RevOps team: this makes budgeting messy.

Enterprise buyer: this is annoying, but normal.

Customer-facing product team: this is where rights matter as much as price.

Most teams think they are buying rows. They are actually buying permission.

PAYG contrast

This is where NinjaPear Pricing is easier to reason about.

Publicly visible right now:

  • 3-day free trial
  • 10 credits included
  • PAYG credits valid for 18 months
  • Customer Listing pricing of 1 credit/request + 2 credits/customer returned

On public product pages, NinjaPear also shows:

  • Company Details: 2 credits per call
  • Employee Count: 2 credits per call
  • Company Updates: 2 credits per call
  • Company Funding: 2+ credits per call

NinjaPear pricing page with free trial and credit pricing

If you are a developer who wants to run real calls before talking to a human, that difference matters.

Why the endpoint planner is at the top

The planner is there because most people searching crunchbase api are not looking for a taxonomy lecture first.

They are trying to answer one immediate engineering question:

  • what endpoint chain solves my use case?
  • can I do it with Basic?
  • where do I hit a licensing wall?
  • what should my first request be?

So the flow of this guide is deliberate:

  1. tell you the main gotcha fast
  2. show you the practical planner
  3. break down what the API is
  4. map it into real workflows

That is how I would want to read this if I were evaluating an API late at night with a half-working script open in another tab.

Reverse engineering Crunchbase’s internal GraphQL and dashboard API

Short warning first: using Crunchbase’s internal dashboard requests instead of the sanctioned API can breach Crunchbase’s terms of service. If you do it anyway, do it knowing exactly what you are doing.

Now the useful part.

There is a whole cottage industry built around pulling data out of Crunchbase search pages without buying the proper API license. You can see the breadcrumbs publicly:

  • Apify actors that ask for a logged-in Crunchbase search URL plus browser cookies
  • scraper repos that automate Crunchbase search results and company pages
  • vendors selling “Crunchbase scraper APIs” instead of telling you to use the official REST API

That is the tell.

The common pattern is not magic. It is usually one of these two:

  1. Replay the internal search/dashboard requests from an authenticated browser session
  2. Drive the logged-in browser itself and extract the JSON or rendered table rows

The cleaner of those two, from an engineering perspective, is usually the first.

What other people are doing in practice

One public Apify actor for Crunchbase search results literally says the input is:

  • a Crunchbase search URL with filters and columns already selected
  • session cookies exported from your browser

Another Crunchbase scraping repo from Bright Data positions the product as a way around the usual scraping pain: blocked IPs, CAPTCHAs, and dynamic frontend structure.

That tells you a lot about the real-world technique stack:

  • logged-in session reuse
  • cookie export
  • search-result URL capture
  • replaying the same requests the dashboard makes
  • falling back to browser automation when replay breaks

That is the game.

How to find the internal GraphQL or JSON requests

If you want to inspect what the Crunchbase frontend is doing, the usual workflow is:

  1. Log in to Crunchbase in Chrome.
  2. Open DevTools.
  3. Go to the Network tab.
  4. Filter by fetch or XHR.
  5. Open a Crunchbase search page, or click around an organization page.
  6. Look for requests carrying JSON payloads, GraphQL-style operationName fields, or big filter objects.
  7. Right-click the interesting request and copy it as:
     • Copy as cURL, or
     • Copy all as HAR

That is the fastest way to see what the frontend is actually sending.
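If you saved a HAR, you can grep it programmatically instead of eyeballing the Network tab. A HAR file is plain JSON; the filtering heuristics below (host match, JSON mime type, `operationName` in the body) are my own, so tune them to what you actually see.

```python
import json

def scan_har(har, host="crunchbase.com"):
    """Return (method, url) for requests that look like JSON or GraphQL."""
    hits = []
    for entry in har["log"]["entries"]:
        req = entry["request"]
        if host not in req["url"]:
            continue
        mime = entry["response"]["content"].get("mimeType", "") or ""
        body = (req.get("postData") or {}).get("text", "") or ""
        if "json" in mime or "operationName" in body:
            hits.append((req["method"], req["url"]))
    return hits

def interesting_requests(har_path, host="crunchbase.com"):
    """Load a DevTools HAR export and scan it."""
    with open(har_path) as f:
        return scan_har(json.load(f), host)
```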

In a lot of SaaS apps, the requests you will see look like one of these shapes:

  • POST request to a single GraphQL endpoint with operationName, variables, and maybe extensions
  • POST request to an internal search endpoint with filters and selected fields in JSON
  • GET request with a giant encoded filter blob in the query string

The exact path can change. That is the curse of building against internal endpoints.

How to capture the auth you need

This is the part most fluffy articles skip.

The internal request usually depends on one or more of these:

  • session cookies
  • CSRF or anti-forgery headers
  • request headers like x-requested-with
  • a bearer token stored in browser storage
  • a persisted GraphQL query hash

You get those from the browser session itself.

The simplest path is still the dumb one:

  • log in normally
  • copy the working request from DevTools
  • replay it before the session expires

If you want to keep it scriptable, export cookies from the browser. That is exactly what public Apify workflows tell users to do.
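If your exporter writes the common Netscape cookies.txt format, Python's standard library can load it directly. A sketch; the file path and cookie names will depend on your own export.

```python
import requests
from http.cookiejar import MozillaCookieJar

def session_from_cookies(cookie_file):
    """Build a requests Session from a Netscape-format cookies.txt export.

    Browser cookie-export extensions typically write this format.
    """
    jar = MozillaCookieJar(cookie_file)
    jar.load(ignore_discard=True, ignore_expires=True)
    s = requests.Session()
    s.cookies = jar
    s.headers.update({"User-Agent": "Mozilla/5.0"})
    return s
```

Then `session_from_cookies("cookies.txt")` gives you a session that sends the same cookies your browser did, until they expire.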

Sample script: replay a copied GraphQL request

This is the shape engineers usually use once they have copied a working request from DevTools.

You will need to replace:

  • YOUR_SESSION_COOKIES
  • YOUR_CSRF_TOKEN
  • YOUR_OPERATION_NAME
  • YOUR_REAL_VARIABLES
  • YOUR_REAL_QUERY_OR_HASH

with the values you captured from a real logged-in browser request.

import json
import requests

session = requests.Session()

headers = {
    'User-Agent': 'Mozilla/5.0',
    'Accept': 'application/json',
    'Content-Type': 'application/json',
    'Origin': 'https://www.crunchbase.com',
    'Referer': 'https://www.crunchbase.com/',
    'x-csrf-token': 'YOUR_CSRF_TOKEN',
}

cookies = {
    # paste the important cookies you copied from DevTools
    'session': 'YOUR_SESSION_COOKIES',
}

payload = {
    'operationName': 'YOUR_OPERATION_NAME',
    'variables': {
        # copy the exact filter/sort/paging object from DevTools
        'page': 1,
        'limit': 50,
        'query': 'fintech',
        'filters': []
    },
    # Depending on the request, this may be a raw GraphQL query string
    # or an 'extensions.persistedQuery' object.
    'query': 'YOUR_REAL_QUERY_OR_HASH'
}

r = session.post(
    'https://www.crunchbase.com/graphql',
    headers=headers,
    cookies=cookies,
    data=json.dumps(payload),
    timeout=30,
)

print(r.status_code)
print(r.text[:2000])

That is the general pattern. Sometimes the endpoint is not literally /graphql. Sometimes the request uses extensions.persistedQuery instead of a raw query string. Sometimes the auth header is more important than the cookie. That is why you start from Copy as cURL instead of guessing.

Sample script: start from “Copy as cURL” and convert it into Python

This is honestly the fastest approach.

  1. In DevTools, right-click the working request.
  2. Choose Copy as cURL.
  3. Paste it somewhere safe.
  4. Turn it into Python manually, or with a converter.

A typical copied request usually contains everything you need:

  • the real URL
  • the real cookie string
  • the real headers
  • the real payload shape

Example skeleton:

import requests

url = 'https://www.crunchbase.com/YOUR_REAL_INTERNAL_ENDPOINT'

headers = {
    'accept': 'application/json',
    'content-type': 'application/json',
    'origin': 'https://www.crunchbase.com',
    'referer': 'https://www.crunchbase.com/discover/organization.companies',
    'user-agent': 'Mozilla/5.0',
    'x-csrf-token': 'PASTE_FROM_CURL',
}

cookies = {
    'PASTE_COOKIE_NAME': 'PASTE_COOKIE_VALUE',
}

payload = {
    # paste the exact JSON body from the copied request
}

r = requests.post(url, headers=headers, cookies=cookies, json=payload, timeout=30)
print(r.status_code)
print(r.json())

It is boring. Good. Boring is what works.

Sample script: export search results page by page

Once you have one working internal search request, paginating it is usually straightforward because the request body often includes page or offset controls.

import requests
import time

session = requests.Session()

headers = {
    'accept': 'application/json',
    'content-type': 'application/json',
    'origin': 'https://www.crunchbase.com',
    'referer': 'https://www.crunchbase.com/discover/organization.companies',
    'user-agent': 'Mozilla/5.0',
    'x-csrf-token': 'YOUR_CSRF_TOKEN',
}

cookies = {
    'session': 'YOUR_SESSION_COOKIE',
}

base_url = 'https://www.crunchbase.com/YOUR_REAL_INTERNAL_SEARCH_ENDPOINT'
results = []

for page in range(1, 6):
    payload = {
        'operationName': 'YOUR_OPERATION_NAME',
        'variables': {
            'page': page,
            'limit': 50,
            'filters': [
                # paste your real filter objects from DevTools
            ],
            'field_ids': ['identifier', 'short_description']
        },
        'query': 'YOUR_REAL_QUERY_OR_HASH'
    }

    r = session.post(base_url, headers=headers, cookies=cookies, json=payload, timeout=30)
    r.raise_for_status()
    data = r.json()
    results.append(data)
    print(f'fetched page {page}')
    time.sleep(2)

print(f'total pages fetched: {len(results)}')

Again, the important part is not my placeholder values. The important part is the shape: copy the working request, then vary only the paging variables.

Sample script: fall back to browser automation

If replaying the internal call gets blocked, the next move people use is browser automation on a logged-in session.

That is exactly why public Crunchbase scraping repos lean on Selenium or remote browsers.

A minimal shape looks like this:

from selenium import webdriver
from selenium.webdriver.common.by import By
import json
import time

driver = webdriver.Chrome()
driver.get('https://www.crunchbase.com/login')

print('Log in manually, then press Enter here...')
input()

search_url = 'https://www.crunchbase.com/discover/organization.companies'
driver.get(search_url)
time.sleep(5)

rows = driver.find_elements(By.CSS_SELECTOR, 'grid-row, [role="row"]')
print(f'rows found: {len(rows)}')

for row in rows[:5]:
    print(row.text)

driver.quit()

This is clunkier than replaying JSON, but sometimes it is the faster path when the frontend keeps changing its internal request contract.

What usually breaks

The usual failure modes are painfully predictable:

  • session cookies expire
  • CSRF token rotates
  • persisted query hash changes
  • GraphQL schema or internal field names change
  • Cloudflare or anti-bot verification kicks in
  • the table renders in a different client-side component

This is why people keep paying scraper vendors. Not because the requests are mysterious. Because the maintenance is annoying as hell.
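One cheap way to make that maintenance less painful is detecting a dead session early, instead of parsing garbage. The signals below are my heuristics, not any kind of contract: expired cookies tend to produce 401/403, or an HTML login page where you expected JSON.

```python
def session_looks_dead(status, content_type="", body=""):
    """Heuristic check that a replayed internal request hit an auth wall."""
    if status in (401, 403):
        return True
    return "text/html" in content_type and "login" in body.lower()
```

Usage: `if session_looks_dead(r.status_code, r.headers.get("content-type", ""), r.text):` stop the job and re-export cookies instead of writing HTML into your dataset.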

My blunt take

If your real goal is export data from a $99 plan by piggybacking on the dashboard, the techniques above are the playbook people use.

But understand what you are buying with that choice:

  • no stability guarantees
  • no contractual safety
  • no clean scaling path
  • lots of fiddly session maintenance

That is why I still think the first decision is not “how do I hack the GraphQL.”

It is:

  • do I actually need Crunchbase specifically?
  • do I need sanctioned investor-grade data?
  • or am I really just trying to get a practical GTM workflow moving?

Because those are different problems.

Use case: CRM enrichment

The actual engineering problem

Your CRM has company names and maybe domains. Reps need enough context to stop guessing.

The minimum viable Crunchbase chain is:

  • Autocomplete
  • Organization Entity Lookup
  • optional funding card if licensed

That gives you identity plus a usable company profile.

Short code snippet

import requests

r = requests.get(
  'https://api.crunchbase.com/api/v4/entities/organizations/stripe',
  headers={'X-cb-user-key': 'YOUR_KEY'}
)
print(r.json())

Sample response

Trimmed sample response shape based on docs/examples:

{
  "properties": {
    "identifier": {
      "value": "Stripe",
      "permalink": "stripe"
    },
    "short_description": "Stripe is a financial infrastructure platform.",
    "website": {"value": "https://stripe.com"},
    "founded_on": {"value": "2010-01-01"}
  }
}
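To run that over a whole CRM list, budget two calls per company (autocomplete plus lookup) against the documented 200 calls/minute. A sketch with my own helper names; pacing math is the only part worth keeping exact.

```python
import time
import requests

API = "https://api.crunchbase.com/api/v4"

def safe_delay(calls_per_name=2, limit_per_minute=200):
    """Seconds to sleep per company to stay under the rate limit."""
    return 60.0 * calls_per_name / limit_per_minute

def enrich_names(names, key):
    """name -> organization properties (or None when autocomplete misses)."""
    headers = {"X-cb-user-key": key}
    out = {}
    for name in names:
        r = requests.get(f"{API}/autocompletes", params={"query": name},
                         headers=headers, timeout=30)
        r.raise_for_status()
        entities = r.json().get("entities", [])
        if not entities:
            out[name] = None
            continue
        permalink = entities[0]["identifier"]["permalink"]
        r = requests.get(f"{API}/entities/organizations/{permalink}",
                         headers=headers, timeout=30)
        r.raise_for_status()
        out[name] = r.json().get("properties", {})
        time.sleep(safe_delay())
    return out
```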

Where it falls short

This is static enrichment. It does not tell you:

  • what changed this week
  • who buys from them
  • who they compete with
  • whether sales should care right now

That is not a flaw. It is just a different job.

Closest NinjaPear alternative

For this workflow, the closest NinjaPear mapping is:

  • Company Details
  • Employee Count
  • Company Updates if freshness matters
  • NinjaPear for Claude if your team wants enrichment inside AI workflows

Use case: prospect list building

The actual engineering problem

You want a list of companies that match a thesis. Not a bag of random logos.

The Crunchbase chain is:

  • Organization Search
  • Organization Entity Lookup
  • optional people or funding cards if licensed

Use Search to get the candidate set. Use Entity Lookup to flesh out the winners.

Short code snippet

import requests

r = requests.post(
  'https://api.crunchbase.com/api/v4/searches/organizations',
  headers={'X-cb-user-key': 'YOUR_KEY'},
  json={
    'field_ids': ['identifier', 'short_description']
  }
)
print(r.json())
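The natural next step is feeding the winners into Entity Lookup. This sketch tolerates both response shapes you may encounter (identifier at the top of each entity, as in the trimmed sample above, or nested under properties); the `permalinks` helper is mine.

```python
import requests

API = "https://api.crunchbase.com/api/v4"

def permalinks(search_response):
    """Pull permalinks from a search response, wherever identifier sits."""
    out = []
    for e in search_response.get("entities", []):
        ident = e.get("identifier") or e.get("properties", {}).get("identifier", {})
        if ident.get("permalink"):
            out.append(ident["permalink"])
    return out

def shortlist_details(search_response, key, limit=10):
    """Full Entity Lookup for the first few search hits."""
    headers = {"X-cb-user-key": key}
    out = []
    for p in permalinks(search_response)[:limit]:
        r = requests.get(f"{API}/entities/organizations/{p}",
                         headers=headers, timeout=30)
        r.raise_for_status()
        out.append(r.json())
    return out
```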

Where it falls short

A prospect list is not a pipeline list.

You still need:

  • timing
  • fit
  • relationship context
  • some reason for now

Crunchbase helps you build the universe. It does not automatically turn that universe into an outbound motion that prints meetings.

Closest NinjaPear alternative

This is where Customer API and Competitor API feel more GTM-native:

  • Customer API for finding who buys from a company or category player
  • Competitor API for adjacency and similar-company discovery
  • Company Details for enrichment after discovery

Use case: investor research

The actual engineering problem

You need startup, round, investor, and acquisition context fast.

This is where Crunchbase is strongest.

The typical chain is:

  • Organization Entity Lookup
  • funding-related cards
  • investor-related cards
  • acquisition cards where relevant

Short code snippet

curl 'https://api.crunchbase.com/api/v4/entities/organizations/openai?card_ids=raised_funding_rounds' \
  -H 'X-cb-user-key: YOUR_KEY'

Sample response

Trimmed sample response shape based on docs/examples:

{
  "cards": {
    "raised_funding_rounds": [
      {
        "identifier": {"value": "Series C"},
        "announced_on": "2023-01-01",
        "money_raised": {"value_usd": 1000000000}
      }
    ]
  }
}

My honest take

Crunchbase wins here.

I am not going to do the cowardly vendor-comparison thing where I pretend every tool is equally good at every job. They are not.

Where NinjaPear fits instead

Use NinjaPear when the question changes from:

  • “who raised?” to
  • “what changed?”
  • “who are their customers?”
  • “who competes with them?”
  • “should sales care now?”

Closest mappings:

  • Company Funding
  • Company Updates
  • Customer API
  • Competitor API

Use case: account research

The actual engineering problem

A rep has a meeting tomorrow and wants context that is useful, not decorative.

The Crunchbase chain is:

  • Autocomplete
  • Organization Entity Lookup
  • optional people or funding cards if licensed

That gives you a decent pre-call brief on the company.

What it misses

Usually the actual trigger to act.

That might be:

  • a pricing page change
  • a product launch post
  • expansion hiring
  • customer adjacency
  • competitor overlap

Those are not really Crunchbase’s center of gravity from the public docs.

Closest NinjaPear alternative

Stronger fit here: pulling customer adjacency with the Customer Listing endpoint.

Short NinjaPear snippet

curl -G 'https://nubela.co/api/v1/customer/listing' \
  --data-urlencode 'website=https://stripe.com' \
  -H 'Authorization: Bearer YOUR_API_KEY'

Sample response

Trimmed sample response shape based on public docs:

{
  "company": "Stripe",
  "customers": [
    {
      "company_name": "Shopify",
      "website": "shopify.com",
      "relationship_type": "customer"
    },
    {
      "company_name": "Lyft",
      "website": "lyft.com",
      "relationship_type": "customer"
    }
  ]
}

Use case: watchlists and monitoring

The actual engineering problem

You do not want a static company record. You want to know when the company moves.

This is where a lot of developers searching crunchbase api are actually shopping the wrong category.

From the public Crunchbase materials I reviewed, the API is not positioned as a blog, X, and website monitoring product. You can hack together repeated search and repeated lookup workflows, sure. But that is not the same thing as a first-class monitoring system.

Why this matters

For GTM, timing beats trivia.

I care more about “they changed pricing yesterday” than “their structured company record still exists.”

Closest NinjaPear alternative

This is where Company Monitor is the better fit:

  • Monitor API
  • Company Updates
  • AI-filtered changes across blog, website, and X

Short workflow snippet

Create feed in NinjaPear Monitor API -> poll RSS/API -> push meaningful changes to Slack or CRM

Sample output

<item>
  <title>Stripe: New Checkout Experience for Global Payments</title>
  <category>blog</category>
  <pubDate>Thu, 27 Feb 2026 10:00:00 GMT</pubDate>
</item>
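Wiring that into Slack takes only the stdlib XML parser plus one webhook POST. A sketch assuming a standard RSS item shape like the one above; the feed URL and webhook URL are yours to fill in.

```python
import requests
import xml.etree.ElementTree as ET

def parse_items(rss_xml):
    """Extract (title, category, pubDate) tuples from an RSS feed body."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title", ""),
             item.findtext("category", ""),
             item.findtext("pubDate", ""))
            for item in root.iter("item")]

def push_updates(feed_url, webhook_url):
    """Poll the feed once and forward each item to a Slack webhook."""
    feed = requests.get(feed_url, timeout=30)
    feed.raise_for_status()
    for title, category, _ in parse_items(feed.text):
        requests.post(webhook_url, json={"text": f"[{category}] {title}"},
                      timeout=30)
```

Run `push_updates` on a schedule and de-duplicate on title or pubDate before posting, or your channel will drown.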

Use case: customer-facing products

The actual engineering problem

You want to put company data inside your own product.

This is where data buying turns into rights buying.

Crunchbase explicitly separates:

  • Data Enrichment, for internal workflows
  • Data Licensing, for customer-facing products

That distinction is healthy. It is also where the work starts.

The catch

Before you ship anything, ask:

  • can I display this data to my users?
  • do I need attribution?
  • do links need to be visible and spiderable?
  • can I redistribute raw data?
  • what happens if the contract ends?

If your PM has not asked those questions yet, you are not at the build stage. You are still at the “pretending this is just JSON” stage.

Closest NinjaPear alternative

NinjaPear is cleaner here when the buyer wants:

  • self-serve testing first
  • PAYG motion
  • AI-agent docs
  • customer, competitor, employee, and monitoring data in one stack

Still, read the terms. Always.

Workflow pricing cards

I hate fake pricing tables, so here is the honest version.

CRM enrichment

| Field | Crunchbase | NinjaPear |
| --- | --- | --- |
| Problem | Enrich 500 CRM accounts with company context | Enrich 500 CRM accounts with company context |
| Crunchbase endpoints needed | Autocomplete → Organization Entity Lookup | N/A |
| Crunchbase access motion | Basic may cover starter org enrichment | N/A |
| Crunchbase public price? | Basic exists; public full API price not shown | N/A |
| NinjaPear endpoints needed | N/A | Company Details, optional Employee Count |
| NinjaPear known credit math? | N/A | Company Details shows 2 credits/call |
| Best practical takeaway | Start with Basic if you only need basic org fields. If you need broader cards or product embedding, expect a licensing conversation. | Good fit if you want to test immediately and care about fresh company details. |

Prospect list building

| Field | Crunchbase | NinjaPear |
| --- | --- | --- |
| Problem | Build a list of target companies | Build a target list from customer or competitor adjacency |
| Crunchbase endpoints needed | Organization Search → Organization Entity Lookup | N/A |
| Crunchbase access motion | Basic for starter workflow | N/A |
| Crunchbase public price? | Full API price not public | N/A |
| NinjaPear endpoints needed | N/A | Customer Listing / Competitor API / Company Details |
| NinjaPear known credit math? | N/A | 1 credit/request + 2 credits/customer returned for Customer Listing |
| Best practical takeaway | Great for startup discovery by firmographic or funding filters. | Better when the list needs to be commercially actionable, not just broad. |

Investor research

| Field | Crunchbase | NinjaPear |
| --- | --- | --- |
| Problem | Research funding history and investors | Pull funding plus adjacent company context |
| Crunchbase endpoints needed | Org Lookup → funding cards → investor cards | N/A |
| Crunchbase access motion | Full API likely for real depth | N/A |
| Crunchbase public price? | Custom / sales quote required | N/A |
| NinjaPear endpoints needed | N/A | Company Funding |
| NinjaPear known credit math? | N/A | Partial; public page says 2+ credits/call |
| Best practical takeaway | Crunchbase wins this workflow. | Useful adjacent layer, not a full replacement. |

Account research

| Field | Crunchbase | NinjaPear |
| --- | --- | --- |
| Problem | Prep for a meeting | Prep for a meeting with GTM context |
| Crunchbase endpoints needed | Autocomplete → Org Lookup | N/A |
| Crunchbase access motion | Basic can start | N/A |
| Crunchbase public price? | Full API price not public | N/A |
| NinjaPear endpoints needed | N/A | Customer Listing, Company Updates, Employee API |
| NinjaPear known credit math? | N/A | Partial |
| Best practical takeaway | Good baseline profile lookup. | Better if the rep needs what changed and why now. |

Watchlists and monitoring

| Field | Crunchbase | NinjaPear |
| --- | --- | --- |
| Problem | Watch a set of target companies | Monitor meaningful company changes |
| Crunchbase endpoints needed | Repeated search / repeated lookup | N/A |
| Crunchbase access motion | Not clearly positioned as monitoring in public docs | N/A |
| Crunchbase public price? | No monitoring pricing path visible in reviewed docs | N/A |
| NinjaPear endpoints needed | N/A | Monitor API / Company Updates |
| NinjaPear known credit math? | N/A | Partial, plus blog scenario examples |
| Best practical takeaway | This is not Crunchbase's strongest job, going by public docs. | Better fit. |
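
The "repeated search / repeated lookup" motion boils down to polling and diffing. Here is a vendor-agnostic sketch of one polling pass; `fetch_company` is a hypothetical stand-in for whichever lookup endpoint you actually call, not a real function from either API.

```python
def diff_fields(old: dict, new: dict) -> dict:
    """Return {field: (old_value, new_value)} for every field that changed."""
    changed = {}
    for key in old.keys() | new.keys():
        if old.get(key) != new.get(key):
            changed[key] = (old.get(key), new.get(key))
    return changed

def watch(company_ids, fetch_company, snapshots) -> dict:
    """One polling pass: fetch each company and diff against the last snapshot.

    fetch_company is a hypothetical callable wrapping your lookup endpoint;
    snapshots is a dict you persist between runs.
    """
    alerts = {}
    for cid in company_ids:
        record = fetch_company(cid)
        delta = diff_fields(snapshots.get(cid, {}), record)
        if delta:
            alerts[cid] = delta
        snapshots[cid] = record
    return alerts
```

Run `watch()` on a schedule and route non-empty alerts wherever your team lives. Note that every poll costs a lookup, which is exactly why a purpose-built monitoring endpoint can be cheaper than brute-force re-fetching.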

Customer-facing products

| Field | Crunchbase | NinjaPear |
| --- | --- | --- |
| Problem | Put company data in your app | Put company intelligence in your app |
| Crunchbase endpoints needed | Search / Lookup / cards / licensing review | N/A |
| Crunchbase access motion | License-required territory | N/A |
| Crunchbase public price? | Custom / sales quote required | N/A |
| NinjaPear endpoints needed | N/A | Depends on endpoint mix |
| NinjaPear known credit math? | N/A | Partial |
| Best practical takeaway | This is a rights problem as much as a data problem. | Better for quick prototyping, but always review terms before shipping. |

A numeric NinjaPear example we can actually do

Finding customers for 100 target vendors

Known from public pricing:

  • base requests: 100 x 1 credit = 100 credits
  • if average returned customers = 10/company, returned records = 100 x 10 x 2 credits = 2,000 credits
  • total = 2,100 credits

That 10/company number is an assumption for illustration. I am labeling it because bullshit pricing math is how bad software gets bought.
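
That math is simple enough to encode, and encoding it keeps the assumption visible. The 1 credit/request and 2 credits/returned-customer figures are the public pricing quoted above; `avg_customers` is the labeled assumption you should replace with your own data.

```python
def customer_listing_cost(n_vendors: int, avg_customers: float,
                          credits_per_request: int = 1,
                          credits_per_customer: int = 2) -> dict:
    """Estimate Customer Listing credit spend.

    avg_customers is an assumption for illustration, not a published number.
    """
    base = n_vendors * credits_per_request            # one request per vendor
    returned = int(n_vendors * avg_customers * credits_per_customer)
    return {"base": base, "returned": returned, "total": base + returned}
```

`customer_listing_cost(100, 10)` reproduces the example above: 100 base + 2,000 returned = 2,100 credits. Rerun it with your real average before you budget anything.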

Rate limits and gotchas

200 calls per minute

Crunchbase docs say 200 calls per minute.

That is fine for:

  • individual lookups
  • modest enrichment jobs
  • basic search flows

It gets more annoying for:

  • large backfills
  • multi-tenant apps
  • workflows that fan out across cards and related entities
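
If a backfill or fan-out job does hit the 200 calls/minute ceiling, a minimal client-side throttle keeps you polite. This is a sketch under the stated limit only; production code should also back off on HTTP 429 responses and honor any Retry-After header the API sends.

```python
import time

class MinuteThrottle:
    """Space calls evenly so a loop never exceeds calls_per_minute."""

    def __init__(self, calls_per_minute: int = 200):
        self.interval = 60.0 / calls_per_minute  # 0.3 s between calls at 200/min
        self._next_slot = 0.0

    def wait(self) -> None:
        """Block until the next call slot opens, then claim it."""
        now = time.monotonic()
        if now < self._next_slot:
            time.sleep(self._next_slot - now)
            now = self._next_slot
        self._next_slot = now + self.interval
```

Usage: create one `MinuteThrottle(200)` and call `throttle.wait()` immediately before each API request in the loop.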

The real bottleneck is often not RPM

Usually it is one of these:

  • access tier limits
  • export restrictions
  • attribution restrictions
  • procurement delays

Most teams think they are buying rows. They are actually buying permission.

Attribution and licensing

This is one of the highest-value sections in this whole guide.

Internal use is the safe default

Crunchbase says:

“We encourage you to leverage the API for your internal business and research needs.”

That is the safe center of gravity.

Attribution rules matter

Crunchbase says attribution must:

  • include a hyperlink to Crunchbase
  • point to the entity page if the content is primarily about one entity
  • be plainly visible to the end user
  • be in close proximity to the data
  • be visible to spiders
  • not include nofollow
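
Those rules translate directly into markup, so it is worth generating the attribution link in one place instead of hand-writing it per view. A sketch only: the `https://www.crunchbase.com/<entity_path>` URL pattern is my assumption, so check it against the entity pages you actually link.

```python
def attribution_link(entity_name: str, entity_path: str) -> str:
    """Build a user-visible, crawlable attribution link per the rules above.

    entity_path is e.g. "organization/acme" (URL pattern is an assumption).
    Deliberately no rel="nofollow" and no hidden-element wrapper: the link
    must be plainly visible to end users and followable by spiders.
    """
    url = f"https://www.crunchbase.com/{entity_path}"
    return f'<a href="{url}">{entity_name} data via Crunchbase</a>'
```

Render the result next to the data it attributes, not in a distant footer, to satisfy the proximity rule.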

That is not a legal footnote. That is product behavior.

Why this becomes a product problem

Attribution affects:

  • UI layout
  • SEO behavior
  • display logic
  • distribution rights
  • how native the data feels inside your app

Again, this is why I keep saying most teams are not buying records. They are buying permission, speed, and workflow fit.

NinjaPear alternatives by endpoint

Here is the cleanest mapping I can give without pretending the products are identical.

| Crunchbase endpoint family | Typical job | NinjaPear alternative | Mapping type |
| --- | --- | --- | --- |
| Organization Lookup | CRM enrichment | Company Details | Direct |
| Organization Search | Prospecting | Competitor API / Customer API / Company Details | Partial |
| People Lookup | Account research | Employee API / Person Profile Endpoint | Partial |
| Funding data | Investor research | Company Funding + Company Details | Partial |
| Acquisition data | Market change tracking | Company Updates / Monitor API | Better adjacent |
| Search | Company discovery | Competitor API / Customer API | Partial |
| Autocomplete | UI helper | Internal resolver on Company Details | Adjacent |
| Insights / Predictions | Prioritization | Monitor API + Updates + Claude workflows | Better adjacent |

A few notes:

  • where the mapping is not 1:1, the table says so: Partial and Adjacent mean exactly that
  • Crunchbase wins cleanly on deep private-market funding context
  • NinjaPear wins when the workflow is GTM, monitoring, customer graphing, competitor mapping, or AI-native use

Who should use what

Use Crunchbase if

  • you care most about funding and investors
  • you do private-market research
  • you are in VC, PE, corp dev, or startup strategy
  • you can live with sales-led access for full API needs

Use NinjaPear if

  • you need PAYG access now
  • you want to test before you commit
  • you care about customers, competitors, employees, and company updates
  • you want to work inside Claude or AI-agent tooling

Use both if

  • you genuinely need funding depth and GTM actionability
  • research and sales both touch the same accounts
  • one team cares about private-market graph data and another cares about timing

That is a valid stack. Just do not buy both out of habit. That is how teams end up with five tools, three stale dashboards, and one very expensive shrug.

Crunchbase vs NinjaPear scorecard

| Factor | Crunchbase | NinjaPear | Winner |
| --- | --- | --- | --- |
| Funding depth | ⭐⭐⭐⭐☆ | ⭐⭐⭐☆☆ | Crunchbase |
| Pricing clarity | ⭐☆☆☆☆ | ⭐⭐⭐⭐☆ | NinjaPear |
| Self-serve access | ⭐☆☆☆☆ | ⭐⭐⭐⭐☆ | NinjaPear |
| Customer graph | ⭐⭐☆☆☆ | ⭐⭐⭐⭐☆ | NinjaPear |
| Competitor graph | ⭐⭐☆☆☆ | ⭐⭐⭐⭐☆ | NinjaPear |
| Live updates | ⭐⭐☆☆☆ | ⭐⭐⭐⭐☆ | NinjaPear |
| AI workflow fit | ⭐⭐⭐☆☆ | ⭐⭐⭐⭐⭐ | NinjaPear |
| Overall score | 2.14/5 | 4.00/5 | NinjaPear |

| Dimension | Crunchbase | NinjaPear | My take |
| --- | --- | --- | --- |
| Funding depth | ⭐⭐⭐⭐☆ | ⭐⭐⭐☆☆ | Crunchbase wins for investor-grade funding context. |
| Pricing clarity | ⭐☆☆☆☆ | ⭐⭐⭐⭐☆ | Full API pricing opacity is a real tax on builders. |
| Self-serve access | ⭐☆☆☆☆ | ⭐⭐⭐⭐☆ | NinjaPear's free trial + PAYG is much easier to test. |
| Customer graph | ⭐⭐☆☆☆ | ⭐⭐⭐⭐☆ | NinjaPear is built for this. |
| Competitor graph | ⭐⭐☆☆☆ | ⭐⭐⭐⭐☆ | Same story. |
| Live updates | ⭐⭐☆☆☆ | ⭐⭐⭐⭐☆ | Monitoring is not Crunchbase's center of gravity. |
| AI workflow fit | ⭐⭐⭐☆☆ | ⭐⭐⭐⭐⭐ | Claude + AI-agent docs is a real advantage. |
| GTM usefulness | ⭐⭐☆☆☆ | ⭐⭐⭐⭐☆ | For pipeline work, NinjaPear is usually the better first call. |
| Developer friction | ⭐⭐⭐☆☆ | ⭐⭐⭐⭐☆ | Crunchbase auth is fine; the access motion is the bigger issue. |

Average across all nine dimensions: Crunchbase 2.22/5, NinjaPear 4.00/5.

Final verdict

Use Crunchbase when funding data is the job. Use NinjaPear when GTM intelligence is the job. Use both only if you genuinely need both layers.

If I were evaluating the Crunchbase API from scratch in 2026, I would do it in this order:

  1. test whether Basic’s 3 endpoints already cover the workflow
  2. confirm whether the real job is funding research or GTM actionability
  3. if it is funding, keep pushing on Crunchbase
  4. if it is GTM, monitoring, customer graphs, competitor graphs, or AI-native workflows, test NinjaPear first because the feedback loop is shorter

And if you want the practical version, not another fluffy API explainer written by someone who clearly never had to ship against a real data contract, clone the repo and start breaking things.

That is usually where the truth shows up.

Alex Meyer
Alex Meyer is a patterns-obsessed growth architect. As Head of GTM at NinjaPear, he leads the charge in building the actual intelligence layer that modern B2B teams use to win.
