
The Ultimate Guide to Professional Social Network X-Ray Searches (2025 Update)

Professional Social Network X-ray searches are a unique approach that allows for much more freedom in querying Professional Social Network's B2B database for interesting companies, jobs, and individuals.

Technically, there are two similar but functionally different methods referred to as "Professional Social Network X-ray searches"--both aimed at improving the process of searching for and extracting data from Professional Social Network search results.

Before I continue with the methods for doing a Professional Social Network X-ray search, here's a free tool you can start using right now: [Free Professional Social Network X-ray search tool](https://nubela.co/proxycurl/demo/Professional Social Network-xray-tool)

You can simply find profiles by searching based on:

  • Job title
  • Location by city
Professional Social Network X-ray search demo by Proxycurl

For example, you can find CTOs based in Atlanta; it's that easy.

We'll be adding more features to the free tool, so subscribe and stay tuned!

Now, back to business.

An overview of some search methods

The first method uses Boolean operators (such as NOT, AND, OR) within the Professional Social Network search engine to improve search results.

For example, let's say you wanted to find and hire a content marketer or content strategist, but you wanted to make sure they were also writers themselves, as well as familiar with SEO standards.

Rather than sifting through thousands of profiles on Professional Social Network to find the right prospect, you could use the following search query to instantly find them, "content marketer" OR "content strategist" AND "writer" AND "SEO":

Quite a bit narrower than the original 385,000 results for "content marketer":

The second method for performing Professional Social Network X-ray searches is googling, but using operators to refine your results. The main difference is that we're using Google's search engine to find indexed Professional Social Network pages instead of Professional Social Network's search engine.

We also don't need to be logged into a Professional Social Network account to do the second method (hence the name "Professional Social Network X-ray search"), so there's no risk of getting banned. Most people prefer this method due to its scalability.

To give you an example, let's revisit our earlier query and look at the Google search results for Professional Social Network content writer and SEO:

Over 22 million results

Wow, that's a lot. Let's use a bit of magic and confine our search results to only professionalsocialnetwork.com:

Down to 6 million results

Better. Now let's refine that search query further with operators: site:professionalsocialnetwork.com/in ("content marketer" OR "content strategist") AND "writer" AND "SEO":

Much better!

That said, in this article, I'll be explaining how to use both methods to your advantage.

First, though, we need to talk a bit more about Boolean operators:

What are Boolean operators?

A Boolean operator is a word or symbol used in logic and search queries to combine or exclude keywords, resulting in more focused and specific search results. It's basically like searching on steroids.

Both Professional Social Network and Google's search engines support the following Boolean operators:

| Operator | Function | Example Usage |
| --- | --- | --- |
| AND | Results must include all specified terms | "developer AND manager" |
| OR | Results may include any of the specified terms | "sales OR marketing" |
| NOT | Excludes results containing the specified term (Google uses "-" for NOT) | "engineer NOT civil" |
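
To make the logic concrete, here's a quick Python sketch (the mini-profiles are made up purely for illustration) showing how OR widens a result set while AND narrows it, using our content marketer example from above:

# Toy illustration of Boolean logic: OR widens results, AND narrows them.
# These profiles are hypothetical; real searches run against Professional
# Social Network's or Google's index, not a Python list.
profiles = [
    {"name": "Alice", "keywords": {"content marketer", "writer", "SEO"}},
    {"name": "Bob", "keywords": {"content strategist", "writer"}},
    {"name": "Carol", "keywords": {"content marketer"}},
]

# "content marketer" OR "content strategist": match either title
title_matches = [p for p in profiles
                 if {"content marketer", "content strategist"} & p["keywords"]]

# ... AND "writer" AND "SEO": every remaining term must also be present
full_matches = [p for p in title_matches
                if {"writer", "SEO"} <= p["keywords"]]

print([p["name"] for p in title_matches])  # ['Alice', 'Bob', 'Carol']
print([p["name"] for p in full_matches])   # ['Alice']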

Quick heads up

We're about to get into further examples of using search operators on Professional Social Network and performing Professional Social Network X-ray searches on Google. Before we do, a quick heads-up: for the examples throughout this article, we'll assume we're working in some type of HR role and need to find a given individual to fill a job opening.

However, don't freak if that doesn't apply to you. If you're in sales or marketing, for example, you can still use these same methods to find prospects for your use case; you'll just slightly modify your search query.

Okay, now let's put this into practice:

How to use search operators on Professional Social Network

On top of just Boolean operators, Professional Social Network also supports the following other search operators:

| Operator | Function | Example |
| --- | --- | --- |
| " " | Search for an exact phrase | "product manager" |
| ( ) | Group terms in complex queries | (senior OR junior) AND engineer |
| first: | Search by first name | first:John |
| last: | Search by last name | last:Doe |

These can further help refine our search results.

First up, the "AND" operator:

"AND" operators

So, let's say we need to find a software engineer who lives in Los Angeles and has experience with both JavaScript and Linux.

Here's how we could use Boolean operators to help us here, "software engineer" AND "los angeles" AND "javascript" AND "linux":

That returns 374 results for profiles that match all of our search criteria. Here's the first profile:

The prospect fits our JavaScript requirement

The prospect also fits our Linux requirement

"NOT" operators

Let's say for whatever reason, the company we're recruiting for doesn't hire programmers that use Kotlin.

The profile returned above also has Kotlin, the programming language, listed as a skill:

We can refine our search accordingly, "software engineer" AND "los angeles" AND "javascript" AND "linux" NOT "kotlin":

You'll see the number of results returned dropped slightly. Our friend from earlier is also no longer included.

Filtering with location and putting everything together

We can take this one step further and assume everything above is true, but the company we're recruiting for has offices in both Los Angeles and New York City, so they could work in either and come into either respective office.

We can expand our original search by altering it slightly, "software engineer" AND ("los angeles" OR "new york city") AND "javascript" AND "linux" NOT "kotlin":

The number of results nearly doubles by including New York City as an option, and all profiles returned still fit our earlier criteria. Nice!

Searching on Professional Social Network vs. Google

It may sound silly, but the main con of this way of searching for prospects is the fact that you have to provide your own Professional Social Network account and be actively logged into it. You can't use Professional Social Network's search engine otherwise.

You'll also inevitably run into account limits and bans by doing this in bulk, and you [can't access more than 1,000 results](https://nubela.co/blog/how-to-bypass-Professional Social Network-search-limit/) at any given time on a free Professional Social Network account, even if it tells you more results were returned.

It's simply not a scalable method, and you can't automate it for the most part.

This is why Professional Social Network operators, while they help improve search results, aren't quite as powerful as utilizing Google to perform a true Professional Social Network X-ray search without the same limitations as on Professional Social Network.

That said, let me show you how to do this in a bit of a better way by using Google:

Google's search operators

On top of AND, OR, and NOT, Google also supports the following additional search operators:

| Operator | Function | Example |
| --- | --- | --- |
| " " | Searches for an exact phrase | "climate change" |
| - | As mentioned above, Google uses "-" for NOT | jaguar -car |
| site: | Limits the search to a specific website or domain | site:nytimes.com |
| related: | Finds websites similar to a specified site | related:time.com |
| filetype: | Searches for a specific file type | filetype:pdf "renewable energy" |
| intitle: | Finds pages with a specific term in the title | intitle:conservation |
| inurl: | Searches for a specific term within the URL | inurl:nutrition |
| intext: | Searches for a specific term within the text of a page | intext:"global warming" |
| * | Acts as a wildcard | world * champion |
| AROUND(X) | Finds terms within a certain number of words of each other | solar AROUND(3) energy |
| cache: | Shows the most recent cached version of a web page | cache:google.com |

We can use these operators to our advantage.

How to do Professional Social Network X-ray searches

site:professionalsocialnetwork.com/in will be our starting point; it's what allows us to search on Google exclusively for Professional Social Network results.

(Note: Remember the part from earlier where I mentioned you can also do this for companies and jobs? The only difference is that companies have a URL structure with /company instead, for example: https://www.professionalsocialnetwork.com/company/microsoft/, and jobs have a URL structure of /jobs, such as: https://www.professionalsocialnetwork.com/jobs/search/?currentJobId=3825567794.)
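
If you end up composing these queries often, you can also build them programmatically. Here's a minimal Python sketch (the build_xray_query helper and its defaults are my own illustration, not an official tool) that assembles an X-ray query from a role, skills, exclusions, and a location:

def build_xray_query(role, any_skills=None, exclude=None, location=None,
                     section="in"):
    """Assemble a Google X-ray query for Professional Social Network pages.

    section is "in" for people, "company" for companies, "jobs" for jobs.
    """
    parts = [f"site:professionalsocialnetwork.com/{section}/",
             f'intitle:"{role}"']
    if location:
        parts.append(f'"{location}"')
    if any_skills:
        parts.append("(" + " OR ".join(any_skills) + ")")
    if exclude:
        parts.extend(f"-{term}" for term in exclude)
    return " ".join(parts)


print(build_xray_query("full-stack developer",
                       any_skills=["django", "react"],
                       exclude=["php"],
                       location="new york city"))
# site:professionalsocialnetwork.com/in/ intitle:"full-stack developer" "new york city" (django OR react) -php

You'll see queries exactly like that output in the examples below.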

Professional Social Network X-ray searches by job role

So, let's say we want to find a full-stack developer. Here's how we could do just that by using a Professional Social Network X-ray search, site:professionalsocialnetwork.com/in/ intitle:"full-stack developer":

Professional Social Network X-ray search results returned for "full-stack developer"

11,100 results returned, all Professional Social Network profiles with "full-stack developer" in the title:

An example profile returned for "full-stack developer"

Professional Social Network X-ray searches by job role and skill

Now, let's narrow that search down a bit and say we need them to be familiar with both Django and React, site:professionalsocialnetwork.com/in/ intitle:"full-stack developer" (django OR react):

Professional Social Network X-ray with job role and skill filtering applied

6,560 results returned, with profiles such as this:

Fits our search criteria

Next, let's say we want the same as above, but we want to exclude PHP programmers.

Here's how we could do that, site:professionalsocialnetwork.com/in/ intitle:"full-stack developer" intext:django OR intext:react -php:

Results returned with an excluding operator

Quite a bit fewer results, and none of the profiles returned list PHP as a skill:

All of those skills, and not one is PHP

I think you're probably just about getting the point, but I'll show you another trick:

Location-based filtering with Professional Social Network X-ray searches

One way of doing this is by simply adding the location into the search such as this, site:professionalsocialnetwork.com/in/ intitle:"full-stack developer" "new york city" (django OR react) -php:

Results returned with location filtering

This narrows the search results down further, matching profiles like this:

A New York City full-stack programmer without any skill in PHP

You can get even more specific by searching through Professional Social Network's country subdomains. For example, to find full-stack developers in Frankfurt, Germany, we can search site:de.professionalsocialnetwork.com/in/ intitle:"full-stack developer" "frankfurt" (django OR react) -php:

Results returned with ultra-specific location filtering

170 results, full of Professional Social Network profiles like this:

A full-stack developer in Frankfurt

Not bad! You can get even more sophisticated with operators for even more refined search results, too.

Now let's talk a bit more about extracting this data from Professional Social Network:

Extracting your Professional Social Network X-ray search results

Ideally, we want to automate and systemize as much as possible, especially if you're doing this at scale, say, for recruiting.

So, to help accomplish this, we can use a tool like Value Serp, which is a search engine result page API (ELI5: an API is like a fast food menu for data--it makes it easy to send/receive data between applications):

What Value Serp's homepage looks like

Its sole job is to scrape search result pages at scale and conveniently return the data. This will help us avoid any future issues like running into CAPTCHAs or just blatantly being blocked (you'll have to source proxies otherwise).

Value Serp starts at $2.50 per 1,000 Google results scraped, which isn't bad, but there are other similar services out there too. SERP scraping is one of those things that either works or doesn't, so an equivalent service that costs less should do just as good of a job.

Anyway, using Value Serp's API, we can extract the results of Google search queries at scale:

Value Serp's API
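
To give you an idea of what that looks like in code, here's a minimal Python sketch. Note that the endpoint URL, the num parameter, and the organic_results response field reflect my reading of Value Serp's public docs; treat them as assumptions and verify against their documentation:

import requests

# Assumed Value Serp search endpoint and parameters; verify against their docs.
params = {
    'api_key': 'Your_ValueSerp_API_Key_Here',
    'q': 'site:professionalsocialnetwork.com/in/ intitle:"full-stack developer" (django OR react) -php',
    'num': 100,  # results per page (assumed parameter)
}
response = requests.get('https://api.valueserp.com/search', params=params)
response.raise_for_status()

# 'organic_results' is the assumed field holding the regular search listings
results = response.json().get('organic_results', [])

# Collect the Professional Social Network profile URLs from the result links
profile_urls = [r.get('link', '') for r in results]
print(profile_urls)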

And we can conveniently export these results to a .CSV:

Value Serp's process of exporting to .CSV

Here's our earlier Germany example, site:de.professionalsocialnetwork.com/in/ intitle:"full-stack developer" "frankfurt" (django OR react) -php exported:

Our brand new .CSV

You could do this for thousands of profiles at once, all matching different Professional Social Network X-ray search queries or recruiting criteria.

This is where Professional Social Network X-ray searches truly shine: their programmatic, scalable nature compared to traditional Professional Social Network prospecting methods.

Continuing on:

How to enrich Professional Social Network profiles

You now have a way to search for qualified prospects that meet specified criteria. You know how to search for Professional Social Network profile URLs at scale and export the results without limitations.

But a Professional Social Network profile URL alone doesn't do us much good.

Sure, you could click that URL, manually send them a connection request, and try to message them. Ideally, though, I'd want their email and phone number for better touchpoints; all three channels if possible, for multiple touchpoints.

So, what's an easy way to do that?

Well, none other than Proxycurl, of course: a B2B data provider and API.

Let me explain more:

Proxycurl's Person Profile Endpoint

One of the key endpoints of Proxycurl is the Person Profile Endpoint, which allows you to extract quite a bit of data from [public Professional Social Network profiles](https://nubela.co/blog/the-ultimate-guide-to-Professional Social Network-public-profile-visibility/), such as education, employment, skills, and everything you would find on a Professional Social Network profile.

But it doesn't just extract Professional Social Network information. The B2B data provided by our API is also enriched with other data sources, which is our secret sauce.

Anyway, by using our Person Profile Endpoint, you can take your newly exported list of Professional Social Network profiles and enrich them, improving both the quantity and quality of data you have on any given prospect. And we can do it in a nearly automatic fashion that requires almost no effort or human intervention.

Using Proxycurl's API for enrichment

In the above Value Serp API example, I used their built-in API sandbox feature. For the following Proxycurl example, however, I'll be using a bit of Python, one of the easiest programming languages there is, to accomplish our Professional Social Network profile enrichment.

(For the non-technical/non-programmers: there are many different ways to request data from an API. I chose Python because I'm familiar with it. You can easily use a free Python IDE like PyCharm on any device.)

That said, using a bit of Python, here's how we could pull quite a bit of information from any given Professional Social Network profile:

import requests

api_key = 'Your_API_Key_Here'
headers = {'Authorization': 'Bearer ' + api_key}
api_endpoint = 'https://nubela.co/proxycurl/api/v2/Professional Social Network'
params = {
    'Professional Social Network_profile_url': 'https://www.professionalsocialnetwork.com/in/russellbrunson/',
    'extra': 'include',
    'github_profile_id': 'include',
    'facebook_profile_id': 'include',
    'twitter_profile_id': 'include',
    'personal_contact_number': 'include',
    'personal_email': 'include',
    'inferred_salary': 'include',
    'skills': 'include',
    'use_cache': 'if-present',
    'fallback_to_cache': 'on-error',
}
try:
    response = requests.get(api_endpoint, params=params, headers=headers)
    response.raise_for_status()  # Raise an exception for HTTP errors (e.g., 404, 500)
    
    # Print the response content and status code
    print("Response Content:")
    print(response.text)
    print("\nResponse Status Code:", response.status_code)
except requests.exceptions.RequestException as e:
    print("An error occurred during the HTTP request:", e)
except Exception as ex:
    print("An unexpected error occurred:", ex)

In this example, it would enrich the Professional Social Network_profile_url of https://www.professionalsocialnetwork.com/in/russellbrunson/ (a random example; he's a co-founder of the company ClickFunnels), immediately printing the results:

An enriched Professional Social Network profile

(Note: You can try this for yourself by creating an account here; we give you 15 free credits for testing a few things out upon account creation.)

The JSON result returned at the bottom looks like this:

"public_identifier": "russellbrunson", "profile_pic_url": "normal_url.com", "background_cover_image_url": null, "first_name": "Russell", "last_name": "Brunson", "full_name": "Russell Brunson", "follower_count": 81644, "occupation": "Owner at ClickFunnels", "headline": "New York Times Bestselling Author, Co-Founder of ClickFunnels", "summary": "Over the past 14 years, Russell has built a following of over 2 million entrepreneurs, sold over 450,000 copies of his books, popularized the concept of sales funnels, and co-founded ClickFunnels, a software company that helps 90,000 entrepreneurs quickly get their message out to the marketplace. \n\nRussell has been featured on major publications and websites such as Forbes, Entrepreneur Magazine, and The Huffington Post. He is also the host of the #1 rated business podcast, Marketing Secrets. In 2018, he was awarded \u2018Entrepreneur of the Year\u2019 in the Utah region by ey.com. Russell also regularly works with non-profits like Operation Underground Railroad and Village Impact.", "country": "US", "country_full_name": "United States of America", "city": null, "state": null, "experiences"...(continues on with more data)...

You can see all of the different data points that can be returned by the Person Profile Endpoint here.
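
And because it's plain JSON, pulling individual fields out of the response from the earlier script is trivial. For example (field names as shown in the response above):

# Continuing from the earlier script: parse the JSON body into a dict
profile = response.json()

print(profile.get('full_name'))            # "Russell Brunson"
print(profile.get('occupation'))           # "Owner at ClickFunnels"
print(profile.get('skills', []))           # list of skills, if returned
print(profile.get('personal_emails', []))  # personal emails, if available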

So, of course, you could feed any individual Professional Social Network profile into your new Python script to enrich it, but you could also automate the entire process, using a familiar file format such as a .CSV to store the data, including your Professional Social Network profile URLs.

How to easily enrich Professional Social Network profiles at scale

In fact, here's a script that does just that: it reads an original .CSV named input_Professional Social Network_profiles.csv full of Professional Social Network URLs, enriches them, and then writes the results to a new .CSV named enriched_data.csv:

import requests
import csv

# Your API key
API_KEY = 'Your_API_Key_Here'

# API endpoint for Person Profile
api_endpoint = 'https://nubela.co/proxycurl/api/v2/Professional Social Network'

# Headers for the API request
headers = {'Authorization': 'Bearer ' + API_KEY}

# Input and output CSV file names
input_file = 'input_Professional Social Network_profiles.csv'
output_file = 'enriched_data.csv'

# Function to extract experiences as a string
def extract_experiences(profile):
    experiences = profile.get('experiences', [])
    return '; '.join([f"{exp.get('title', '')} at {exp.get('company', '')}" for exp in experiences])

# Read Professional Social Network profile URLs from the input CSV file
with open(input_file, 'r') as csvfile:
    reader = csv.reader(csvfile)
    Professional Social Network_urls = [row[0] for row in reader]

# Open the output CSV file for writing
with open(output_file, 'w', newline='') as csvfile:
    fieldnames = [
        'Professional Social Network_url', 'full_name', 'profile_picture', 'current_occupation',
        'country', 'city', 'state', 'experiences', 'skills', 'personal_emails', 'inferred_salary'
    ]
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()

    # Loop through each Professional Social Network profile URL
    for Professional Social Network_url in Professional Social Network_urls:
        params = {
            'Professional Social Network_profile_url': Professional Social Network_url,
            'extra': 'include',
            'github_profile_id': 'include',
            'facebook_profile_id': 'include',
            'twitter_profile_id': 'include',
            'personal_contact_number': 'include',
            'personal_email': 'include',
            'inferred_salary': 'include',
            'skills': 'include',
            'use_cache': 'if-recent',
            'fallback_to_cache': 'on-error',
        }

        response = requests.get(api_endpoint, params=params, headers=headers)
        if response.status_code == 200:
            profile = response.json()

            # Extracting skills and experiences
            skills = ", ".join(profile.get('skills', []))
            experiences_str = extract_experiences(profile)

            writer.writerow({
                'Professional Social Network_url': Professional Social Network_url,
                'full_name': profile.get('full_name', ''),
                'profile_picture': profile.get('profile_pic_url', ''),
                'current_occupation': profile.get('occupation', ''),
                'country': profile.get('country_full_name', ''),
                'city': profile.get('city', ''),
                'state': profile.get('state', ''),
                'experiences': experiences_str,
                'skills': skills,
                'personal_emails': "; ".join(profile.get('personal_emails', [])),
                'inferred_salary': profile.get('inferred_salary', {}).get('min', '')  # Example of handling nested data
            })
        else:
            print(f"Failed to fetch data for {Professional Social Network_url}, Status Code: {response.status_code}")

print(f"Data exported to {output_file}")

Example of the data returned into the new .CSV

There's no limit to this; you can automate the enrichment of thousands and thousands of prospects/Professional Social Network profile URLs generated directly from a tool like Value Serp. With a little more effort, you could also integrate our API directly with Value Serp's API and skip the .CSV entirely.
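
As a rough sketch, that direct integration could look something like this (again assuming Value Serp's search endpoint and organic_results response field, as in the earlier example):

import requests

PROXYCURL_KEY = 'Your_API_Key_Here'
VALUESERP_KEY = 'Your_ValueSerp_API_Key_Here'

# 1. Pull indexed Professional Social Network profile URLs straight from Value Serp
serp_response = requests.get('https://api.valueserp.com/search', params={
    'api_key': VALUESERP_KEY,
    'q': 'site:professionalsocialnetwork.com/in/ intitle:"full-stack developer" (django OR react) -php',
})
serp_response.raise_for_status()
urls = [r.get('link', '') for r in serp_response.json().get('organic_results', [])]

# 2. Feed each URL into the Person Profile Endpoint, skipping the .CSV entirely
headers = {'Authorization': 'Bearer ' + PROXYCURL_KEY}
for url in urls:
    profile = requests.get(
        'https://nubela.co/proxycurl/api/v2/Professional Social Network',
        params={'Professional Social Network_profile_url': url},
        headers=headers,
    ).json()
    print(profile.get('full_name'), profile.get('occupation'))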

But there's also another way of doing this as well...

Proxycurl's Person Search Endpoint

Proxycurl's Person Search Endpoint is another useful endpoint, especially for those in the recruitment sector. It allows you to search for individuals by using job roles, education, the companies they work for, and more.

So you'll be able to skip Value Serp and search directly within our existing dataset of millions and millions of Professional Social Network profiles (powered by LinkDB). And it's pretty darn simple to do, much like the above examples.

Searching based on job role and company

Let's say we want to hire a project manager, and we want them to be from a big tech background.

We'll use Microsoft as the source company (though we could of course use others, such as AWS), and then use the following Python script to search our dataset for anyone matching "project manager" at "Microsoft" and return enriched results:

import requests

headers = {'Authorization': 'Bearer ' + 'Your_API_Key_Here'}
api_endpoint = 'https://nubela.co/proxycurl/api/search/person/'
params = {
    'country': 'US',
    'current_role_title': '(?i)Project Manager',
    'current_company_Professional Social Network_profile_url': 'https://www.professionalsocialnetwork.com/company/microsoft',
    'page_size': '10',
    'enrich_profiles': 'enrich',
}
try:
    response = requests.get(api_endpoint, params=params, headers=headers)
    response.raise_for_status()  # Raise an exception for HTTP errors (e.g., 404, 500)
    
    # Print the response content and status code
    print("Response Content:")
    print(response.text)
    print("\nResponse Status Code:", response.status_code)
except requests.exceptions.RequestException as e:
    print("An error occurred during the HTTP request:", e)
except Exception as ex:
    print("An unexpected error occurred:", ex)

Here's an example profile:

Microsoft project manager

It should be noted that when using the enrich_profiles parameter, you're limited to a page_size of 10 (which is why you see that limit above), but you can use our next_page function to still pull large enriched lists with ease.

Extracting all search results

Here's a slightly altered script that keeps going until there are no more results to return:

import requests

api_key = 'Your_API_Key_Here'
headers = {'Authorization': 'Bearer ' + api_key}
api_endpoint = 'https://nubela.co/proxycurl/api/search/person/'
params = {
    'country': 'US',
    'current_role_title': '(?i)Project Manager',
    'current_company_Professional Social Network_profile_url': 'https://www.professionalsocialnetwork.com/company/microsoft',
    'page_size': '10',
    'enrich_profiles': 'enrich',
}
def fetch_page(url, params):
    response = requests.get(url, params=params, headers=headers)
    if response.status_code == 200:
        return response.json()
    else:
        print("Error:", response.status_code, response.text)
        return None

# Fetch the initial page
response_data = fetch_page(api_endpoint, params)

# Loop through pages, including the final page (which has no 'next_page')
while response_data:
    # Process the results here
    for result in response_data.get('results', []):
        print(result)  # or any other processing

    # Fetch the next page, if any
    next_page_url = response_data.get('next_page')
    if not next_page_url:
        break
    if 'country' not in next_page_url:
        next_page_url += '&country=US'  # Append 'country' parameter
    response_data = fetch_page(next_page_url, {})

# End of pagination
print("Completed fetching all pages.")

Exporting search results to a .CSV

We could even take these results and export them to a .CSV like so:

import requests
import csv
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# API credentials and endpoint configuration
api_key = 'Your_API_Key_Here'
headers = {'Authorization': 'Bearer ' + api_key}
api_endpoint = 'https://nubela.co/proxycurl/api/search/person/'

# Parameters for the initial request
initial_params = {
    'country': 'US',
    'current_role_title': '(?i)Project Manager',
    'current_company_Professional Social Network_profile_url': 'https://www.professionalsocialnetwork.com/company/microsoft',
    'page_size': '10',
    'enrich_profiles': 'enrich',
}

# Define the output CSV file and headers
output_file = 'enriched_profiles.csv'
fieldnames = [
    'Professional Social Network_profile_url', 'full_name', 'profile_picture', 'background_cover_image_url',
    'current_occupation', 'location'
]

def fetch_profiles(url, headers, params=None):
    """Fetch profiles from the given URL with specified parameters."""
    response = requests.get(url, headers=headers, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print(f"Error: {response.status_code}, {response.text}")
        return None

def add_country_to_url(url, country):
    """Add the 'country' parameter to the given URL."""
    parsed_url = urlparse(url)
    query_params = parse_qs(parsed_url.query, keep_blank_values=True)
    query_params['country'] = [country]  # Ensure 'country' parameter is included
    new_query = urlencode(query_params, doseq=True)
    new_url = urlunparse(parsed_url._replace(query=new_query))
    return new_url

def process_profile_data(profile):
    """Process and return the profile data in a dict format."""
    return {
        'Professional Social Network_profile_url': profile.get('Professional Social Network_profile_url', ''),
        'full_name': profile.get('profile', {}).get('full_name', ''),
        'profile_picture': profile.get('profile', {}).get('profile_pic_url', ''),
        'background_cover_image_url': profile.get('profile', {}).get('background_cover_image_url', ''),
        'current_occupation': profile.get('profile', {}).get('occupation', ''),
        'location': f"{profile.get('profile', {}).get('city', '')}, {profile.get('profile', {}).get('country_full_name', '')}"
    }

# Open the output CSV file for writing
with open(output_file, mode='w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writeheader()

    # Fetch the initial page of profiles
    profiles_data = fetch_profiles(api_endpoint, headers, initial_params)

    if profiles_data and 'results' in profiles_data:
        # Iterate through each profile and write to CSV
        for profile in profiles_data['results']:
            profile_data = process_profile_data(profile)
            writer.writerow(profile_data)

        # Handle pagination
        while profiles_data.get('next_page'):  # guard against a null 'next_page' value
            next_page_url = add_country_to_url(profiles_data['next_page'], initial_params['country'])
            profiles_data = fetch_profiles(next_page_url, headers)

            if profiles_data and 'results' in profiles_data:
                for profile in profiles_data['results']:
                    profile_data = process_profile_data(profile)
                    writer.writerow(profile_data)
            else:
                break

print("Completed fetching and storing all profile data.")

It would export them to a file named enriched_profiles.csv that looks like this:

.CSV file exported, all Microsoft project managers

Exporting phone numbers and emails with our Search API

Notice, however, that there's no phone number or email...

We could use our Personal Contact Number Lookup Endpoint and Personal Email Lookup Endpoint to change that.

Here's our updated Python script, keeping the same "project manager" at "Microsoft" example:

import requests
import csv
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# API credentials and endpoint configuration
api_key = 'Your_API_Key_Here'  # Replace with your actual API key
headers = {'Authorization': 'Bearer ' + api_key}
search_api_endpoint = 'https://nubela.co/proxycurl/api/search/person/'
contact_phone_endpoint = 'https://nubela.co/proxycurl/api/contact-api/personal-contact'
contact_email_endpoint = 'https://nubela.co/proxycurl/api/contact-api/personal-email'

# Parameters for the initial search request
search_params = {
    'country': 'US',
    'current_role_title': '(?i)Project Manager',
    'current_company_Professional Social Network_profile_url': 'https://www.professionalsocialnetwork.com/company/microsoft',
    'page_size': '10',
    'enrich_profiles': 'enrich',
}

# Define the output CSV file and headers
output_file = 'enriched_profiles_with_contacts.csv'
fieldnames = [
    'Professional Social Network_profile_url', 'full_name', 'profile_picture', 'background_cover_image_url',
    'current_occupation', 'location', 'personal_phone_number', 'personal_email'
]


def fetch_contact_info(api_endpoint, Professional Social Network_profile_url):
    """Fetch personal contact info (phone number or email)."""
    params = {'Professional Social Network_profile_url': Professional Social Network_profile_url}
    response = requests.get(api_endpoint, headers=headers, params=params)
    if response.status_code == 200:
        data = response.json()
        if api_endpoint.endswith('personal-contact'):
            return ', '.join(data.get('numbers', []))  # Join all numbers into a single string
        elif api_endpoint.endswith('personal-email'):
            return ', '.join(data.get('emails', []))  # Join all emails into a single string
    else:
        print(f"Error fetching contact info from {api_endpoint}: {response.status_code}, {response.text}")
    return ''  # Return empty string if no data or in case of error


def add_country_to_url(url, country):
    """Add the 'country' parameter to the given URL."""
    parsed_url = urlparse(url)
    query_params = parse_qs(parsed_url.query, keep_blank_values=True)
    query_params['country'] = [country]
    new_query = urlencode(query_params, doseq=True)
    new_url = urlunparse(parsed_url._replace(query=new_query))
    return new_url


def process_and_write_profiles(writer):
    """Fetch profiles and write their details, including contact info, to the CSV file."""
    url = search_api_endpoint
    params = search_params.copy()

    while url:
        response = requests.get(url, headers=headers, params=params)
        if response.status_code != 200:
            print(f"Error fetching profiles: {response.status_code}, {response.text}")
            break

        data = response.json()
        for profile in data.get('results', []):
            # Fetch contact information
            email_info = fetch_contact_info(contact_email_endpoint, profile['Professional Social Network_profile_url'])
            phone_info = fetch_contact_info(contact_phone_endpoint, profile['Professional Social Network_profile_url'])

            # Write profile and contact information to CSV
            writer.writerow({
                'Professional Social Network_profile_url': profile['Professional Social Network_profile_url'],
                'full_name': profile['profile']['full_name'],
                'profile_picture': profile['profile']['profile_pic_url'],
                'background_cover_image_url': profile['profile']['background_cover_image_url'],
                'current_occupation': profile['profile']['occupation'],
                'location': f"{profile['profile']['city']}, {profile['profile']['country_full_name']}",
                'personal_phone_number': phone_info,
                'personal_email': email_info,
            })

        # Prepare for the next page
        next_page_url = data.get('next_page')
        if next_page_url:
            url = add_country_to_url(next_page_url, search_params['country'])
            params = {}  # Since all needed params are in the URL, clear params to avoid duplication
        else:
            break


# Open the output CSV file for writing and process profiles
with open(output_file, mode='w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writeheader()
    process_and_write_profiles(writer)

print("Completed fetching and storing all profile data with contact information.")

This will export an enriched list of prospects, including phone number and email (provided we have them available), to a file named enriched_profiles_with_contacts.csv:

Our new .CSV that includes contact information

I've gone ahead and blurred all contact information, but as you can see, it pulled additional contact information for many of our earlier prospects.

Now we're really talking...

There are 100 different ways you could take this, even going as far as integrating outreach channels directly with our API (sending emails and messages automatically) and beyond.

But, as you can see, our Search API is quite powerful, and it lets you avoid many of the cons of a traditional Professional Social Network search or Professional Social Network X-ray search.

By the way, if you're entirely uninterested in coding...

I don't know how you got this far. Nevertheless, we do have a no-code option that anyone could use.

It's a Google Sheets extension named Sapiengraph that'll allow you to conveniently pull all of this same data into a Google Sheets spreadsheet.

Same power, but all of the convenience in the world

It won't have the same level of functionality or customizability as our API; however, it's still very effective and worth checking out!

Additional tips

There are several other API endpoints we offer that could be of value to you, but I'm not going to mention every single one of them here for the sake of brevity.

The most immediately relevant one not already shown above is our Employee Listing Endpoint, which works similarly to our Person Search Endpoint but instead searches a given company for a matching job role.
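
Here's a minimal sketch of calling it; check the endpoint path and the url/role_search parameters against our documentation, as I'm treating them as assumptions here:

import requests

headers = {'Authorization': 'Bearer ' + 'Your_API_Key_Here'}

# Assumed endpoint path and parameters; see our documentation for the exact contract
api_endpoint = 'https://nubela.co/proxycurl/api/Professional Social Network/company/employees/'
params = {
    'url': 'https://www.professionalsocialnetwork.com/company/microsoft',
    'role_search': '(?i)project manager',  # regex filter, like the Person Search example
}
response = requests.get(api_endpoint, params=params, headers=headers)
print(response.json())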

Outside of that, for a full understanding of what's possible, I would read over our documentation here.

You now have three options

  1. You can take your newly gained knowledge and do nothing with it.

  2. You can take your newly gained knowledge about Professional Social Network X-ray searches and implement it.

  3. You can take your newly gained knowledge about Professional Social Network X-ray searches and our B2B enrichment APIs and fully utilize it to your advantage.

The choice is yours to make, but, if you're thinking what I'm thinking in my admittedly very biased position...

Create your Proxycurl account today

It's free to create a Proxycurl account, and you start out with a few trial credits to test things out. What do you have to lose?

That said, you can click right here to create your account for free now.

If you're interested in learning more about our credit usage system and pricing policy first, you can do that here.

Conclusion

Whether you're looking to fill a job role, find a sales lead, or beyond, Professional Social Network X-ray searches offer a scalable and programmatically friendly way to access the vast amounts of B2B data available on Professional Social Network. It just requires a bit of technical know-how.

But if you want the most convenient route, you can skip Professional Social Network X-ray searches altogether and use a B2B data provider and API like ours to handle all of the headaches for you (like scraping Professional Social Network profiles and Google SERPs). You can simply pull rich B2B data instead.

Thanks for reading, and here's to both more and better-quality data!

P.S. Have any questions about Proxycurl? Feel free to reach out to us at "[email protected]" and we'll be glad to help!

Colton Randolph | Technical Writer