How to build a professional networking engineer engine | By Victor Yakubu | February 2025
9 mins read

Let’s create an AI-powered LinkedIn recommendation engine that finds the right connections, using the Bright Data LinkedIn dataset and Ollama.

Stackademic

I am constantly looking for valuable connections, especially on LinkedIn, and I have realized how important building a solid professional network is for career growth. However, finding the right professionals to follow can be difficult and time-consuming.

A professional networking recommendation application simplifies this by analyzing large datasets to suggest relevant connections aligned with your career goals.

In this article, you will build a professional networking recommendation application that automates this process. It analyzes a LinkedIn profile dataset and suggests relevant professionals based on your career goals.

The tutorial covers:

  • Retrieving records from Bright Data’s 521.72-million-record LinkedIn dataset.
  • Using Flask to serve the recommendations via an API.
  • Using Ollama to generate AI-powered suggestions.
  • Building a simple user interface with Streamlit.

Prerequisites:

  • Basic knowledge of Python and APIs.
  • A Bright Data account to access the LinkedIn datasets.
  • Python 3.8+ installed on your system.

To build the recommendation engine, you need a LinkedIn profile dataset. Bright Data provides fresh, structured datasets from popular websites, tailored to your business needs, that can be downloaded in JSON or CSV format without building scrapers or bypassing blocks.

Follow these steps to get the data:

1. Access the Bright Data dashboard

  1. Log in to your Bright Data dashboard.
  2. If you don’t have an account, sign up and verify your email.

2. Access the LinkedIn dataset

  1. In the sidebar menu, click on “Web Datasets”.
  2. Then click on “Dataset Marketplace” to explore the available datasets.
  3. Search for “LinkedIn” in the search bar to see all available LinkedIn datasets.
  4. Click on the “LinkedIn people profiles” dataset.
  5. Filter the dataset down to what you need (for example, only profiles of people in the United States, or professionals in a particular field), or contact Bright Data to obtain a personalized dataset of your choice.

NB: You can also download sample data to experiment with before purchasing fresh, up-to-date datasets.

3. Buy and download the dataset

  1. Click on “Purchase” to acquire the dataset.
  2. You can choose between JSON and CSV formats (this article uses CSV).
  3. Once purchased, the dataset will be downloaded to your local machine. Rename it linkedin_dataset.csv for easy access.

4. Verify the dataset

  • Open the CSV file in a text editor or spreadsheet tool.
  • Make sure it contains structured data with appropriate column names.
  • Move the file into your project’s data folder (for example, data/linkedin_dataset.csv). This folder will be created in the next step.

Once the dataset is ready, you can load it into your Python application.
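As a quick sanity check before wiring it into the app, you can load the CSV with pandas and inspect its columns. The sketch below uses a tiny illustrative CSV with assumed column names, not the real Bright Data schema; in the real project you would read data/linkedin_dataset.csv directly.

```python
import pandas as pd

# Write a tiny illustrative CSV; in the real project you would read
# data/linkedin_dataset.csv downloaded from Bright Data instead.
sample = (
    "name,position,current_company,url\n"
    "Jane Doe,Marketing Director,Acme,https://linkedin.com/in/janedoe\n"
    "John Roe,Data Engineer,Initech,https://linkedin.com/in/johnroe\n"
)
with open("linkedin_sample.csv", "w") as f:
    f.write(sample)

df = pd.read_csv("linkedin_sample.csv")
print(df.columns.tolist())  # confirm the expected column names are present
print(len(df), "profiles loaded")
```

Checking the header up front saves you from confusing KeyErrors later when the code accesses specific columns.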

📂 Project configuration

Step 1: Create the project structure

Organize project files using the following structure:

professional-networking-recommendation-engine/
│── data/
│   ├── linkedin_dataset.csv   # Bright Data's LinkedIn dataset
│── backend/
│   ├── main.py                # API to process user input and generate recommendations
│   ├── requirements.txt       # Python dependencies
│── frontend/
│   ├── app.py                 # Streamlit-based UI for user input and displaying results

Step 2: Configure a virtual environment

Access the project directory and create a virtual environment:

cd professional-networking-recommendation-engine
python -m venv venv

Activate the environment:

Windows:

venv\Scripts\activate

macOS/Linux:

source venv/bin/activate

Step 3: Install the dependencies

Go to the backend/ folder and create a requirements.txt file with the following dependencies:

ollama
flask
pandas
streamlit
requests

Save the file, then install them:

pip install -r backend/requirements.txt

Then, in the project root, create a data folder and place the LinkedIn dataset you downloaded from Bright Data inside it.
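With the folders in place, a small script can confirm the layout matches the project tree from Step 1 before you run anything. The paths below are the ones assumed throughout this tutorial; adjust them if your layout differs.

```python
from pathlib import Path

# Expected files from the project structure in Step 1
expected = [
    Path("data/linkedin_dataset.csv"),
    Path("backend/main.py"),
    Path("backend/requirements.txt"),
    Path("frontend/app.py"),
]

# Report which expected files are present and which are missing
for path in expected:
    status = "ok" if path.exists() else "missing"
    print(f"{path}: {status}")
```

Run it from the project root; every line should read "ok" before you continue.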

βš™οΈrunding the olllama phi3 model locally

To generate AI-powered recommendations, you will use the Ollama Phi3 model. Ollama provides an easy way to run AI models like Llama 3.3, DeepSeek-R1, Phi-4, Mistral, and Gemma 2 locally, for free. This section will guide you through installing and running Phi3.

Step 1: Install Ollama

Ollama provides a simple CLI tool for running large language models (LLMs) locally. Install it by following the instructions for your operating system:

Windows (PowerShell)

iwr -useb  | iex

Linux (curl)

curl -fsSL  | sh

MacOS (Homebrew)

brew install ollama

Verify the installation:

ollama --version

Step 2: Download the Phi3 model

Now pull the Phi3 model:

ollama pull phi3

This will download the model and make it available for local execution.

Step 3: Run the Ollama model

You can now test the model by running:

ollama run phi3

This starts an interactive chat session where you can enter text prompts and receive AI-generated responses.

Next, implement the backend service code.

πŸ› οΈ Configuration of the backend with the balloon

The backend serves as the core of the recommendation engine. It processes the user’s input, retrieves the relevant LinkedIn profiles, and returns AI-generated recommendations. This section walks through setting up a Flask API.

Step 1: Create the Flask API

In the backend/ folder, create main.py. This script will:

  • Load the LinkedIn dataset
  • Use Ollama to generate recommendations
  • Return results via an API

1. Import the dependencies

Open main.py and add:

import pandas as pd
import json
import ollama
from flask import Flask, request, jsonify

2. Load the LinkedIn data set

Make sure the dataset is in the data/ directory, then load it into a pandas DataFrame:

df = pd.read_csv("../data/linkedin_dataset.csv")
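Note that main.py as written never actually passes rows from df to the model; the prompt only mentions "the dataset" in words. If you want to ground the model in real profiles, one possible approach (column names assumed for illustration) is to pre-filter the DataFrame by the user's goal and embed the matching rows in the prompt:

```python
import pandas as pd

def profiles_for_goal(df, user_goal, limit=20):
    """Return up to `limit` rows whose position mentions the goal keyword."""
    mask = df["position"].str.contains(user_goal, case=False, na=False)
    return df.loc[mask].head(limit)

# Tiny illustrative frame; real column names depend on the Bright Data CSV.
demo = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "position": ["Marketing Director", "Data Engineer"],
})
print(profiles_for_goal(demo, "marketing")["name"].tolist())  # -> ['Jane Doe']
```

You could then serialize the filtered rows (for example, with to_dict(orient="records")) and append them to the prompt string so Phi3 recommends people who actually exist in your dataset.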

3. Define the recommendation function

Define a helper that builds a structured prompt from the user’s goal, and a function that queries the Phi3 model through Ollama:

def generate_prompt(user_goal):
    """Generate a structured prompt for the phi3 model."""
    prompt = f"""
    You are an expert career coach. Given the user's goal: "{user_goal}",
    suggest 10 professionals from the dataset who would be great to follow on LinkedIn.
    Provide recommendations based on their position, experience, and influence.
    Return results in JSON format with name, position, current company, and LinkedIn URL.
    """
    return prompt

def get_recommendations(user_goal):
    """Query phi3 with a structured prompt."""
    prompt = generate_prompt(user_goal)

    response = ollama.chat(
        model="phi3",
        messages=[{"role": "user", "content": prompt}]
    )

    # Print raw response for debugging
    print("Raw response from Ollama:", response)

    try:
        content = response["message"]["content"]  # Extract text response

        # Remove Markdown code block markers if present
        if content.startswith("```json"):
            content = content[7:]  # Remove the starting "```json"
        if content.endswith("```"):
            content = content[:-3]  # Remove the ending "```"

        recommendations = json.loads(content.strip())  # Convert text to JSON
        return recommendations
    except (json.JSONDecodeError, KeyError):
        return {"error": "Failed to parse model response. Check the output format."}
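The fence-stripping logic above can be exercised in isolation with a canned response, no model required. This sketch mirrors the same slicing, with an extra strip() applied first so a leading newline does not defeat the prefix check:

```python
import json

def strip_code_fences(content):
    """Remove Markdown code fences that LLMs often wrap around JSON output."""
    content = content.strip()
    if content.startswith("```json"):
        content = content[7:]   # drop the opening fence marker
    if content.endswith("```"):
        content = content[:-3]  # drop the closing fence marker
    return content.strip()

# Canned text mimicking what phi3 might return
canned = '```json\n[{"name": "Jane Doe", "position": "CTO"}]\n```'
parsed = json.loads(strip_code_fences(canned))
print(parsed[0]["name"])  # -> Jane Doe
```

Testing this path with canned strings is worthwhile because local models are inconsistent about wrapping JSON in fences, and a parse failure here is the most common way the endpoint breaks.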

Now define the Flask application and the /recommend endpoint:

# Flask API
app = Flask(__name__)

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.json
    user_goal = data.get("goal")
    if not user_goal:
        return jsonify({"error": "Goal is required."}), 400

    recommendations = get_recommendations(user_goal)
    return jsonify(recommendations)

if __name__ == "__main__":
    app.run(debug=True)
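To see how the endpoint's validation behaves without starting Ollama, you can exercise a stubbed copy of it with Flask's built-in test client. The get_recommendations below is a stand-in for the real Ollama-backed function, so only the routing and validation logic is under test:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def get_recommendations(user_goal):
    # Stub standing in for the Ollama-backed function in main.py
    return [{"name": "Jane Doe", "position": "CTO"}]

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.json
    user_goal = data.get("goal")
    if not user_goal:
        return jsonify({"error": "Goal is required."}), 400
    return jsonify(get_recommendations(user_goal))

client = app.test_client()
print(client.post("/recommend", json={}).status_code)                     # missing goal -> 400
print(client.post("/recommend", json={"goal": "Marketing"}).status_code)  # valid goal -> 200
```

This is also a convenient harness for iterating on the endpoint without keeping the model loaded in memory.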

Step 2: Run the API

Start the server by running:

cd backend
python main.py

Use Postman or curl to send a test request:

curl -X POST "http://127.0.0.1:5000/recommend" -H "Content-Type: application/json" -d '{"goal": "Marketing Expert"}'

🎨 Building the frontend with Streamlit

The frontend provides an interactive user interface where users can enter their career goals and receive recommendations. Streamlit makes developing the UI easy with minimal code.

Step 1: Create the Streamlit UI

Go to the frontend/ folder and create a file named app.py. This script will:

  • Accept user input for career goals
  • Send the goal to the backend API and fetch recommendations
  • Display the recommended professionals

Open app.py and add the following code:

import streamlit as st
import requests

def get_recommendations(user_goal):
    """Send a request to the backend API and fetch recommendations."""
    url = "http://127.0.0.1:5000/recommend"
    response = requests.post(url, json={"goal": user_goal})

    if response.status_code == 200:
        return response.json()
    else:
        return {"error": "Failed to fetch recommendations."}

def main():
    st.title("🔍 LinkedIn Professional Recommendation Engine")
    st.write("Enter your career goal below, and we'll suggest professionals you should follow on LinkedIn.")

    user_goal = st.text_input("🎯 Your Career Goal:")

    if st.button("Get Recommendations 🚀"):
        if user_goal:
            response = get_recommendations(user_goal)

            if "error" in response:
                st.error(response["error"])
            else:
                st.subheader("✅ Recommended Professionals:")

                for person in response:
                    with st.container():
                        st.markdown(f"### {person['name']}")
                        st.markdown(f"**Position:** {person['position']}")
                        st.markdown(f"**Company:** {person['current_company']}")
                        st.markdown(f"🔗 [LinkedIn Profile]({person['linkedin_url']})", unsafe_allow_html=True)
                        st.markdown("---")  # Adds a separator line between profiles
        else:
            st.warning("⚠️ Please enter a career goal.")

if __name__ == "__main__":
    main()

Step 2: Run the Streamlit application

NB: Make sure the Ollama model and the backend service are running first.

To start the user interface, run:

cd frontend
streamlit run app.py

This launches a local Streamlit server. Open the displayed URL in a browser to interact with the recommendation engine. You should now get a list of recommended professionals to connect with based on your career goals.

You have built an AI-powered professional networking recommendation engine that helps users find relevant connections based on their career goals. The dataset for this project was obtained from Bright Data, which provides structured, up-to-date data from various websites that you can use to make informed business decisions, improve customer relations, make strategic career moves, and more. Flask serves the recommendations, and Streamlit streamlines the frontend user interaction.

This system runs completely locally, so you can improve the project at no cost. You can extend it by:

  • Getting more up-to-date or personalized datasets from Bright Data.
  • Refining the AI model with custom fine-tuning.


