Scrape Job Postings from Upwork

Easily extract job postings from Upwork with our no-code scraper template. Ideal for freelancers, recruitment agencies, and market researchers, this tool allows you to gather detailed job data, monitor industry trends, and identify new opportunities—all without any coding skills.

Effortlessly Extract Data from Your First Website

Simply enter the URL of the website you want to scrape, then configure the options below.

  • Website URL: the full web address of the target website, including the protocol (e.g., https://www.example.com).
  • Load more button: whether the website has a "load more" button that must be clicked to load additional results.
  • Use Google cache: when enabled, the scraper attempts to fetch Google's cached version of the website, circumventing anti-scraping measures.
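Under the hood, the Google cache option relies on a well-known trick: request Google's cached copy of a page instead of the page itself, so the target site never sees your request. A minimal Python sketch of that general technique (the target URL is a placeholder, and Google's cache does not hold every page):

```python
import requests

# Illustrative sketch: fetch Google's cached copy of a page instead of the
# page itself. The cache may not contain every URL, so treat this as a
# fallback rather than a guarantee.
target = "https://www.example.com"  # placeholder target URL
cache_url = f"https://webcache.googleusercontent.com/search?q=cache:{target}"

response = requests.get(
    cache_url,
    headers={"User-Agent": "Mozilla/5.0"},  # a browser-like UA avoids some blocks
    timeout=30,
)
if response.ok:
    html = response.text  # cached HTML, ready for parsing
else:
    print(f"No cached copy available (HTTP {response.status_code})")
```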

Sample Data for "Upwork"

Each record contains the fields title, description, type, hourly_price_range_start, hourly_price_range_end, budget, currency, and skills.

title: Data Scientist for Machine Learning Optimization - TikTok AI (Python/NLP)
description: Overview: We are seeking a highly skilled developer to advance the development, optimization, and strategic guidance of our AI tool. This project is partially completed, and we need further expertise to bring it to fruition. Project Scope: The project requires between 60 to 200 hours of dedicated work. Key Responsibilities: 1. Set Up and NVIDIA GPU (Jetson Orin): Ensure the hardware setup is optimized for advanced AI processing, focusing on maximizing the performance of machine learning models. 2. Optimize AI Model Performance: Refine Python code to enhance the efficiency and effectiveness of machine learning and NLP, OCR Text analysis models. 3. Strategic AI Development Guidance: Provide expert advice on the AI development process, ensuring the tool evolves appropriately and aligns with strategic goals. 4. Development of Complex AI Algorithms: Apply advanced mathematical and statistical concepts to develop and refine algorithms that improve the AI's learning and decision-making processes. Skills and Qualifications: 1. Expertise in AI and Machine Learning: Demonstrable proficiency with complex AI model development and optimization. 2. Advanced Python Programming: Extensive experience in Python, with a strong emphasis on writing clean, efficient, and robust code. 3. NLP Mastery: Expert knowledge in Natural Language Processing techniques and applications. 4. Data Manipulation and Analysis: Proven ability to handle large datasets and perform sophisticated data analysis. 5. API Integration Experience: Skilled in integrating various APIs and ensuring seamless data interchange. 6. Linux and Hardware Programming: Experience with Linux environments and programming hardware like NVIDIA Jetson Nano or Raspberry Pi; desirable but not mandatory. 7. Front-End Development Skills: Useful but optional; ability to contribute to front-end development can be a bonus. Additional Information: While I cannot disclose more details of the project publicly, I am eager to discuss it further. Please message me to arrange a conversation. Additional Notes: - Prior experience with similar AI development projects will be viewed favorably.
type: Hourly
hourly_price_range_start: 50
hourly_price_range_end: 90.00
budget:
currency: USD
skills: Data Scraping, Web Crawling, AI Development, AI Model Training, Data Science

title: Data Mining
description: Hi Freelancers! I need Data Mining and Lead Generation expert to Find the data on Restaurants in the USA I will need Company Name Website Executive person Full Name Generic Email Company Facebook Link Company Address Phone
type: Hourly
hourly_price_range_start: 7
hourly_price_range_end: 20.00
budget:
currency: USD
skills: Data Scraping, Data Mining, Microsoft Excel, Data Entry

title: Python/Make/APIFY Scraper for Repetitive Use
description: Looking to erect a scraping infrastructure to repeatedly scrape some RE sites. What I am looking for: - A 3 level scraper to: 1) grab all roads in a city, 2) grab all addresses per road, 3) grab specific details of each address (needs to be run once per quarter) - A 2 level scraper specific to other RE sites that 1) will grab the addresses they have for provided city and 2) grab specific details of each address (needs to be run on a flexible schedule per site) Open to doing this in make.com or other platform or direct in python as long as I can call it in an API call. Will need to solve for 403 errors to repeat scrape calls. Will look at candidates primarily who have done similar RE scrapes and who can present a whole plan to API the capability as a whole - dumping into Supabase or other web DB - and how best to solve anti-scrape measures given some of the target frequencies ideally aiming for.
type: Hourly
hourly_price_range_start: 30
hourly_price_range_end: 60.00
budget:
currency: USD
skills: Data Scraping, Make.com, Python, Data Extraction, Scrapy

title: 1 Source ADAS
description: We need to have a software built that will decode VIN numbers on vehicles and match damage line items from a collision estimate to calibration requirements for that vehicle.
type: Hourly
hourly_price_range_start:
hourly_price_range_end:
budget:
currency: USD
skills: Web API, Web Scraping Software, Azure DevOps, C#, Angular, HTML, SQL

title: Scrape latest news items from any site
description: I need to develop some python code to scrape the latest news items, videos, posts from a variety of websites. I can't write explicit code for each website, so I'm trying to find a way to write some general code that can work across a broad set of websites. Some things I'm trying to work thru: 1- locating news articles across various websites and then getting the title, link, image, etc associate with the article. 2- if incorporating LLM, keeping cost down, so doing a one-time analysis of a website and storing the result for reuse
type: Hourly
hourly_price_range_start: 40
hourly_price_range_end: 60.00
budget:
currency: USD
skills: Data Scraping, Selenium, Python

title: Google Maps Scraping Specialist
description: Google Maps Scraping Specialist Overview We are seeking a skilled data specialist to scrape Google Maps and generate comprehensive lists of companies based on specific criteria. The ideal candidate will have experience in web scraping, data extraction, and data processing, and will be able to deliver accurate and organized data in a timely manner. Responsibilities Data Scraping: Use web scraping tools and techniques to extract company information from Google Maps. Criteria-Based Extraction: Identify and collect data based on predefined criteria such as industry, location, company size, and other relevant parameters. Data Organization: Organize and clean the extracted data to ensure it is accurate and usable. Data Delivery: Provide the extracted data in a structured format (e.g., Excel, CSV). Automation: Develop scripts or automated solutions to efficiently scrape and process data from Google Maps. Compliance: Ensure that all scraping activities comply with Google's terms of service and relevant legal regulations. Requirements Proven experience in web scraping and data extraction. Proficiency in using web scraping tools and frameworks (e.g., BeautifulSoup, Scrapy, Selenium). Strong programming skills in Python or another relevant programming language. Experience with data cleaning and organization. Ability to handle large datasets and ensure data accuracy. Familiarity with Google Maps and its data structure. Excellent problem-solving skills and attention to detail. Preferred Skills Experience with APIs and other data extraction methods. Knowledge of data analysis and visualization tools. Previous experience in generating business lists or similar projects.
type: Hourly
hourly_price_range_start: 50
hourly_price_range_end: 75.00
budget:
currency: USD
skills: Data Scraping, Data Mining, Google Maps API, Python, JavaScript, Scrapy, Data Entry

title: Identify all Colorado registered cars out of list of 12.000 VINs which I will provide
description: I have an Excel spreadsheet of 12.000 VINs for Chevy Suburbans in the USA built between 2000 and 2006. See attached PDF but I can provide the Spreadsheet once required. I'm looking for someone to identify which states these VINs are registered in or if they are no longer driving/registered anywhere Then I'm looking to have the owners identified by name, address, phone and email. I have located sites where this kind of data is available however not for the years 2000-2006 so this data is out there somewhere. e.g. https://carsowners.net/chevrolet I'm looking forward to your proposal on how you'd do it, how long it will take, how much you will charge. This is a quick turn job which I'd like completed by 31 May 2024 if possible.
type: Hourly
hourly_price_range_start:
hourly_price_range_end:
budget:
currency: USD
skills: Data Scraping, Online Research, Data Mining, Microsoft Excel, Market Research

title: I Would Like To See If We Can Work Together
description: Email List Builder For Southeast Wyoming, Western Nebraska and Northern Colorado USA. Please email for more details.
type: Hourly
hourly_price_range_start:
hourly_price_range_end:
budget:
currency: USD
skills: Web Scraping, B2B Lead Generation, Data Mining, Lead Generation, Data Entry

title: Social Media and Platform Scraping for Lead Generation in Kentucky
description: Project Overview: Doorstep Solutions seeks to identify small- to mid-size businesses in Kentucky that are in need of 4PL (Fourth Party Logistics) services. This project involves scraping social media platforms and other relevant websites to compile a list of potential leads within the Kentucky market. Objectives: 1. Scrape social media platforms (e.g., LinkedIn, Facebook, Twitter) for businesses in Kentucky looking for 4PL services. 2. Identify small- to mid-size businesses in Kentucky in need of supply chain and logistics solutions. 3. Compile a comprehensive list of leads, including contact information and relevant details. Scope of Work: 1. Data Collection: - Identify and target small- to mid-size businesses in Kentucky on social media platforms. - Scrape relevant data from LinkedIn, Facebook, Twitter, and other platforms where businesses might post their needs for logistics services. - Extract key information, including business names, contact details, type of logistics services required, and any additional relevant data. 2. Data Analysis and Verification: - Analyze the collected data to ensure accuracy and relevance. - Verify the authenticity of the leads to ensure they meet the criteria for potential clients of Doorstep Solutions. 3. Lead Compilation: - Organize the verified leads into a structured database or spreadsheet. - Include detailed information for each lead: business name, contact person, email address, phone number, location, and specific logistics needs. 4. Reporting: - Provide a detailed report summarizing the findings and the methodology used. - Include insights and recommendations based on the data collected. Deliverables: 1. A comprehensive list of potential leads in Kentucky (in a spreadsheet format) with the following information: - Business Name - Contact Person - Contact Details (email, phone number) - Location - Specific 4PL Services Needed 2. A detailed report on the data collection and verification process. 3. Insights and recommendations based on the findings. Timeline: - Project Kickoff: [Start Date] - Data Collection Phase: [2 Weeks] - Data Analysis and Verification: [1 Week] - Lead Compilation: [1 Week] - Final Report Submission: [End Date] Requirements: - Proven experience in web scraping and data mining. - Familiarity with social media platforms and their APIs. - Attention to detail and accuracy in data collection and analysis. - Ability to deliver within the specified timeline.
type: Hourly
hourly_price_range_start:
hourly_price_range_end:
budget:
currency: USD
skills: Data Scraping, Lead Generation, List Building, Data Mining, Social Media Marketing

title: Generates a spreadsheet and charts of academic jobs ads in a specific field over a date range
description: Experience and skills required: -Ideally someone with experience in academia, specifically tender, track job searches -Ability to understand and synthesize job postings and parse them into a data frame or Excel sheet -Compare against existing academic job benchmark reports AI generated cover letters will not be accepted for this job Description: I need under a very short turnaround the data regarding how many academic jobs related to health and medical humanities have been posted from the past 10 to 15 years I ideally this should be relative to the number of jobs available for PhD holders in specific fields: English literature, history, any humanities field. But the primary target is literature One way to start this is that one could go to the MLA modern language Association and see they have a nice aggregate data report and they do breakdown by fields, but they don't have health or medical humanities specifically https://www.mla.org/content/download/160729/file/Data-Job-List-2019-20.xlsx The problem is finding the sources for old postings. And then also parsing the data according to whether the job is specifically looking for someone in this field " assistant Prof. and Health humanities" Versus an assistant professor in literature and then within the job description, it says we are especially interested in people working in health and medical humanities or literature and medicine, etc. For -five day turnaround is ideal
type: Hourly
hourly_price_range_start: 45
hourly_price_range_end: 75.00
budget:
currency: USD
skills: Web Scraping, Microsoft Excel, Python, Data Mining, Data Entry
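Once exported, each posting above becomes one structured record. As a small illustration, here is the "Data Mining" row rendered as a Python dict using the same field names (description omitted for brevity):

```python
# The "Data Mining" sample row as a structured record; field names match
# the schema above, and the long description field is omitted for brevity.
job = {
    "title": "Data Mining",
    "type": "Hourly",
    "hourly_price_range_start": 7,
    "hourly_price_range_end": 20.00,
    "budget": None,  # hourly posting, so no fixed budget
    "currency": "USD",
    "skills": ["Data Scraping", "Data Mining", "Microsoft Excel", "Data Entry"],
}

print(f"{job['title']}: ${job['hourly_price_range_start']}-"
      f"${job['hourly_price_range_end']}/hr ({', '.join(job['skills'])})")
```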

Scraping Made Easy and Fast

Simply provide the URL and the fields you want to extract; we cover the rest.

Zero Coding Experience Required

Dive right in, no coding experience necessary. Just supply the website URL and specify the data you need. Leave the HTML, CSS, and JavaScript to us.

Unbreakable Resilience

Worried about changes to a site's HTML? Fear not. Our robust scraper adapts and keeps working, no matter how the markup shifts.

Universally Compatible

Say goodbye to constant scraper adjustments. Our technology is equipped to work seamlessly with any new website.

Hassle-Free Data Export

Simple and flexible data export options at your fingertips. Choose from CSV, JSON, or Excel formats for your convenience.
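Whichever format you choose, the export drops straight into standard analysis tooling. For example, loading an export with pandas (the file names here are placeholders for your own export):

```python
import pandas as pd

# Load a scraper export; file names are placeholders for your own files.
jobs = pd.read_csv("upwork_jobs.csv")        # CSV export
# jobs = pd.read_json("upwork_jobs.json")    # JSON export
# jobs = pd.read_excel("upwork_jobs.xlsx")   # Excel export (requires openpyxl)

# e.g. count the most frequently requested skills across all postings
print(jobs["skills"].str.split(",").explode().value_counts().head(10))
```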

FAQ

What is No-Code Scraper?

No-Code Scraper is a no-code scraping tool that lets you extract data from any website effortlessly, without writing code or managing complex scripts. By leveraging large language models, it simplifies the data extraction process and makes it accessible to everyone.
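For the technically curious: a common way to implement LLM-based extraction is to hand the model the page text together with the field names you want and ask for structured output. The sketch below shows that general pattern using the OpenAI client; it illustrates the approach only, not No-Code Scraper's actual implementation, and the model choice and prompt are assumptions:

```python
import json
from openai import OpenAI  # pip install openai; other LLM clients work similarly

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_fields(page_text: str, fields: list[str]) -> dict:
    """Ask an LLM to pull the named fields out of raw page text as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Extract these fields from the page text as a JSON "
                       f"object: {', '.join(fields)}\n\nPage text:\n{page_text}",
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# e.g. extract_fields(html_text, ["title", "type", "currency", "skills"])
```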

Pricing

Pricing plans for teams of all sizes

Choose an affordable plan packed with the features you need to extract, schedule, and export your data.

Join 99+ users who scrape websites faster.

Starter

Get started quickly and easily, no matter your skill level.

$16.99/month

400 credits* for USD 0.04 per credit

including USD 0.99 base fee

  • JavaScript Rendering
  • Basic Data Export (JSON, CSV, Excel)

* All unused credits expire at the end of the billing period.
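All plans price credits the same way: monthly price = base fee + 400 credits × USD 0.04. A quick sanity check of that arithmetic (base fees taken from the plans on this page):

```python
# Plan price = base fee + credits x per-credit rate (figures from this page).
def monthly_price(base_fee: float, credits: int = 400, per_credit: float = 0.04) -> float:
    return round(base_fee + credits * per_credit, 2)

print(monthly_price(0.99))  # Starter -> 16.99
print(monthly_price(3.99))  # Pro     -> 19.99
print(monthly_price(9.99))  # Expert  -> 25.99
```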

Pro

Most Popular

Elevate your data extraction with enhanced tools and integrations.

$19.99/month

400 credits* for USD 0.04 per credit

including USD 3.99 base fee

  • All of Starter
  • Export to Google Sheets, Notion and Airtable
  • Schedule Scraping Jobs

* All unused credits expire at the end of the billing period.

Expert

For users who need ultimate flexibility and premium proxy support.

$25.99/month

400 credits* for USD 0.04 per credit

including USD 9.99 base fee

  • All of Pro
  • API Access for full flexibility (see the sketch below)
  • Premium Proxy Support

* All unused credits expire at the end of the billing period.
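With the Expert plan's API access, scrape jobs can be triggered programmatically. The endpoint, payload shape, and authentication below are hypothetical placeholders, purely to show what such an integration typically looks like; the real details belong in the product's API documentation:

```python
import requests

# Hypothetical endpoint, payload, and auth scheme: illustrative only, not
# the product's documented API.
API_URL = "https://api.example.com/v1/scrape"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                       # placeholder credential

payload = {
    "url": "https://www.upwork.com/nx/search/jobs/?q=web%20scraping",
    "fields": ["title", "description", "hourly_price_range_start", "skills"],
    "use_proxy": True,
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```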

How many credits do I need?

Use our credit calculator to estimate how many credits you need to scrape a page.

Credits: 400 (slider range 400 to 10,000)

Action                                     | Credit cost per action | Total pages (at 400 credits)
Scrape simple page                         | 1                      | 400
Scrape page with proxy                     | 2                      | 200
Extract data from scraped page             | 4                      | 100
Scrape & Extract data from simple page     | 5                      | 80
Scrape & Extract data from page with proxy | 6                      | 67
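The table follows directly from the per-action costs: pages covered = credits ÷ credits per page, rounded to the nearest page. A small sketch of that math, using the action costs above:

```python
# Credit math from the table above: pages = credits / cost per page.
COST = {"scrape": 1, "scrape_with_proxy": 2, "extract": 4}

def pages_covered(credits: int, actions: list[str]) -> int:
    cost_per_page = sum(COST[a] for a in actions)
    return round(credits / cost_per_page)

print(pages_covered(400, ["scrape"]))                        # 400
print(pages_covered(400, ["scrape", "extract"]))             # 80
print(pages_covered(400, ["scrape_with_proxy", "extract"]))  # 67 (400/6, rounded)
```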