0.0
- 20.0
- 50.0
{"invitesSent" => 3, "totalInvitedToInterview" => 1, "totalHired" => 0, "totalUnansweredInvites" => 1, "totalOffered" => 1, "totalRecommended" => 6}
I am seeking a skilled full stack developer proficient in n8n to collaborate with and teach a non-technical person on a project, providing guidance and instruction along the way. It is a simple project: a multi-step prompt workflow built in n8n with outputs written to Google Sheets.
This role involves hands-on co-working, allowing for an immersive learning experience as we tackle real tasks together. The ideal candidate is patient, communicative, and enthusiastic about teaching. If you have a passion for full stack development and enjoy mentoring others, I would love to hear from you! It would also be great to find a knowledgeable expert who can become our go-to resource for future questions.
0.0
- 25.0
- 110.0
{"invitesSent" => 1, "totalInvitedToInterview" => 0, "totalHired" => 1, "totalUnansweredInvites" => 0, "totalOffered" => 0, "totalRecommended" => 6}
We are seeking an experienced Esri technical consultant to assist with the development of a multi-disaster risk hub using Esri technologies. The role involves leveraging the Living Atlas and integrating external data layers to enhance our disaster response capabilities. Proven experience in Esri applications and a strong understanding of GIS principles are essential. If you have a track record of developing innovative solutions and can work collaboratively in a fast-paced environment, we would love to hear from you.
0.0
- 30.0
- 50.0
{"invitesSent" => 53, "totalInvitedToInterview" => 21, "totalHired" => 0, "totalUnansweredInvites" => 23, "totalOffered" => 0, "totalRecommended" => 9}
The Opportunity
Join a profitable SaaS company that powers a software safety system for major business chains. We're looking for an engineer who wants to own and evolve a fleet of 500+ scrapers that form the backbone of our business.
This isn't just another "fix broken scrapers" role. We need someone who will take true ownership of our data pipeline and push it forward.
What You'll Own
- Scraper Fleet Health: Monitor, fix, and deploy 500+ scrapers across different jurisdictions
- Data Quality Systems: Build tools and automation to catch issues before customers do
- Infrastructure Evolution: Modernize our scraping infrastructure with headless browsers and cloud services
- Automation & AI: Leverage LLMs and smart algorithms to reduce manual data work
- Process Improvement: Design systems that let the business run without constant supervision
The Technical Challenge
Our 500+ scrapers face every challenge in the book:
- PDF parsing - Many jurisdictions publish data as PDFs (often poorly formatted)
- Heavy JavaScript sites - Modern frameworks that require full browser execution
- Anti-scraping measures - Cloudflare, rate limits, CAPTCHAs
- Monolithic job runner - Current Celery setup needs breaking into manageable pieces
- Scale issues - One giant job processing everything is fragile
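To make the last two points concrete, here is a minimal sketch (our assumption, not the company's actual architecture) of running each scraper as an independent job so a single failure cannot take down the whole batch:

```python
from concurrent.futures import ThreadPoolExecutor

def run_scraper(jurisdiction):
    # Hypothetical per-jurisdiction scraper; raises on failure.
    if jurisdiction == "bad":
        raise RuntimeError("site layout changed")
    return {"jurisdiction": jurisdiction, "rows": 10}

def run_fleet(jurisdictions, max_workers=8):
    """Run each scraper as an independent job and collect failures
    separately, instead of one giant job that dies on the first error."""
    results, failures = [], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_scraper, j): j for j in jurisdictions}
        for fut, jurisdiction in futures.items():
            try:
                results.append(fut.result())
            except Exception as exc:
                failures.append((jurisdiction, str(exc)))
    return results, failures
```

In a real Celery setup the same idea would mean one task per scraper (fanned out with a group) rather than one monolithic task.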
You Should Have
- PDF extraction mastery - You've wrestled with PDFs and won (PyPDF2, pdfplumber, Tabula)
- Selenium expertise - Comfortable automating complex browser interactions
- JavaScript-heavy site experience - You know when BeautifulSoup won't cut it
- Job queue architecture - Experience breaking down monolithic processes (Celery, RQ, or similar)
- Python/Django proficiency
- DevOps mindset - Comfortable with deployment, monitoring, database operations
Bonus Points For
- Experience with headless browser services (Browserless, ScrapingBee, etc.)
- OCR experience for image-based PDFs
- Building resilient distributed systems
- Using AI/LLMs for data extraction and matching
- Experience with data quality monitoring systems
What Makes You Perfect
- You see 500 scrapers and think "system" not "list"
- You're bothered by manual processes that could be automated
- You take pride in data quality and uptime metrics
- You can work independently with minimal supervision
- You're excited about using AI to solve real problems
- You understand that data quality directly impacts customer satisfaction
Specific Projects You'll Tackle
- Redesign job processing to eliminate single point of failure
- Build staging environment with automated scraper testing
- Create tools that help non-technical staff handle data quality tasks
- Implement smart canary monitoring system
- Develop AI-powered location matching to reduce manual tagging
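As a rough illustration of the canary monitoring project above, a minimal anomaly check might compare today's row count against a scraper's recent average (the 50% tolerance is an arbitrary assumption, not a stated requirement):

```python
from statistics import mean

def canary_check(history, today, tolerance=0.5):
    """Return True if today's row count deviates from the recent
    average by more than `tolerance` (as a fraction of the baseline)."""
    if not history:
        return False  # no baseline yet, nothing to compare against
    baseline = mean(history)
    if baseline == 0:
        return today != 0
    return abs(today - baseline) / baseline > tolerance
```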
Work Style
- Remote position (geographic flexibility)
- ~20-30 hours/week to start, potential for more
- Async-first communication
- Direct collaboration with technical founder
- Real autonomy to improve systems
- Opportunity to work with offshore data quality team
Why This Role Matters
Our customers rely on this data to keep their restaurants compliant and safe. When scrapers break or data is wrong, real businesses suffer. You'll be the guardian of this critical data pipeline.
Compensation
Competitive hourly rate based on experience and location. This is a contractor position with potential for increased hours based on performance.
To Apply
Send:
- Your GitHub/portfolio showcasing relevant work
- A brief note on your most challenging web scraping project (especially if it involved PDFs or JavaScript-heavy sites)
- Your thoughts on how you'd approach managing 500 scrapers at scale
- Your hourly rate expectations
We're looking to move quickly with the right candidate who can hit the ground running.
0.0
- 25.0
- 47.0
{"invitesSent" => 26, "totalInvitedToInterview" => 22, "totalHired" => 0, "totalUnansweredInvites" => 3, "totalOffered" => 0, "totalRecommended" => 5}
We're an early-stage startup looking to build an MVP of a web platform that assists lawyers in reviewing and summarizing legal contracts using AI.
The goal is to speed up contract review for law firms and legal departments using generative AI to:
Highlight key clauses
Detect risky terms
Generate plain-English summaries
Suggest edits (optional for Phase 2)
We're open to tech stack recommendations but are leaning toward a full-stack JavaScript solution (Next.js, Node.js, or similar). Integration with OpenAI API or similar LLMs will be needed for the AI portion.
Key Deliverables (MVP):
Upload and parse contract PDFs
LLM-powered clause extraction and risk flagging
Summary generation (by section or full document)
Clean and simple UI for lawyers (minimalist design)
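To illustrate the clause-extraction deliverable, here is a hedged sketch of the prompt-assembly and response-parsing glue around an LLM call (the prompt wording and JSON shape are our assumptions, not a spec):

```python
import json

def build_clause_prompt(contract_text: str) -> str:
    """Assemble the extraction prompt sent to the LLM (the wording is
    illustrative, not a vetted legal prompt)."""
    return (
        "You are a contract analyst. For the contract below, return JSON of "
        'the form {"clauses": [{"type": "...", "text": "...", '
        '"risk": "low|medium|high"}]}.\n\nContract:\n' + contract_text
    )

def parse_clause_response(raw: str) -> list:
    """Parse the model's JSON reply, tolerating a markdown code fence."""
    raw = raw.strip()
    if raw.startswith("```"):
        raw = raw.removeprefix("```json").removeprefix("```")
        raw = raw.removesuffix("```").strip()
    return json.loads(raw)["clauses"]
```

The actual call to the OpenAI (or similar) API would sit between these two helpers.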
Who You Are:
Open to agencies
Experienced with AI/NLP projects using LLMs (OpenAI, Claude, etc.)
Comfortable with building MVPs and making smart decisions under uncertainty
Ideally experienced with legal tech or similar document-heavy domains
Available to start immediately and build quickly
Questions for Applicants:
What similar projects have you worked on using AI or document analysis?
How would you architect this to balance performance, scalability, and rapid delivery?
What LLM or AI tooling would you suggest for this use case?
We are self-funded but serious – this MVP is part of a broader go-to-market strategy with beta clients already lined up.
Bonus: If you've built tools for finance, healthcare, or compliance, that’s a strong plus.
Looking forward to collaborating with someone who can move fast, think strategically, and write clean, testable code.
0.0
- 35.0
- 45.0
{"invitesSent" => 8, "totalInvitedToInterview" => 16, "totalHired" => 0, "totalUnansweredInvites" => 4, "totalOffered" => 0, "totalRecommended" => 0}
Overview
We are working with a fast-growing AI-powered SaaS company that is building a next-generation platform designed to transform workflows for enterprise customers. The platform leverages advanced AI tooling to automate and augment critical processes, providing actionable insights and improving operational efficiency.
The product is currently in Beta, with a full launch targeted later this year. A newly formed engineering team will address existing architectural gaps and implement new feature sets. This role offers a rare opportunity to influence both technical direction and product success from an early stage.
Engagement Details
- Start Date: Immediate
- Initial Term: Up to 12 months with potential extension (6-month minimum commitment, barring performance issues)
- Location: Fully remote/Nearshore to the UK preferred (+/- 2-3 hours)
- Working Hours: 8 hours/day, aligned with UK business hours
About the Project
The platform is an AI SaaS product built on a modern stack:
- Backend: Python (FastAPI), Node.js
- Frontend: React, TypeScript
- Infra/CI/CD: AWS, Kubernetes, Terraform, GitHub Actions
- Data: PostgreSQL (RDBMS), LLM integrations (OpenAI API, Langfuse)
This is a full lifecycle role—ranging from architecture discussions to feature delivery, where you’ll collaborate with product, design, and QA teams. You’ll also leverage modern AI developer tools (e.g., GitHub Copilot, LangChain) to accelerate delivery and improve quality.
*Must-Have Experience & Skills*
Technical / Product
- 5+ years experience as a fullstack engineer in production environments
- Proficiency with Python (FastAPI) and Node.js for backend development
- Strong frontend skills with React (TypeScript)
- Solid database experience (PostgreSQL or similar RDBMS)
- Familiarity with microservices architecture and modern design patterns
- Experience with AWS, Kubernetes, and CI/CD pipelines (e.g., GitHub Actions)
- Exposure to AI/ML or LLM-based systems (e.g., OpenAI API, LangChain) is a plus
Business & Delivery
- Experience in SaaS environments, ideally B2B
- Previous experience working in hybrid or distributed agile teams
- Ability to work autonomously and take ownership of end-to-end delivery
Soft Skills
- Clear communication and ability to collaborate in remote-first teams
- Proactive problem solver with a pragmatic, delivery-focused mindset
- Adaptable and resilient, comfortable in fast-paced and evolving environments
- Curious and eager to learn new tools and approaches (especially AI-driven)
Responsibilities
- Develop and maintain backend services (Python/Node.js) and frontend components (React)
- Own features from estimation through to production and QA sign-off
- Contribute to architecture and technical direction decisions
- Collaborate with product, QA, and DevOps for smooth, high-quality delivery
- Ensure robust testing, observability, and security practices are in place
- Leverage AI development tools (e.g., GitHub Copilot, LangChain) to improve velocity and reduce boilerplate
- Participate in agile ceremonies: standups, sprint planning, reviews, and retrospectives
Success Criteria
- Rapid onboarding within 2–3 weeks
- Delivery of clean, tested, and documented code aligned with agreed standards
- Effective collaboration across cross-functional teams
- Improved performance, resilience, and observability of platform components
- Demonstrated use of AI/ML tools to accelerate and improve development outcomes
Working Practices
Methodology: Agile (Scrum/Kanban hybrid)
Tooling: Jira, Slack, GitHub, CI/CD pipelines
Ceremonies: Daily standups, sprint planning, retrospectives, and demos
Additional Information
Onboarding: Includes intro sessions with team leads across engineering, product, and delivery
All work is fully remote. Travel (if ever required) will be covered by the client.
Work eligibility excludes candidates in regions subject to UK financial sanctions.
Assessment Process
- CV & GitHub Review – Validate technical depth and SaaS experience
- Screening Call (30 mins) – Initial technical and cultural assessment
- Technical Interview (1 hr) – Pair-programming and fullstack problem-solving
- Final Interview (30 mins) – Focus on collaboration, mindset, and adaptability
Next Steps
IMPORTANT NOTES
1) Please submit candidates with:
- Updated CV with relevant fullstack experience (especially early-stage or AI-focused)
- Confirmation of availability, location, and expected rate
- GitHub or code samples if available
2) CVs should summarise your career, but what matters most to us is a clear view of project contributions: roles held, duration on each project, one line about the project, responsibilities and achievements in the role, the team you worked with, and the tech stack used.
***Please do not apply without a CV.***
3) We assess technical skills, so please apply only if this is a good fit.
4) Please don't try to sell us your agency; we are only interested in assessing individual talent (although we are happy to assess talent from an agency).
5) We cannot take a phone call to discuss the brief with every applicant, so we need to see a CV (of the applicant being proposed for the team).
6) Please don't apply with a CV for a candidate and then switch them out for someone else during the process.
0.0
- 25.0
- 65.0
{"invitesSent" => 0, "totalInvitedToInterview" => 8, "totalHired" => 0, "totalUnansweredInvites" => 0, "totalOffered" => 0, "totalRecommended" => 0}
Project Overview
We are building a real-time smart timeline for creative production teams. The application tracks crew members, live shot status, and AI-assisted hand-offs during a production day, enabling producers to spot timeline delays, manually override schedules, and ensure projects wrap on time.
A previous developer has already begun a "Smart View" using Retool and Node.js. We are looking for a senior engineer to take ownership of the project. Your core responsibilities will be to:
* Audit the existing codebase and architecture.
* Recommend the best path forward: extend the current Retool build or rebuild key components in React.
* Deliver the complete Phase 1 feature set within 5–6 weeks for a scheduled live case study.
Tech Stack
* Backend: Node.js v18+, Express
* Database: PostgreSQL with Drizzle (or Prisma)
* Frontend: Retool (preferred) or React + Vite (if migration is faster)
* Real-Time: Socket.io or Supabase Channels
* Infrastructure: Docker Compose for local development, GitHub Actions for CI/CD
* Storage: Any S3-compatible object storage
Phase 1 Deliverables
You will be provided with detailed acceptance criteria at kickoff. The high-level goals are:
* Schedule Upload
* Done when: A CSV/XLSX file can be imported to create demo users and schedule blocks.
* Roles & Permissions (RBAC)
* Done when: Producer, Crew, and Client roles are defined and enforced in both the UI and API middleware.
* Timeline UI
* Done when: The UI displays "Now/Next" lanes, a live clock, and a "drift" indicator badge for delays.
* Status Engine
* Done when: Block statuses (pending → in_progress → needs_approval) update in real-time for all connected clients via sockets.
* Manual Override
* Done when: A "Make Next" modal allows a Producer to reorder a block and view the time impact (drift delta).
* AI Webhook Integration
* Done when: A secure /ai-callback endpoint can receive a payload, attach a thumbnail to a block, and update its status.
* Admin Kill Switch
* Done when: A Producer can force any automated block into a fully manual state.
* Audit & Wrap Report
* Done when: A /wrap-report endpoint returns a JSON object detailing timeline drift and approval latencies.
* Deployment
* Done when: The application runs reliably via Docker locally and is auto-deployed to a staging URL on push to the main branch.
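As a sketch of how the drift indicator and the Manual Override time impact described above could be computed (the field names `id`, `order`, `status`, and `duration_min` are illustrative assumptions, not the project's actual schema):

```python
from datetime import datetime

def drift_minutes(scheduled_start: datetime, actual_start: datetime) -> float:
    """Positive = the block is running late; negative = ahead of schedule."""
    return (actual_start - scheduled_start).total_seconds() / 60.0

def make_next_delta(blocks: list, block_id: str) -> dict:
    """Estimate the time impact of promoting one block to run next:
    every pending block it jumps ahead of is pushed back by the
    promoted block's duration."""
    promoted = next(b for b in blocks if b["id"] == block_id)
    pushed = [
        b for b in blocks
        if b["status"] == "pending" and b["order"] < promoted["order"]
    ]
    return {b["id"]: promoted["duration_min"] for b in pushed}
```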
Initial Milestone: Audit & Plan (First 1-2 Days)
Your first deliverable will be a concise Audit Memo outlining your findings and proposed plan. This memo should include:
1. Your final recommendation: Extend vs. Rebuild.
2. A detailed hour estimate for each deliverable listed above.
We will review and approve this plan together before full development begins. This initial audit and planning work is considered the first paid task of the project.
Required Skills & Experience
* Proven experience building complex dashboards in Retool or React.
* Deep expertise with real-time web applications using WebSockets (Socket.io, Supabase Channels, etc.).
* Strong background in Node.js and implementing role-based access control (RBAC).
* Proficiency with Docker Compose for local development and CI/CD pipelines (GitHub Actions).
* Excellent asynchronous communication skills (we live in Slack and Loom).
* Dealbreaker: You must have a link to a live, real-time dashboard in your portfolio that we can review.
Bonus Skills
* Experience with Supabase (especially Row-Level Security).
* Proficiency with Tailwind CSS, particularly for themes like dark mode.
* Familiarity with building or integrating AI webhook pipelines.
How to Apply
To be considered, please provide the following in your proposal:
1. A link to a live dashboard application (or a Retool JSON export) that you personally built.
2. Your answers to these two questions:
* Q1: Briefly describe a custom component, module, or transformer you built in Retool.
* Q2: In two sentences, how would you approach calculating and displaying the "drift delta" for the Manual Override feature?
3. Your detailed hour estimates for each of the Phase 1 Deliverables.
4. Your hourly rate.
5. Confirmation of your immediate availability to start.
0.0
- 30.0
- 50.0
{"invitesSent" => 0, "totalInvitedToInterview" => 2, "totalHired" => 0, "totalUnansweredInvites" => 0, "totalOffered" => 0, "totalRecommended" => 1}
We have an existing website that is already live and public. The map/location feature needs to be updated to use Google Maps and show end users which locations are near them.
0.0
- 72.0
- 150.0
{"invitesSent" => 0, "totalInvitedToInterview" => 0, "totalHired" => 0, "totalUnansweredInvites" => 0, "totalOffered" => 1, "totalRecommended" => 11}
We are seeking an experienced fractional CTO to lead our SaaS project. The ideal candidate will guide our technical strategy, oversee product development, and ensure we meet our goals efficiently. You will collaborate with developers and stakeholders, providing insights on architecture, technology stack, and best practices. If you have a proven track record in SaaS environments and are passionate about innovation, we want to hear from you!
0.0
- 70.0
- 150.0
{"invitesSent" => 5, "totalInvitedToInterview" => 6, "totalHired" => 1, "totalUnansweredInvites" => 2, "totalOffered" => 0, "totalRecommended" => 4}
We’re looking for a skilled full stack developer to build new features for a SaaS platform that integrates with Shopify. You should be proficient in Next.js (App Router), TypeScript, Tailwind CSS, MongoDB, Vercel, and AWS.
0.0
- 20.0
- 50.0
{"invitesSent" => 0, "totalInvitedToInterview" => 0, "totalHired" => 0, "totalUnansweredInvites" => 0, "totalOffered" => 0, "totalRecommended" => 2}
⚠️ If you are not comfortable using AI tools in your daily work, this job is not for you.
At IT Luxuoso, we believe local ideas deserve global impact 🌍.
We bring this to life by creating emotionally resonant 💫, high-end online shopping experiences that captivate users and elevate brands 🛍️. Through the design and launch of multilingual, strategically crafted websites 💻, we help ideas grow beyond borders.
"Local ideas, global reach." 🚀
- IT Luxuoso's founder
---
## 🎯 Mission Overview
We are looking for a backend developer with proven Supabase experience, capable of building and populating a structured database using AI-assisted methods.
The objective is to create a centralized and scalable backend structure for a high-end performance tuning platform, similar to the one at:
🔗 [https://www.br-performance.fr/brp-paris/reprogrammation/1-voitures/](https://www.br-performance.fr/brp-paris/reprogrammation/1-voitures/)
This site contains all the key technical data we want to replicate: vehicle brands (car, motorbike, jet ski, boat), model names, engine types, power stages (Stage 1, Stage 2...), torque, horsepower, manufacturing year, and prices.
🚨 We are not sure whether scraping is officially allowed on this website. However, if legally and technically feasible, scraping is strongly recommended. Otherwise, intelligent replication or AI-assisted structuring of the content is expected.
---
## ✅ Your Responsibilities
- Analyze the reference site and assess the total scope of data (brands, models, variants, etc.).
- Propose and implement a database schema, optimized for Supabase.
- Populate the database using scraping (if viable) or smart AI-assisted data structuring methods.
- Provide a cost estimate for the Supabase infrastructure.
- IMPORTANT: We have scoped this job around Supabase, but since we are not backend experts, you are expected to compare Supabase with any better-suited alternative (in terms of features and cost) and justify your recommendation.
- Implement a simple API endpoint (REST or GraphQL) to allow the frontend to query the database.
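As an illustration of the data-modeling responsibility above, a normalization step might map one scraped listing into rows for a brand/model/stage schema. All field and column names here are assumptions for illustration, not the reference site's actual structure:

```python
def normalize_listing(raw: dict) -> dict:
    """Map one scraped listing into rows for a hypothetical
    brand / model / power-stage schema."""
    return {
        "brand": {"name": raw["brand"], "vehicle_type": raw["type"]},
        "model": {"name": raw["model"], "year_range": raw["years"]},
        "stage": {
            "label": raw["stage"],
            "stock_hp": raw["hp_stock"],
            "tuned_hp": raw["hp_tuned"],
            "stock_nm": raw["nm_stock"],
            "tuned_nm": raw["nm_tuned"],
            "price_eur": raw["price"],
        },
    }
```

Rows shaped like this could then be bulk-inserted into Supabase (or whatever alternative is recommended) via its client library.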
---
## 🛠️ Required Skills
- Supabase expert (or strong experience with similar platforms)
- Backend architecture and data modeling
- Python, Node.js, or equivalent (for automation/scripting)
- Scraping tools (BeautifulSoup, Puppeteer, etc.)
- AI tools: GPT APIs, Claude, LangChain, or equivalent for data extraction/enrichment
- API development (REST/GraphQL)
- Fluent in English (French is a plus)
---
## 📋 What We Expect in Your Proposal
To be considered, your proposal must include:
1. A short analysis of the volume and type of data you'd include based on the reference site.
2. A concise outline of the process and steps you would follow to build and populate the database.
3. A brief overview of the AI tools and techniques you’d use to increase speed and accuracy.
4. A comparison of Supabase vs. your proposed alternative (if applicable), based on cost and scalability.
---
## 📈 Project Scope Summary
Key Tasks
- Data audit from a public reference website
- Database modeling and population
- Use of automation and AI to reduce manual input
- Supabase setup (or suggested alternative)
- API creation for data access
- Infrastructure cost analysis
Deliverables
- Pre-filled database (Supabase or recommended alternative)
- API endpoint (documentation included)
- Cost breakdown and platform recommendation (if applicable)