Senior Quant Developer for CS2 Prop Predictive Modeling (Python / Monte Carlo / Sports Analytics)
I am looking for a specialist Quantitative Developer or Data Scientist to build a bespoke betting algorithm for Counter-Strike 2 (CS2) player kill props. The goal is not a public-facing website. The goal is a high-performance local application (or dashboard) that scrapes statistical data, processes it through a predictive model, and outputs "Fair Value" probabilities for player kill counts.

Core Responsibilities

1. Data Acquisition (Advanced Scraping)
- Build a robust, stealth scraper for HLTV.org to pull player logs, match histories, and economy data.
- Must handle: Cloudflare protection (turnstile/captchas) via headless browsers (Puppeteer/Playwright) or high-quality residential proxies.
- Entity Resolution: Implement fuzzy matching to map player names/aliases correctly across different data points (e.g., matching "s1mple" to "Oleksandr Kostyliev").

2. The Mathematical Engine (The "Brain")
- Methodology: Preference for Monte Carlo simulation (simulating the match round by round 10k+ times) or Poisson regression adjusted for opponent pace.
- Normalization: You must normalize historical CS:GO data (MR15) to fit the current CS2 (MR12) economy and match length.
- Covariance: The model must account for "kill cannibalization" (if one teammate dominates, the others must regress).

3. The Veto Logic (Dual-Mode Projection)
- Mode A: Pre-Veto (Weighted Probability): The system must predict the likelihood of every map being picked based on historical team ban/pick rates. The output must be a weighted average of the player's performance across the likely map pool.
- Mode B: Post-Veto (Manual Override): The UI must allow me to manually select the maps once they are announced. The model must instantly re-run the simulation using only the active maps to provide a "sharp" update.

4. User Interface
- A clean, functional local dashboard (using Streamlit, Dash, or a standard GUI).
- Input: select a matchup. Output: a player list with "Fair Value" probabilities for Over/Under kills (e.g., "s1mple Over 16.5 Kills = 58% Prob").

Technical Requirements
- Language: Python (pandas, NumPy, scikit-learn, SciPy).
- Database: PostgreSQL or SQLite for historical data storage.
- Math: Strong understanding of probability theory, Expected Value (EV), and simulation logic.
- Domain Knowledge: You must understand CS2 nuances (economy management, CT vs. T side bias, map variance).

Deliverables
- Full Source Code: Documented Python scripts for the scraper, database manager, and modeling engine.
- Backtesting Report: A walk-forward analysis showing how the model would have performed over the last 3 months (out-of-sample testing).
- The Application: A functioning tool I can run locally to get daily projections.
- Maintenance Guide: A guide on how to update the scraper selectors if HLTV changes their UI.

CANDIDATES WHO ARE FAMILIAR WITH COUNTER-STRIKE 2 + SPORTS BETTING WILL BE PRIORITIZED, BUT IT IS NOT A REQUIREMENT!

Yes, most of this job description is AI-written, because I do not have the technical prowess to describe what I'm thinking of as eloquently as I would like. However, I do know enough to sniff out BS, and if you are not qualified for this I will find out in fewer than five questions. If you have a basic understanding of what my post is describing but need more information, feel free to reach out.

Initial budget is upwards of $5,000 for project completion; the budget is flexible and can be increased with added scope.

I am open to suggestions if you believe you have a better methodology than what I am describing; nothing other than the base description of the model is set in stone.

To make the ask more concrete, a few rough illustrative sketches of what I have in mind follow below. They are only meant to convey the idea, not to serve as a spec.
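Sketch for the scraping layer: a bare-bones Playwright fetch of a page's HTML. On its own this will not defeat Cloudflare/turnstile; stealth settings, residential proxies, and polite rate limiting would have to be layered on top, and the URL is a placeholder.

```python
# Minimal Playwright skeleton for fetching a page's HTML.
# NOTE: illustration only; by itself this will not bypass Cloudflare/turnstile.
# Proxy, stealth, and retry handling would sit around this function.
from playwright.sync_api import sync_playwright

def fetch_page_html(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="domcontentloaded")
        html = page.content()
        browser.close()
    return html

# Placeholder usage:
# html = fetch_page_html("https://www.hltv.org/stats/players")
```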
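Sketch for entity resolution: mapping scraped aliases to canonical player names with standard-library fuzzy matching (difflib). The roster dictionary and the 0.8 cutoff are placeholders; a real build would load the roster from the database and could use a dedicated fuzzy-matching library instead.

```python
# Map a scraped name/alias to a canonical player using exact lookup first,
# then a fuzzy fallback. Roster entries and cutoff are placeholders.
from difflib import get_close_matches

CANONICAL = {
    "s1mple": "Oleksandr Kostyliev",
    "zywoo": "Mathieu Herbaut",
}

def resolve_player(raw_name: str, cutoff: float = 0.8) -> str | None:
    """Return the canonical player for an alias, or None if no confident match."""
    key = raw_name.strip().lower()
    if key in CANONICAL:                               # exact alias hit
        return CANONICAL[key]
    # fuzzy fallback: closest known alias above the similarity cutoff
    matches = get_close_matches(key, CANONICAL.keys(), n=1, cutoff=cutoff)
    return CANONICAL[matches[0]] if matches else None

print(resolve_player("S1mple "))   # -> "Oleksandr Kostyliev" (exact after cleanup)
print(resolve_player("s1mpIe"))    # typo variant still resolves via fuzzy match
```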
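Sketch for the mathematical engine: a round-by-round MR12 Monte Carlo for one player's kill total, with a crude stand-in for kill cannibalization (fewer kills available in lost rounds). Every rate here is an invented placeholder; the real model would estimate per-round kill rates and round-win probabilities from normalized HLTV data and adjust them per map and opponent.

```python
# Round-by-round MR12 Monte Carlo (first to 13, regulation only).
# All rates are placeholders, not fitted values.
import numpy as np

rng = np.random.default_rng(42)

def simulate_match_kills(kills_per_round_mean: float,
                         team_round_win_prob: float,
                         n_sims: int = 10_000) -> np.ndarray:
    """Simulate one player's total kills across n_sims MR12 matches."""
    totals = np.zeros(n_sims, dtype=int)
    for i in range(n_sims):
        rounds_won = rounds_lost = kills = 0
        while rounds_won < 13 and rounds_lost < 13:    # no overtime in this sketch
            won = rng.random() < team_round_win_prob
            # crude cannibalization/pace proxy: fewer kills available in lost rounds
            lam = kills_per_round_mean * (1.0 if won else 0.85)
            kills += rng.poisson(lam)
            rounds_won += won
            rounds_lost += not won
        totals[i] = kills
    return totals

sims = simulate_match_kills(kills_per_round_mean=0.75, team_round_win_prob=0.55)
line = 16.5
print(f"P(over {line}) ~= {(sims > line).mean():.1%}")   # the "Fair Value" probability
```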
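Sketch for the dual-mode veto logic: Mode A weights per-map Over probabilities by estimated pick likelihood; Mode B renormalizes over only the announced maps. All numbers are placeholders, and the series format (Bo1 vs. Bo3) is deliberately ignored to keep the idea simple.

```python
# Mode A (pre-veto) and Mode B (post-veto) weighting. Placeholder numbers only.
map_pick_prob = {          # estimated from historical team ban/pick rates
    "Mirage":  0.30,
    "Ancient": 0.25,
    "Nuke":    0.20,
    "Inferno": 0.15,
    "Anubis":  0.10,
}

p_over_by_map = {          # per-map P(player clears 16.5 kills), from the simulator
    "Mirage":  0.61,
    "Ancient": 0.57,
    "Nuke":    0.49,
    "Inferno": 0.55,
    "Anubis":  0.60,
}

# Mode A: probability-weighted average across the likely map pool
pre_veto = sum(map_pick_prob[m] * p_over_by_map[m] for m in map_pick_prob)
print(f"Pre-veto P(over 16.5):  {pre_veto:.1%}")

# Mode B: once maps are announced, renormalize the weights over the active maps
active = ["Mirage", "Nuke"]
w = sum(map_pick_prob[m] for m in active)
post_veto = sum(map_pick_prob[m] / w * p_over_by_map[m] for m in active)
print(f"Post-veto P(over 16.5): {post_veto:.1%}")
```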
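Sketch for the dashboard: a minimal Streamlit flow from matchup selection to Fair Value output. run_projection() is a stub standing in for the real modeling engine, and the matchup/map lists are placeholders.

```python
# Minimal Streamlit dashboard skeleton: matchup in, Fair Value out.
import streamlit as st

def run_projection(matchup: str, maps: list[str]) -> list[dict]:
    # Stub: the real engine would query the database and run the simulation.
    return [{"player": "s1mple", "line": 16.5, "p_over": 0.58}]

st.title("CS2 Kill Prop Projections")
matchup = st.selectbox("Matchup", ["NAVI vs Vitality", "Spirit vs FaZe"])
maps = st.multiselect("Maps (post-veto override)",
                      ["Mirage", "Ancient", "Nuke", "Inferno", "Anubis"])

if st.button("Run projection"):
    for row in run_projection(matchup, maps):
        st.write(f'{row["player"]} over {row["line"]} kills: {row["p_over"]:.0%}')
```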
This could be a long-term opportunity if scope expands, and there is plenty of money to put toward these sorts of problems. If you believe you are qualified, please reach out ASAP.