Hi, I'm Druce

I bridge the gap between business decision-making and AI engineering, and help ship production AI systems for investment management. I bring 30 years of experience as a front-office quantitative analyst, hedge fund CTO, and now hands-on AI builder, including work at Tiger Management and Caxton and as CTO of a $2B Tiger Cub fund. I help hedge funds and investment firms plan and implement AI strategy, translating modern AI capabilities into production systems that improve execution velocity, decision quality, and operational efficiency.

AI projects often struggle, and many fail outright. Common reasons include:

  • Lack of business-centric framing and success metrics.
  • Lack of realistic evals against business objectives.
  • Misaligned expectations.
  • Lack of end user or executive buy-in.
  • Governance, risk, and compliance blind spots.
  • Automating too little or too much, with poorly tuned human-in-the-loop dynamics.
  • Agent POCs stuck in the 'messy middle' where they work 80% of the time, but not in the high 90s required for production.

I can help avoid those pitfalls.

Recent successes include:

  • Designing and deploying RAG-based research summarization, text-to-SQL on CRM data, and agentic workflows for sell-side and buy-side teams.
  • Architecting agent automation systems using OpenAI Agents SDK and LangGraph.
  • Building traditional ML models for alternative data signals.

As CTO of Hoplite Capital Management (2008–2010), I was the first internal tech hire at a 25-person, $2B AUM long/short equity fund. I built a best-practice end-to-end platform, including OMS (Eze Castle), real-time P&L and portfolio analytics, FIX trading, compliance, post-trade workflows, and disaster recovery, supporting daily decision-making for PMs and analysts.

Earlier, I led hedge fund implementations at Charles River Development, and held front-office roles at Caxton and Tiger Management. I began my career at Salomon Brothers, doing economic forecasting and building portfolio optimization systems using multi-factor risk models. Awarded the CFA charter in 2009 (not currently active).

Technical focus: Python, AI-assisted full SDLC dev practices, agentic systems, LangChain/LangGraph, OpenAI Agents SDK, RAG, robust data pipelines.

Contact

Learn more or reach out. I'd love to hear from you!

Personal Research & Dev Projects

A selection of published and unpublished experiments in machine learning, data visualization, finance, and web development.

Agentic Newsletter Generation System

Autonomous agent system that uses the OpenAI Agents SDK to generate an AI newsletter from 500+ daily articles.

The system orchestrates a 9-step pipeline, including web scraping, initial filtering, summarizing, deduping, reranking, semantic clustering, and critic-optimizer iterative refinement, to transform raw news feeds into polished editorial content.

Processes ~650 articles into 30+ curated stories, producing a 1,200+ word draft with 7-15 thematic sections, ready for human editing, in about 30 minutes.

Tech Stack: Python • OpenAI Agents SDK • GPT-4o (logprobs) / GPT-5 • Langfuse prompt repository/observability • Playwright web scraping • Pandas data wrangling • HDBSCAN clustering • UMAP dimensionality reduction • Pydantic structured output • SQLite database
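
For a flavor of the semantic clustering step, here is a minimal sketch pairing UMAP dimensionality reduction with HDBSCAN. It assumes each article already has an embedding vector; this is illustrative, not the production code.

    import numpy as np
    import umap      # umap-learn
    import hdbscan

    def cluster_articles(embeddings: np.ndarray) -> np.ndarray:
        """Group article embeddings into story clusters; label -1 means noise."""
        # Project high-dimensional embeddings down so density-based
        # clustering behaves well.
        reduced = umap.UMAP(n_components=5, metric="cosine").fit_transform(embeddings)
        # HDBSCAN finds variable-density clusters without a preset k,
        # which suits a news flow where the number of themes varies daily.
        return hdbscan.HDBSCAN(min_cluster_size=3).fit_predict(reduced)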

Safe Withdrawal Retirement Calculator

Beyond the 4% Rule: Flexible Withdrawals Under Risk Aversion

Bill Bengen's famous 4% rule provides a simple, effective retirement strategy: invest in a 50/50 stock-bond mix, withdraw 4% of your initial portfolio in the first year, and increase that amount annually for inflation. Historically, that strategy never ran out of money over any 30-year period.

This represents an infinite risk aversion solution—a hard constraint that never allows withdrawals to decrease or the portfolio to run dry.

But what if we relaxed that constraint? What flexible rules would emerge at different levels of risk tolerance? In a March 2021 article for Advisor Perspectives, I maximized certainty-equivalent spending under constant relative risk aversion (CRRA)—essentially, actual spending discounted by its volatility based on your risk preference. Using gradient-free Python optimization frameworks on historical data, I generated rules of the form:

Withdraw = fixed_pct + (variable_pct × portfolio_value), subject to a floor_pct minimum

As risk aversion increases, stock allocation decreases and the withdrawal rule shifts from variable to fixed. This framework lets retirees choose their preferred point on the risk-return spectrum, trading some risk of modest future declines in spending for higher current and expected lifetime spending.
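
A minimal sketch of that rule in Python. The parameter interpretation is an assumption on my part: here the fixed and floor percentages are applied to the initial portfolio, and the variable percentage to the current portfolio.

    def withdrawal(portfolio_value: float, initial_value: float,
                   fixed_pct: float, variable_pct: float,
                   floor_pct: float) -> float:
        """Fixed draw plus a variable draw on the current portfolio,
        floored at a minimum spending level (illustrative sketch)."""
        base = fixed_pct * initial_value + variable_pct * portfolio_value
        floor = floor_pct * initial_value  # assumed relative to initial value
        return max(base, floor)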

AI maturity self-assessment framework

AI maturity measures the degree to which an organization has mastered AI-related capabilities to achieve operational excellence. It is not defined by tech tools alone, but by how effectively the organization improves outcomes by orchestrating people, technology, data, and process. Organizations that successfully navigate AI transformation gain competitive advantage, operate more efficiently, and respond faster to market changes, ultimately creating more value.

Level 1 (Crawl): Initial unstructured exploration and planning.
Level 2 (Walk): Structured learning, training, planning, foundation-laying, and initial POCs and pilots.
Level 3 (Run): Operational deployment at scale, with robust infrastructure, governance, and security, and business value measured against KPIs.
Level 4 (Fly): An AI-native organization where AI is central and pervasive, deploying advanced use cases such as autonomous agents and fine-tuned models, with continuous improvement and adoption of emerging capabilities.

This self-assessment lets a firm measure how far it has progressed along the road to AI maturity, and inventories the steps firms typically take along the way.

NYC Subway Dashboard

Data pipeline experiment using dbt to aggregate turnstile-level MTA data into DuckDB, with Plotly Dash for visualization. This modern data stack experiment delivers sub-second query performance on multi-gigabyte datasets from a laptop. The analysis revealed issues with MTA data quality and quantified the pandemic's devastating ridership impact, which ultimately necessitated extreme revenue measures like congestion pricing.
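
The core query pattern looks something like the sketch below; the table and column names are illustrative, not the actual schema.

    import duckdb

    con = duckdb.connect("mta.duckdb")  # DuckDB runs in-process, no server
    daily = con.execute("""
        SELECT date, SUM(entries) AS total_entries
        FROM turnstile_counts           -- hypothetical table built by dbt
        GROUP BY date
        ORDER BY date
    """).df()                           # pandas DataFrame, ready for Plotly Dash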

Pizza Pizza Pizza

A ReactJS learning project to search for pizza (or coffee, or ice cream) in selected local NY areas, gather Google, Yelp, and Foursquare data, and return a single Bayesian ranking table and a Leaflet.js map of the results. Data no longer refreshed due to API deprecations and restrictions.
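
The ranking rests on a Bayesian average, which shrinks each venue's mean rating toward a prior so that a perfect 5.0 from three reviews doesn't outrank a 4.6 from five hundred. A sketch of the idea, with illustrative prior values:

    def bayesian_score(mean_rating: float, n_reviews: int,
                       prior_mean: float = 4.0, prior_weight: int = 50) -> float:
        """Weighted blend of the prior and the observed mean; the more
        reviews a venue has, the more its own rating dominates."""
        return ((prior_weight * prior_mean + n_reviews * mean_rating)
                / (prior_weight + n_reviews))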

FinTwit Graph

Visualizing the Fintwittersphere with a force-directed graph: Twitter accounts are modeled as nodes connected by weighted springs whose strength is derived from follower and topic similarity; simulating the physics and solving for a low-energy equilibrium clusters related users.
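
A toy version of the layout step, using networkx's spring_layout in place of the project's custom simulation; the accounts and weights here are made up.

    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([
        ("alice", "bob", 0.9),    # strong follower/topic overlap
        ("alice", "carol", 0.2),  # weak overlap
        ("bob", "carol", 0.7),
    ])
    # Heavier edges act as stiffer springs; the solver settles into a
    # low-energy equilibrium that places similar accounts close together.
    pos = nx.spring_layout(G, weight="weight", seed=42)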

Classification Thresholds

A primer on threshold selection for classification using isocurves. Metrics like F1 are popular, but in practice, minimizing the real-world costs of false positives and false negatives is usually more useful.
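
For example, a sketch of cost-based threshold selection (the cost figures are hypothetical):

    import numpy as np

    def best_threshold(y_true: np.ndarray, y_prob: np.ndarray,
                       cost_fp: float = 1.0, cost_fn: float = 5.0) -> float:
        """Sweep candidate thresholds and keep the one that minimizes
        total misclassification cost rather than maximizing F1."""
        thresholds = np.linspace(0.01, 0.99, 99)
        costs = []
        for t in thresholds:
            y_pred = (y_prob >= t).astype(int)
            fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
            fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
            costs.append(cost_fp * fp + cost_fn * fn)
        return float(thresholds[int(np.argmin(costs))])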