AVP, Applied Model Ops Developer
Job Description
Join Synchrony as an AVP, Applied Model Ops Developer and play a crucial role in designing and maintaining data infrastructure and pipelines. Key responsibilities include:
- Collaborate with model developers, validators, and risk stakeholders to understand their evolving data requirements for model development, monitoring, and governance.
- Partner with credit analytics, risk, fraud, marketing, and operations functions to identify, define, and prioritize use cases requiring model-ready data.
- Build scalable data architectures to support real-time and batch monitoring, including data ingestion, enrichment, and retention practices.
- Design and maintain automated end-to-end ML pipelines for data collection, preprocessing, feature engineering, and model training.
- Transform raw observations into variables (features) that machine learning models can consume, such as turning timestamps into cyclical time features.
- Turn theoretical data science prototypes into robust, high-performance software systems that can handle large volumes of real-time data.
- Build and maintain automated pipelines that handle not just code but also data validation, model training, and artifact management.
- Design, develop, and maintain robust pipelines to collect, transform, and store data used in model monitoring workflows (e.g., scoring data, performance metrics, outcomes).
- Provide thought and technical leadership in generating new signals from raw data by applying techniques such as normalization, scaling, and categorical encoding.
- Integrate data pipelines with model lifecycle platforms, MLOps tools, and observability solutions to ensure seamless model performance tracking.
- Partner with model risk and compliance teams to ensure data lineage, audit trails, and documentation are preserved and accessible for regulatory reviews (e.g., SR 11-7 compliance).
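To illustrate the cyclical time features mentioned above, here is a minimal sketch in Python (the function name and the hour-of-day granularity are illustrative assumptions, not part of the role description):

```python
import math

def cyclical_time_features(hour: int) -> tuple[float, float]:
    """Encode hour-of-day (0-23) as a point on the unit circle.

    Sin/cos encoding keeps adjacent hours close in feature space,
    so a model treats 23:00 and 00:00 as neighbours rather than as
    the numeric extremes 23 and 0.
    """
    angle = 2 * math.pi * (hour % 24) / 24
    return math.sin(angle), math.cos(angle)
```

With this encoding, hour 23 lands close to hour 0 in feature space, whereas the raw integers would place them 23 units apart.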
Qualifications
1. Bachelor’s degree in a quantitative, technical, or data-focused field (e.g., Statistics, Mathematics, Computer Science, Data Science, Engineering) with 6+ years’ experience, OR, in lieu of a degree, 8+ years of relevant work experience in monitoring, validation, or credit risk strategy.
2. 6+ years of professional experience in model operations, data engineering, or analytics infrastructure.
3. Strong proficiency with data engineering tools and frameworks (e.g., Apache Spark, Airflow, Kafka, dbt, PySpark).
4. Proficiency in programming languages such as SAS, Python, and SQL for building monitoring pipelines and validation checks.
5. Experience with cloud-based data infrastructure (e.g., AWS, Azure, GCP) and data warehousing (e.g., Snowflake, Redshift, BigQuery).
6. Familiarity with MLOps practices, model metadata tracking (e.g., MLflow), and monitoring toolkits (e.g., Evidently AI, WhyLabs, Prometheus).
7. Understanding of model risk governance requirements and the role of data engineering in ensuring compliant model monitoring.
8. Ability to work in an agile environment and deliver high-quality, production-grade code in collaboration with DevOps and platform engineering teams.
Benefits
- Best-in-class employee benefits and programs that cater to work-life integration and overall well-being
- Career advancement and upskilling opportunities
Apply Now
