Role Overview
We are seeking a Senior Platform & Machine Learning Engineer to architect and build a scalable recommendation and decision engine powered by real-time data pipelines and machine learning services. You will lead the development of a Single Customer View platform that integrates transactional APIs, streaming data infrastructure, and cloud-native machine learning capabilities.
The ideal candidate has strong expertise in cloud-native architectures, machine learning production systems, and distributed data engineering. A major responsibility is migrating legacy systems to serverless cloud environments, using Infrastructure-as-Code (IaC) and strong DevOps practices to ensure scalability, security, and operational efficiency.
This position requires experience building high-performance ML platforms, automated MLOps pipelines, and real-time decisioning systems in modern cloud environments.
Schedule: Full-time / flexible schedule within client business hours
Client Time Zone: Sydney, NSW (AEST/AEDT)
Independent Contractor Perks
- Permanent work from home
- Immediate hiring
- Health Insurance Coverage for eligible locations
Key Responsibilities
Machine Learning Platform & Architecture
- Productionize machine learning models in collaboration with Data Scientists, ensuring performance, scalability, and reliability.
- Design and maintain MLOps pipelines that automate model training, testing, validation, and deployment.
- Architect a Single Customer View platform that enables unified customer profiling and real-time decisioning.
- Build scalable cloud-based data warehouse architectures supporting analytics and machine learning workloads.
- Develop serverless functions and services using Python, Go, and SQL.
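The serverless services named above typically take the form of small HTTP applications (the shape Cloud Run hosts as containers). As a hedged, stdlib-only sketch of that shape (the handler, route, and response payload are illustrative, not part of this role's actual codebase):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal HTTP service of the kind a serverless runtime would host."""

    def do_GET(self):
        # Serve a small JSON payload; a real service would score or look up data here.
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging to keep the example quiet.
        pass

# Bind to an OS-assigned port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

resp = json.loads(urlopen(f"http://127.0.0.1:{port}/").read())
server.shutdown()
```

In a managed serverless environment, the platform handles scaling and TLS; the application only needs to answer HTTP on the configured port.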
Data Engineering & Real-Time Processing
- Design and implement high-performance streaming data pipelines for real-time analytics and decision engines.
- Develop data ingestion pipelines using API integrations with retry and fault-tolerant mechanisms.
- Build optimized data models supporting both transactional and analytical workloads.
- Develop and maintain data orchestration workflows using modern workflow management tools.
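The retry and fault-tolerance mechanisms mentioned for ingestion pipelines usually amount to exponential backoff with jitter around each upstream call. A minimal sketch of that pattern (function names and the simulated failure are illustrative):

```python
import random
import time

def fetch_with_retry(fetch, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Call `fetch()` with exponential backoff and jitter, re-raising after max_attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff with full jitter, capped at max_delay.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))

# Example: a flaky source that fails twice before succeeding.
calls = {"n": 0}

def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream error")
    return {"records": 42}

result = fetch_with_retry(flaky_fetch, base_delay=0.01)
```

Production pipelines would add per-exception classification (retry only transient errors) and dead-lettering for payloads that exhaust their retries.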
Infrastructure, DevOps & Reliability
- Implement and maintain Infrastructure-as-Code (IaC) using Terraform to provision and manage cloud infrastructure.
- Build and optimize CI/CD pipelines to enable reliable and automated deployments.
- Apply Site Reliability Engineering (SRE) practices to maintain system availability and performance.
- Implement monitoring, alerting, and incident response systems for distributed applications.
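The monitoring and alerting responsibility above commonly reduces to threshold checks over sliding windows. As a toy illustration (thresholds and window size are assumptions; in production this logic lives in a system like Cloud Monitoring or Prometheus rather than application code):

```python
from collections import deque

class ErrorRateAlert:
    """Fire an alert when the error rate over a sliding time window exceeds a threshold."""

    def __init__(self, window_seconds=60.0, threshold=0.05, min_events=10):
        self.window = window_seconds
        self.threshold = threshold
        self.min_events = min_events      # avoid alerting on tiny samples
        self.events = deque()             # (timestamp, is_error) pairs

    def record(self, is_error, now):
        self.events.append((now, is_error))
        # Evict events that have aged out of the window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def should_alert(self):
        if len(self.events) < self.min_events:
            return False
        errors = sum(1 for _, is_error in self.events if is_error)
        return errors / len(self.events) > self.threshold

# Simulate 20 requests, one second apart, with every fourth one failing (25% errors).
alert = ErrorRateAlert(window_seconds=60.0, threshold=0.05, min_events=10)
for i in range(20):
    alert.record(is_error=(i % 4 == 0), now=float(i))
```

A 25% error rate over 20 events comfortably exceeds the 5% threshold, so `should_alert()` returns True here.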
Security, Compliance & Cost Optimization
- Implement secure identity and authentication frameworks for distributed cloud workloads.
- Ensure infrastructure and data pipelines comply with data privacy and security standards, including GDPR.
- Drive cloud cost optimization initiatives, including serverless architecture adoption and infrastructure rightsizing.
- Maintain clear and comprehensive technical documentation, including architecture diagrams and data lineage.
Must-Have Requirements
Technical Skills
- Expert-level proficiency in Python and SQL
- Strong scripting experience using Shell
- Working knowledge of Go
Cloud & Data Platforms
- Strong experience working with Google Cloud Platform (GCP) services, including:
  - BigQuery
  - Vertex AI
  - Cloud Dataflow
  - Cloud Run
  - Google Kubernetes Engine (GKE)
  - Pub/Sub
Data Engineering
- Experience building large-scale data pipelines and streaming architectures
- Experience developing data warehouses and analytical data models
- Experience building real-time data processing systems
Infrastructure & DevOps
- Strong experience implementing Infrastructure-as-Code using Terraform
- Experience managing CI/CD pipelines using GitHub Actions or GitLab CI/CD
- Experience with Cloud Build or Jenkins
Databases
- Hands-on experience with PostgreSQL-based databases
- Experience with NoSQL databases
Data & Analytics Tools
- Experience working with Airflow (DAG development)
- Experience using Looker / LookML
- Experience developing and managing API integrations
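The Airflow DAG development listed above is, at bottom, declaring task dependency graphs that execute in topological order. A stdlib-only sketch of that ordering (the task names are an illustrative ELT flow, not Airflow's API):

```python
from graphlib import TopologicalSorter

# Illustrative ELT task graph, mapping each task to the tasks it depends on:
# extract -> load -> transform, with a data-quality check gating publication.
dag = {
    "extract": set(),
    "load": {"extract"},
    "transform": {"load"},
    "quality_check": {"transform"},
    "publish": {"transform", "quality_check"},
}

# static_order() yields every task after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and backfills on top of this core idea, but the dependency-ordering semantics are the same.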
Nice-to-Have Skills
- Experience designing recommendation systems or ML-driven decision engines
- Experience building real-time personalization platforms
- Experience with distributed systems and event-driven architectures
- Experience implementing serverless architectures
Preferred Certifications
- GCP Professional Cloud Architect
- GCP Professional Data Engineer
- AWS Certified Solutions Architect
- AWS Certified Data Engineer
Core Competencies
- Distributed systems architecture
- Machine learning platform engineering
- Real-time data processing
- Cloud infrastructure automation
- DevOps and CI/CD automation
- Security and compliance in cloud environments
- Cross-functional collaboration with Data Science teams
Side Note
This is a permanent work-from-home role under an Independent Contractor arrangement. Candidates must have their own computer and reliable internet connection, and are responsible for their own taxes and benefits. Professional hourly fees are established based on your performance in the application process.
Reminder
Please follow the provided link to BruntWork’s Career Site to finish your initial application requirements, including the assessment questions, technical check, and voice recording. Submissions with all requirements fulfilled will receive priority review.
Job ID: 49879044157