What You Will Do
Trility Consulting is seeking a Senior DevSecOps Consultant who thrives at the intersection of cloud infrastructure, security, and data platforms. In this role, you will support the ongoing evolution and stability of a modern data lake environment, ensuring the underlying infrastructure is scalable, secure, and built for long-term sustainability.
You’ll partner closely with platform, data, and engineering teams to support and optimize cloud-native infrastructure across AWS, leveraging Infrastructure as Code and Kubernetes-based patterns to enable reliable data processing and analytics. This is a hands-on role where you’ll help ensure the data platform runs smoothly behind the scenes, so data teams can focus on delivering insights.
This is a remote 1099 or W2 position.
Key Responsibilities
- Support and optimize cloud infrastructure for a data lake environment within AWS
- Develop and maintain Infrastructure as Code using Terraform to ensure scalable, repeatable deployments
- Manage and support Kubernetes-based workloads, including deployment and configuration using Helm
- Collaborate with data and platform teams to ensure infrastructure supports data ingestion, processing, and reporting needs
- Write and maintain Python scripts to support automation, integration, and operational tasks
- Monitor and troubleshoot infrastructure and platform issues across cloud and containerized environments
- Implement and maintain security best practices across cloud resources, Kubernetes, and data platform components
- Contribute to documentation, runbooks, and operational standards to support long-term platform sustainability
- Partner with cross-functional teams to support ongoing enhancements and stabilization of the data platform
Qualifications
- 5+ years of experience in DevOps, DevSecOps, or Platform Engineering roles
- Strong hands-on experience with AWS cloud services in production environments
- Proven experience building and managing infrastructure using Terraform
- Hands-on experience with Kubernetes, including deploying and managing containerized workloads
- Experience using Helm for Kubernetes application deployment and configuration
- Proficiency in Python for scripting, automation, and operational tooling
- Strong troubleshooting skills across cloud infrastructure, containers, and distributed systems
- Experience working in collaborative, cross-functional engineering environments
- Excellent written and verbal communication skills
Nice to Have
- Experience with Starburst (Trino) or distributed query engines
- Familiarity with Apache Airflow for workflow orchestration
- Experience with Apache Superset or similar data visualization tools
- Experience with dbt (data build tool) or modern data transformation frameworks
- Exposure to data lake architectures and analytics platforms