
AWS Cloud Operations Engineer

Pharmacy Data Management, Inc. (PDMI)
Full-time
Remote
United States

PDMI is looking to add an AWS Cloud Operations Engineer to our team! In this role, you will deploy, manage, and optimize cloud infrastructure and applications in AWS environments, ensuring seamless, secure, and efficient deployment of infrastructure and application code. You will work closely with internal IT and development teams to maintain a highly available and scalable cloud ecosystem.

Since 1984, PDMI has provided pharmacy data processing and other flexible, scalable solutions to help our clients meet their business objectives. We offer transparent, pass-through pharmacy processing and other services for private-label Pharmacy Benefit Managers (PBMs), vertically integrated health plans, and hospital systems. In addition to Pharmacy Benefit Administrative Services, we offer 340B Administration, Hospice and Long-Term Care Services.

Why Join Us:

  • Best Employer: PDMI was voted Best Employer in Ohio for the 5th consecutive year in 2025!
  • Meaningful Work: Contribute to improving healthcare quality and efficiency.
  • Collaborative Environment: Work with passionate professionals who share your drive.
  • Exciting Challenges: Every day brings new opportunities to excel.
  • Flexible Work: Fully remote opportunity with a company that cares.

Responsibilities:

  • Own and manage all production AWS environments across a multi-account Control Tower architecture (Shared, Prod, Dev, and specialized workloads)
  • Build, maintain, and troubleshoot CI/CD pipelines using GitHub Actions, AWS CodePipeline, CodeBuild, and CodeDeploy
  • Deploy and operate containerized applications using Amazon EKS (Kubernetes) and Amazon ECS/Fargate
  • Support microservices architectures, service meshes, sidecar patterns, and container security best practices
  • Automate infrastructure provisioning using Terraform, CloudFormation, and AWS-native tools (SSM, Lambda, Step Functions)
  • Manage and optimize core AWS services, including:
    • Compute: EC2, ASGs, Launch Templates
    • Networking: VPC, Transit Gateway (TGW), Security Groups, Route 53, VPN/Direct Connect, NAT Gateways, ALB/NLB
    • Storage: S3, EBS, EFS
    • Databases: RDS (MySQL, SQL Server), Aurora, DynamoDB
    • Monitoring & logging: CloudWatch, AWS Config, GuardDuty, Inspector; Datadog for monitoring, alerting, dashboards, and APM
  • Implement and enforce security, governance, and best practices across IAM, KMS, SSM Parameter Store, and Secrets Manager
  • Support and troubleshoot production workloads, including performance bottlenecks, networking issues, and container failures
  • Work closely with the development team while maintaining ownership of production infrastructure, deployments, and reliability
  • Participate in on-call rotation for production-critical services
  • Work with Kafka/Confluent Cloud, CI platform integrations, and microservices observability tooling

What We're Looking For:

  • 3+ years of hands-on AWS engineering or DevOps experience (production-grade)
  • Strong experience building and operating CI/CD pipelines using:
    • GitHub Actions (required)
    • AWS CodePipeline / CodeBuild / CodeDeploy
  • Strong experience with EKS (Kubernetes fundamentals, deployments, autoscaling, node groups/Fargate, networking)
  • Experience with ECS/Fargate, container registries, image lifecycle management, and security
  • Hands-on experience with:
    • Terraform (preferred) or CloudFormation
    • EC2, VPC, routing, load balancing, security groups
    • S3, IAM, KMS, Secrets Manager
    • Lambda, Step Functions, EventBridge
    • RDS/Aurora or DynamoDB
  • Understanding of multi-account AWS environments, SSO/Identity, and production governance
  • Ability to diagnose complex AWS issues (networking, IAM, CI/CD, K8s failures)
  • Scripting ability in Python, Bash, or PowerShell (good to have)
  • Strong communication skills and the ability to work closely with developers while owning production reliability
  • Experience with Datadog (Logging, APM, RUM, synthetic tests)
  • Experience with Confluent Kafka or event-streaming architectures
  • Familiarity with modern IaC patterns (modules, GitOps, reusable pipelines)
  • Familiarity with GitHub administration, branch protection, runners, or organization-level DevOps processes
  • Experience in healthcare, compliance-heavy, or regulated environments
