
Senior Data Platform Engineer (Event Sourcing & Event-Driven Systems)


ABOUT US

Camascope is a rapidly growing technology company dedicated to empowering the healthcare and medication sectors. Our talented, caring, and ambitious team is driven by a mission to make a real difference in the care industry. Our products connect pharmacies, care homes, and doctors, improving lives every day.

As we expand, now is a great time to join us. If you are passionate about healthcare and excited by a fast-paced but mature startup environment, Camascope is the perfect place for you.

  • Location: Miami, FL (Hybrid/Remote Considered)
  • Department: Platform Engineering
  • Reports to: Director of Platform Engineering
  • Focus Areas: Event-Sourced Data Processing, Distributed Data Systems, Fault Tolerance, Analytics Pipelines 

WHAT YOU WILL WORK ON

We are looking for a Senior Data Platform Engineer who will specialize in event-driven data pipelines, distributed databases, and real-time analytics, ensuring fault tolerance, scalability, and compliance across multiple regions. 

As a Senior Data Platform Engineer, you will architect and build event-sourced and real-time data systems that power analytics, reporting, and operational intelligence. You will work on data consistency, event replay, distributed data processing, and real-time data observability across the platform. 

This is a high-impact role in which you will collaborate with the Platform, Shared Services, and Product Development engineering teams across our US, UK, and India offices to ensure our architecture is reliable, fault-tolerant, and compliant with regulatory requirements.

RESPONSIBILITIES

Data Architecture & Event Processing:

  • Design and build event-driven data pipelines using Kinesis or Kafka for real-time and batch processing.
  • Architect event-sourced data stores that enable event replay, CQRS, and materialized views (see the sketch after this list).
  • Develop distributed data processing frameworks to handle large-scale event streams and transformations.
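
By way of illustration, here is a minimal event-sourcing sketch in Python (standard library only); the Event shape, DispenseCountView, and stream names are hypothetical, and a production system would back the log with Kafka/Kinesis and a durable store rather than an in-memory list:

    from collections import defaultdict
    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class Event:
        stream_id: str   # e.g. a pharmacy or care-home aggregate
        kind: str        # e.g. "MedicationDispensed"
        payload: dict

    class EventStore:
        """Append-only log; read models are derived by replay (CQRS)."""
        def __init__(self) -> None:
            self._log: list[Event] = []

        def append(self, event: Event) -> None:
            self._log.append(event)

        def replay(self, *projections: Callable[[Event], None]) -> None:
            # Rebuilding views from the log is what enables event replay.
            for event in self._log:
                for project in projections:
                    project(event)

    class DispenseCountView:
        """Materialized view: dispense counts per stream, rebuilt on demand."""
        def __init__(self) -> None:
            self.counts: dict[str, int] = defaultdict(int)

        def __call__(self, event: Event) -> None:
            if event.kind == "MedicationDispensed":
                self.counts[event.stream_id] += 1

    store = EventStore()
    store.append(Event("pharmacy-1", "MedicationDispensed", {"drug": "a"}))
    store.append(Event("pharmacy-1", "MedicationDispensed", {"drug": "b"}))
    view = DispenseCountView()
    store.replay(view)
    print(dict(view.counts))  # {'pharmacy-1': 2}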

Fault Tolerance & Data Consistency:

  • Implement fault-tolerant, resilient event-driven data architectures with retry mechanisms, dead letter queues (DLQs), and circuit breakers (a simplified sketch follows this list).
  • Ensure event deduplication, ordering, and transactional consistency across services.
  • Design self-healing and auto-recoverable data pipelines to handle failures seamlessly.
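
As a simplified sketch of the retry/DLQ pattern above (the event shape and in-memory queue are stand-ins for a real broker, and a production version would persist the deduplication set):

    import time

    class DeadLetterQueue:
        """Stand-in for a real DLQ topic or queue."""
        def __init__(self) -> None:
            self.items: list[tuple[dict, str]] = []

        def publish(self, event: dict, error: Exception) -> None:
            self.items.append((event, repr(error)))

    def process_with_retries(event, handler, dlq, seen_ids,
                             max_attempts=3, base_delay=0.1):
        """At-least-once handling: dedupe, retry with backoff, then DLQ."""
        if event["id"] in seen_ids:          # idempotency guard against redelivery
            return
        for attempt in range(1, max_attempts + 1):
            try:
                handler(event)
                seen_ids.add(event["id"])
                return
            except Exception as exc:
                if attempt == max_attempts:
                    dlq.publish(event, exc)  # park poison messages for inspection
                    return
                time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

    calls = {"n": 0}
    def flaky(event):                        # fails once, then succeeds
        calls["n"] += 1
        if calls["n"] < 2:
            raise RuntimeError("transient failure")

    dlq, seen = DeadLetterQueue(), set()
    process_with_retries({"id": "evt-1"}, flaky, dlq, seen)
    assert "evt-1" in seen and not dlq.items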

Scalability & Performance Optimization: 

  • Optimize real-time and batch analytics for high throughput and low latency.
  • Design auto-scaling data services that adapt dynamically to fluctuating workloads.
  • Implement data partitioning, indexing, and caching to improve performance (see the sketch below).
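
A small sketch of key-hash partitioning, one of the techniques above (the keys and partition count are illustrative); keying on an aggregate identifier preserves per-key ordering while spreading load across consumers:

    import hashlib

    def partition_for(key: str, num_partitions: int) -> int:
        """Stable hash partitioning: every event for a key lands on one partition."""
        digest = hashlib.md5(key.encode("utf-8")).digest()
        return int.from_bytes(digest[:4], "big") % num_partitions

    events = [("care-home-7", "admit"), ("pharmacy-1", "dispense"),
              ("care-home-7", "discharge")]
    for key, kind in events:
        print(key, kind, "-> partition", partition_for(key, 8))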

Observability & Compliance: 

  • Build real-time monitoring, logging, and alerting for event-driven data pipelines (illustrated after this list).
  • Ensure compliance with GDPR, HIPAA, and regional healthcare data regulations.
  • Implement role-based access control (RBAC) and encryption for sensitive data. 
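
For the monitoring piece, one possible shape using the open-source prometheus_client package (the metric names and handler are hypothetical):

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    EVENTS = Counter("pipeline_events_total", "Events processed", ["status"])
    LATENCY = Histogram("pipeline_event_seconds", "Per-event processing latency")

    def handle(event: dict) -> None:
        with LATENCY.time():                   # records one latency observation
            time.sleep(random.random() / 100)  # placeholder for real work
        EVENTS.labels(status="ok").inc()       # alerts can key off error-rate metrics

    if __name__ == "__main__":
        start_http_server(8000)                # exposes /metrics for Prometheus scraping
        while True:
            handle({})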

Developer Autonomy & Productivity: 

  • Create self-service data APIs, SDKs, and developer portals for seamless data access.
  • Abstract complex data infrastructure, enabling faster development and integration.
  • Enable schema versioning and data lineage tracking to maintain data integrity (sketched below).
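
As a sketch of schema versioning via upcasting (the v1-to-v2 field change is invented for illustration), old events are walked forward one version at a time so consumers only ever see the current shape:

    CURRENT_VERSION = 2

    def upcast_v1_to_v2(event: dict) -> dict:
        # Hypothetical change: v2 split "name" into "given" and "family".
        given, _, family = event.pop("name").partition(" ")
        return {**event, "given": given, "family": family, "schema_version": 2}

    UPCASTERS = {1: upcast_v1_to_v2}

    def upcast(event: dict) -> dict:
        """Migrate an old event forward one version at a time."""
        while event["schema_version"] < CURRENT_VERSION:
            event = UPCASTERS[event["schema_version"]](event)
        return event

    print(upcast({"schema_version": 1, "name": "Ada Lovelace"}))
    # {'schema_version': 2, 'given': 'Ada', 'family': 'Lovelace'}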

WHAT WE'RE LOOKING FOR

Requirements 

  • 5+ years of experience in software engineering or data engineering with a focus on event-driven architectures and distributed data systems.
  • Strong expertise in Kinesis, Kafka, or similar streaming platforms.
  • Experience with event sourcing, CQRS, and materialized views for efficient query performance.
  • Deep understanding of fault-tolerant event processing, retries, and dead letter queues (DLQs).
  • Proficiency in data modeling for event-driven architectures.
  • Hands-on experience with AWS-native data solutions and distributed databases (Aurora/PostgreSQL, DynamoDB, Amazon S3).
  • Strong programming skills in Python with experience in data processing frameworks.
  • Familiarity with observability tools like Datadog, Prometheus, or OpenTelemetry. 

BONUS POINTS FOR

  • Experience with Flink or Spark for real-time stream processing.
  • Knowledge of multi-tenant architectures and regional regulatory compliance.
  • Experience working in MedTech, HealthTech, or regulated industries.
  • Familiarity with GraphQL, gRPC, or async API patterns for data access.
  • Experience with DataOps, schema evolution, and data governance tools.