Senior Scala Big Data Engineer - Remote AWS Spark Developer Position

Remote
Full-time

Are you an experienced Scala developer with expertise in big data technologies and cloud infrastructure? Our fintech organization, specializing in online trading and investment platforms, is seeking a Senior Scala Big Data Engineer to design and implement sophisticated data pipelines that power our financial products. This remote position offers you the opportunity to work with cutting-edge technologies in a dynamic, fast-paced environment.


About The Role

As a Senior Scala Big Data Engineer, you'll be instrumental in architecting and developing robust data processing systems that support our trading and investment platforms. You'll work with massive datasets, optimize performance, and ensure reliability across our distributed systems infrastructure. This position requires deep technical knowledge combined with the ability to understand financial domain challenges.


Key Responsibilities

- Design and implement efficient, scalable data pipelines using Scala and big data technologies.

- Develop and maintain ETL workflows using Apache Spark, Hadoop, Hive, and Airflow.

- Architect solutions for processing high-volume financial data streams with Kafka.

- Optimize data storage and retrieval across various systems including HDFS, S3, Cassandra, and RDBMS.

- Collaborate with data science teams to productionize and optimize their Python models and algorithms.

- Build and maintain infrastructure on AWS, leveraging services such as S3, Athena, EMR, and EKS.

- Implement comprehensive automated testing for Spark applications to ensure data integrity.

- Participate in code reviews and technical discussions to improve system architecture.

- Monitor performance metrics and implement optimizations for existing data pipelines.

- Document technical solutions and share knowledge with team members.
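To give a flavor of the pipeline work above: much of a Spark ETL job reduces to pure transformations over records, which a driver (e.g. `spark.read.textFile(...).flatMap(parseTrade)`) can then distribute. A minimal sketch in plain Scala, with all names and the record format purely illustrative:

```scala
// Hypothetical core of a trade-ingestion pipeline. The case class, CSV
// layout, and function names are illustrative assumptions, not part of
// the role description.
final case class Trade(symbol: String, quantity: Long, price: BigDecimal)

object TradePipeline {
  // Parse a CSV line like "AAPL,100,189.25"; None on malformed input,
  // so bad records can be filtered out instead of failing the job.
  def parseTrade(line: String): Option[Trade] =
    line.split(",", -1) match {
      case Array(sym, qty, px) =>
        for {
          q <- qty.trim.toLongOption
          p <- scala.util.Try(BigDecimal(px.trim)).toOption
        } yield Trade(sym.trim, q, p)
      case _ => None
    }

  // Aggregate notional value (quantity * price) per symbol.
  def notionalBySymbol(trades: Seq[Trade]): Map[String, BigDecimal] =
    trades.groupMapReduce(_.symbol)(t => t.price * t.quantity)(_ + _)
}
```

Keeping parsing and aggregation as pure functions like this makes the same logic reusable from Spark, Kafka consumers, or plain unit tests.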


Required Skills & Experience

- 4+ years of professional experience with Scala development in production environments.

- Strong proficiency with Apache Spark, Hadoop ecosystem, and distributed computing principles.

- Hands-on experience with Hive, Airflow, and workflow orchestration tools.

- Demonstrated expertise with Kafka and message-based architectures.

- Advanced knowledge of data storage solutions including HDFS, AWS S3, and Cassandra.

- Solid understanding of relational database concepts and proficiency in SQL.

- Experience designing and implementing data pipelines for large-scale systems.

- Proven experience with AWS services (S3, Athena, EMR, EKS, etc.).

- Ability to read and understand Python code, particularly for data science applications.

- Experience writing and maintaining automated tests for Spark applications.

- Excellent spoken and written English communication skills.
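The automated-testing requirement above usually means structuring Spark jobs so their transformation logic can be unit-tested without a cluster. A small hedged sketch (the metric and names are hypothetical, chosen only to illustrate the pattern):

```scala
// Illustrative example: a volume-weighted average price (VWAP) helper
// kept as a pure function so it can be asserted on directly in tests,
// then applied inside a Spark job. Names are assumptions.
object Vwap {
  // VWAP over (price, quantity) fills; None when there is no volume.
  def vwap(fills: Seq[(BigDecimal, Long)]): Option[BigDecimal] = {
    val totalQty = fills.map(_._2).sum
    if (totalQty == 0L) None
    else Some(fills.map { case (p, q) => p * q }.sum / totalQty)
  }
}
```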


Nice to Have

- Experience in the fintech, trading, or investment industry.

- Knowledge of real-time data processing and low-latency systems.

- Familiarity with containerization technologies (Docker, Kubernetes).

- Experience with CI/CD pipelines and DevOps practices.

- Understanding of data governance and compliance requirements in financial services.

- Contributions to open-source projects or Scala community.

- Certifications in AWS or big data technologies.


Why Join Our Team

Working with us means being at the forefront of financial technology innovation. You'll help build systems that process millions of transactions daily and directly impact our clients' investment decisions. We offer competitive compensation, flexible remote work arrangements, professional development opportunities, and the chance to work with a diverse team of talented engineers. Join us in creating next-generation trading and investment platforms that are reshaping the financial landscape.