
senior data software engineer

Data Software Engineering, Amazon Web Services, Apache Airflow, Apache Spark, CI/CD, Python, SQL

We are looking for a talented Senior Data Software Engineer to join our remote team and support our client's Data Science teams in building datamarts and fulfilling ad-hoc requests as required.

As a Senior Data Software Engineer, your role involves collaborating with a group of skilled professionals to create and sustain data pipelines, ETL processes, and REST APIs. Your responsibilities extend to ensuring the scalability, effectiveness, and dependability of our data solutions. Furthermore, you'll be tasked with providing on-call support to guarantee the seamless functioning of these solutions.

  • Collaborate with Data Science teams to construct datamarts and fulfill ad-hoc requests as necessary
  • Develop and manage data pipelines, ETL processes, and REST APIs to facilitate efficient data processing and delivery
  • Verify the scalability, efficiency, and reliability of our data solutions
  • Provide on-call support to uphold the smooth operation of our data solutions
  • Work alongside cross-functional teams to deliver top-notch data solutions in accordance with project objectives and timelines
  • Regularly assess industry trends and optimal practices, refining and implementing cutting-edge data engineering strategies
  • Offer guidance and mentorship to junior team members, nurturing a culture of continuous learning and growth within the team
  • Engage directly with clients to understand their requirements and deliver efficient, well-suited solutions
  • Collaborate with stakeholders, showcasing outstanding communication and leadership skills
requirements
  • A minimum of 3 years of hands-on experience in Data Software Engineering, contributing to large-scale data projects and intricate data infrastructures
  • Demonstrated expertise in constructing and sustaining data pipelines, ETL processes, and REST APIs
  • Proficiency in Amazon Web Services, specifically focusing on data-related services like Redshift, S3, and Glue
  • Substantial familiarity with Apache Airflow and Apache Spark, utilizing them for data processing and pipeline automation
  • Strong command of Python and SQL for data processing
  • Experience with Databricks and PySpark for efficient pipeline automation
  • Familiarity with CI/CD tools to ensure the streamlined delivery of data solutions
  • Robust analytical skills, enabling effective troubleshooting and decision-making in intricate data environments
  • Ability to convey technical concepts clearly to a non-technical audience
  • English proficiency at Upper-Intermediate level or higher, enabling effective written and verbal collaboration in team meetings and discussions with stakeholders
nice to have
  • Experience with Redshift for data warehousing and management

benefits

For you
  • Insurance Coverage 
  • Paid Leaves – including maternity, bereavement, paternity, and special COVID-19 leaves
  • Financial assistance for medical crises
  • Retiral Benefits – VPF and NPS 
  • Customized Mindfulness and Wellness programs 
  • EPAM Hobby Clubs
For your comfortable work
  • Hybrid Work Model 
  • Soft loans to set up workspace at home 
  • Stable workload 
  • Relocation opportunities with ‘EPAM without Borders’ program

For your growth
  • Certification trainings for technical and soft skills 
  • Access to unlimited LinkedIn Learning platform 
  • Access to internal learning programs set up by world class trainers 
  • Community networking and idea creation platforms 
  • Mentorship programs 
  • Self-driven career progression tool
