senior data software engineer
We are looking for a Senior Data Software Engineer to join our remote team and support our client's Data Science teams in building datamarts and fulfilling ad-hoc requests.
As a Senior Data Software Engineer, you will work with a team of skilled professionals to build and maintain data pipelines, ETL processes, and REST APIs. You will be responsible for the scalability, efficiency, and reliability of our data solutions, and you will provide on-call support to keep them running smoothly.
responsibilities
- Collaborate with Data Science teams to construct datamarts and fulfill ad-hoc requests as necessary
- Develop and manage data pipelines, ETL processes, and REST APIs to facilitate efficient data processing and delivery
- Ensure the scalability, efficiency, and reliability of our data solutions
- Provide on-call support to maintain the smooth operation of our data solutions
- Work alongside cross-functional teams to deliver top-notch data solutions in accordance with project objectives and timelines
- Regularly assess industry trends and best practices, refining and adopting modern data engineering approaches
- Offer guidance and mentorship to junior team members, nurturing a culture of continuous learning and growth within the team
- Engage directly with clients to understand their requirements and deliver well-suited, efficient solutions
- Collaborate with stakeholders, demonstrating strong communication and leadership skills
requirements
- A minimum of 3 years of hands-on experience in Data Software Engineering, contributing to large-scale data projects and complex data infrastructures
- Demonstrated expertise in constructing and sustaining data pipelines, ETL processes, and REST APIs
- Proficiency in Amazon Web Services, specifically focusing on data-related services like Redshift, S3, and Glue
- Substantial familiarity with Apache Airflow and Apache Spark, utilizing them for data processing and pipeline automation
- Adeptness in Python and SQL for the purpose of data processing
- Experience with Databricks and PySpark for efficient pipeline automation
- Familiarity with CI/CD tools to ensure the streamlined delivery of data solutions
- Strong analytical skills, enabling effective troubleshooting and decision-making in complex data environments
- Ability to convey technical concepts clearly to a non-technical audience
- English proficiency at an Upper-Intermediate level or higher, enabling effective written and verbal collaboration in team meetings and discussions with stakeholders
nice to have
- Experience with Redshift for data warehousing and management
benefits by location
Insurance coverage
Paid leave, including maternity, bereavement, paternity, and special COVID-19 leave
Financial assistance for medical emergencies
Retirement benefits: VPF and NPS
Custom mindfulness and wellness programs
EPAM hobby clubs
Hybrid work model
Soft loans to set up a home workspace
Stable workload
Relocation opportunities through the 'EPAM without Borders' program
Certification training in technical and soft skills
Unlimited access to the LinkedIn Learning platform
Access to internal learning programs run by world-class trainers
Community networking and idea-sharing platforms
Mentorship programs
Self-driven career progression tool
Send us your CV to receive a personalized offer