
senior data software engineer

Data Software Engineering, Databricks, Python, PySpark, Azure Data Factory

Join our remote team as a Senior Data Software Engineer at a global leader in data management and analytics solutions. We are looking for a highly skilled, experienced engineer to develop and implement data solutions and to ensure the quality, performance, and scalability of our data systems. The role offers the opportunity to work with cutting-edge technologies and to contribute to the continued growth of our company and its clients.

responsibilities
  • Designing, developing, and implementing data solutions in collaboration with stakeholders
  • Ensuring the quality, performance, and scalability of data systems through regular maintenance and optimization
  • Identifying and resolving issues related to data quality, data processing, and data integration
  • Providing support for data-related issues and incidents
  • Collaborating with cross-functional teams to deliver customer-centric solutions
  • Defining and implementing data management best practices and standards
  • Creating and maintaining technical documentation for data systems and solutions
  • Ensuring compliance with data security and privacy regulations
  • Leading and mentoring less experienced team members to enhance their skills and grow their careers
  • Participating in code reviews and ensuring adherence to coding standards
  • Staying up-to-date with emerging data technologies and trends
requirements
  • 3+ years of hands-on experience in Data Software Engineering
  • Expertise in PySpark for data processing and analysis
  • Experience working with Azure Data Factory for building and managing data pipelines
  • Advanced SQL knowledge for designing and managing database schemas, including procedures, triggers, and views
  • Proficiency in Python for data manipulation and scripting
  • Experience with cloud-based data management tools such as HDInsight, Azure Data Lake, and Data API
  • Knowledge of Spark, Scala, and Kafka for distributed computing and real-time data processing
  • Expertise in Databricks for data engineering and analytics
  • Experience making EDL changes in database views and stored procedures
  • Experience in data analysis and troubleshooting
  • Ability to provide integration testing support
  • Ability to plan and implement new requirements and data entities on the EDL
  • Excellent communication skills in spoken and written English, at an upper-intermediate level or higher

nice to have
  • Experience with Hadoop, Hive, and MapReduce
  • Familiarity with AWS, GCP, or other cloud platforms

benefits

For you
  • Insurance Coverage
  • Paid Leaves – including maternity, bereavement, paternity, and special COVID-19 leaves
  • Financial assistance for medical crisis
  • Retiral Benefits – VPF and NPS
  • Customized Mindfulness and Wellness programs
  • EPAM Hobby Clubs
For your comfortable work
  • Hybrid Work Model 
  • Soft loans to set up workspace at home 
  • Stable workload 
  • Relocation opportunities with ‘EPAM without Borders’ program

For your growth
  • Certification trainings for technical and soft skills 
  • Access to unlimited LinkedIn Learning platform 
  • Access to internal learning programs set up by world class trainers 
  • Community networking and idea creation platforms 
  • Mentorship programs 
  • Self-driven career progression tool
