
Senior Data Software Engineer

Data Software Engineering, Databricks, Python, PySpark, Microsoft Azure, SQL Azure

Join our remote team as a Senior Data Software Engineer at a global leader in solutions for complex data needs. We are looking for someone who can work closely with architects, technical leads, and other key individuals across our functional groups to design, develop, and implement reusable Databricks components for data ingestion and analytics. As a Senior Data Software Engineer, you will ensure that data is ingested into the data lake via batch, streaming, or replication, establish security controls, and ensure integration with data governance. You will also build collaborative partnerships with stakeholders so that data is available for reporting and predictive modeling.
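
To give a concrete flavor of the ingestion work described above, here is a minimal PySpark sketch of batch and streaming ingestion into a Delta Lake table on Databricks. The storage paths, table name, and schema locations are illustrative assumptions, not details from this posting.

```python
# Minimal sketch: batch and streaming ingestion into Delta Lake on Databricks.
# All paths, container names, and table names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided as `spark` in Databricks notebooks

RAW_PATH = "abfss://raw@examplestorage.dfs.core.windows.net/orders/"  # hypothetical ADLS Gen2 path
BRONZE_TABLE = "bronze.orders"                                        # hypothetical target table

# Batch ingestion: read raw CSV files and append them to a Delta table.
(spark.read
    .option("header", "true")
    .csv(RAW_PATH)
    .withColumn("ingested_at", F.current_timestamp())
    .write
    .format("delta")
    .mode("append")
    .saveAsTable(BRONZE_TABLE))

# Streaming ingestion with Auto Loader (the Databricks-only `cloudFiles` source):
# incrementally picks up new files as they land in the raw zone.
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/orders")   # schema inference/evolution state
    .load(RAW_PATH)
    .withColumn("ingested_at", F.current_timestamp())
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/orders")      # exactly-once progress tracking
    .trigger(availableNow=True)                                   # drain available files, then stop
    .toTable(BRONZE_TABLE))
```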

responsibilities
  • Design and develop reusable Databricks components for data ingestion and analytics
  • Collaborate with architects, technical leads, and other key individuals within our functional groups to deliver customer-centric solutions
  • Establish security controls and ensure integration with data governance to achieve clear, auditable data lineage
  • Participate in code reviews and test solutions to ensure they meet best-practice specifications
  • Write project documentation for all phases of the software development lifecycle
  • Create and maintain technical documentation for data ingestion pipelines and data warehouse or database architecture
  • Work with stakeholders to ensure data availability for reporting and predictive modeling
  • Ensure continuous improvement by staying abreast of industry trends and emerging technologies
  • Drive the implementation of solutions aligned with business objectives
  • Mentor and guide less experienced team members, helping them enhance their skills and grow their careers
  • Collaborate with cross-functional teams to achieve project goals

requirements
  • 3+ years of experience building data ingestion pipelines and designing data warehouse or database architecture
  • Expertise in Python and PySpark for data processing and analysis
  • Experience with Databricks for building scalable, high-performance applications
  • Hands-on experience with SQL Azure for designing and managing database schemas, including procedures, triggers, and views (see the sketch after this list)
  • Experience with Microsoft Azure for designing, deploying, and administering scalable, highly available, and fault-tolerant systems
  • Familiarity with data modeling and modern Big Data components
  • Experience with ADLS, Power BI, and Azure Synapse Analytics for cloud-based infrastructure and application management
  • Familiarity with compliance requirements such as PII, GDPR, and HIPAA
  • Excellent communication skills in spoken and written English, at an upper-intermediate level or higher
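
As a hedged illustration of the Databricks and SQL Azure items above, the sketch below reads an Azure SQL table into a Spark DataFrame over JDBC so it can feed reporting or predictive modeling. The server, database, table, and credential names are hypothetical, and the Microsoft SQL Server JDBC driver is assumed to be available on the cluster (it ships with Databricks runtimes).

```python
# Illustrative sketch: pulling a SQL Azure (Azure SQL Database) table into Spark
# on Databricks via JDBC. Server, database, table, and credentials are hypothetical.
import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

jdbc_url = (
    "jdbc:sqlserver://example-server.database.windows.net:1433;"
    "database=exampledb;encrypt=true;loginTimeout=30"
)

customers = (spark.read
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.customers")              # hypothetical source table
    .option("user", os.environ["SQL_USER"])          # credentials via environment (assumption);
    .option("password", os.environ["SQL_PASSWORD"])  # a secret manager is preferable in practice
    .load())

customers.createOrReplaceTempView("customers")  # expose to SQL for reporting queries
spark.sql("SELECT COUNT(*) AS n FROM customers").show()
```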

nice to have
  • Experience with other cloud platforms such as AWS and GCP

benefits for locations

Poland
For you
  • Discounts on health insurance, sports clubs, shopping centers, cinema tickets, etc.
  • Stable income
  • Flexible roles
For your comfortable work
  • 100% remote work forever
  • EPAM hardware
  • EPAM software licenses
  • Access to offices and co-working spaces
  • Stable workload
  • Relocation opportunities
  • Flexible engagement models
For your growth
  • Free training for technical and soft skills
  • Free access to the LinkedIn Learning platform
  • Language courses
  • Free access to internal and external e-Libraries
  • Certification opportunities
  • Skill advisory service