Join our remote team as a Senior Data Software Engineer.
We are actively seeking a hands-on and deeply technical engineer to collaborate closely with development peers, product leadership, and other technical staff to create innovative and impactful solutions. This role offers an opportunity to contribute significantly to the design, development, and optimization of features in a dynamic Agile development environment, with a focus on Databricks workflows, APIs, and Data Engineering.
responsibilities
Design and develop new features using the Agile development process (Scrum)
Prioritize and ensure high-quality standards at every stage of development
Guarantee reliability, availability, performance, and scalability of systems
Maintain and troubleshoot code in large-scale, complex environments.
Collaborate with Developers, Product and Program Management, and senior technical staff to deliver customer-centric solutions.
Provide technical input for new feature requirements, partnering with business owners and architects
Ensure continuous improvement by staying abreast of industry trends and emerging technologies
Drive the implementation of solutions aligned with business objectives.
Mentor and guide less experienced team members, helping them enhance their skills and grow their careers
Participate in code reviews, ensuring code quality and adherence to standards
Collaborate with cross-functional teams to achieve project goals
Actively contribute to architectural and technical discussions
requirements
At least 3 years of production experience in Data Software Engineering
Expertise in Databricks, Microsoft Azure, PySpark, Python, and SQL, both for building solutions in development and for enabling their deployment to production
Experience with Azure DevOps, GitHub, or other version control tools for effective project management
Ability to develop end-to-end production solutions
Strong experience working on one or more cloud platforms such as Azure, GCP, AWS
Experience in building out robust data pipelines
Ability to tie together solutions across systems
Excellent communication skills in spoken and written English, at an upper-intermediate level or higher
nice to have
Experience with REST APIs and Power BI would be a plus
Join our remote team as a Senior Data Software Engineer.
Join our remote team as a Senior Data Software Engineer within a leading tech firm. In this role, you will be responsible for designing, building, and optimizing robust data pipelines to support our cutting-edge applications. The successful candidate will have deep expertise in one of the languages (Python, Spark, PySpark, SQL) and be able to build in development and enable deployment to production. We are seeking a thoughtful and reliable engineer who is capable of tying together solutions across systems and delivering end-to-end production solutions.
responsibilities
Design and implement scalable data pipelines to support our cutting-edge applications
Ensure data quality and data accuracy across all stages of data processing
Collaborate with cross-functional teams to understand business requirements and develop solutions that meet their needs
Develop and maintain codebase in accordance with industry best practices and standards
Troubleshoot and resolve issues in a timely and effective manner
Optimize data processing algorithms and improve application performance
Ensure compliance with data security and data privacy regulations
Conduct code reviews and ensure high code quality and compliance with standards and guidelines
Participate in architectural and technical discussions to help shape the product roadmap
Stay up-to-date with emerging trends and technologies in data engineering and analytics
requirements
At least 3 years of experience as a Data Software Engineer or in a similar role
Expertise in one of the languages (Python, Spark, PySpark, SQL) for building scalable and high-performance applications
Experience with Microsoft Azure for cloud-based infrastructure and application management
Experience using Databricks for building robust data pipelines
Experience using Azure DevOps, GitHub, or other version control systems
Familiarity with developing end-to-end production solutions
Ability to tie together solutions across systems
Excellent communication skills in spoken and written English, at an upper-intermediate level or higher
nice to have
Experience with GCP and AWS cloud platforms
Experience with Apache Kafka and Apache Beam for building data pipelines
Experience with machine learning and data science tools and frameworks
We are seeking a talented remote Senior Data Engineer with strong experience in Data Software Engineering.
The ideal candidate will have experience with Big Data technologies, primarily Spark, and will work with cross-functional teams to design, develop, and deploy high-performance data pipelines.
responsibilities
Design, develop and deploy high-performance data pipelines
Work with cross-functional teams to deliver complex projects
Develop, maintain and optimize ETL processes
Troubleshoot and optimize data pipeline processes
Create and maintain data models and databases
requirements
3+ years of software engineering experience
Good knowledge of Big Data technologies, primarily Spark
We are seeking a remote Senior Data Software Engineer to join our team.
The right candidate will have a strong technical background with expertise in Apache Spark, Microsoft Azure, and Python. This position offers a unique opportunity to work on a high-impact project with one of the world's most recognized brands.
responsibilities
Collaborate with cross-functional teams to build and maintain the data integration solution
Develop, build, and optimize data pipelines using Apache Spark, Microsoft Azure, and Python
Ensure data pipelines are scalable, maintainable, and reliable
Take data science models and make them production ready
Develop and maintain forecasting models to support business decisions
Ensure data quality and consistency across all data sources
Monitor and optimize data pipelines to ensure efficient and effective data processing
Collaborate with data scientists to develop and implement machine learning models
Work with the team to continuously improve and optimize the data integration solution for FedEx
Stay current with emerging technologies and trends in data software engineering
requirements
At least 3 years of experience in data software engineering or similar roles
Expertise in Apache Spark, Microsoft Azure, and Python
Strong knowledge of forecasting models, data science, and MLOps
Experience working with Databricks to build, maintain, and optimize data pipelines
Ability to take data science models and make them production ready
Experience with Git for version control
Understanding of basic Azure concepts including clouds, regions, ADLS, and compute
Strong analytical and problem-solving skills with the ability to think critically and creatively
Experience working in Agile development environments
Excellent English communication skills, both written and verbal (B2+ level)
nice to have
Experience with pandas for data manipulation and analysis
Strong knowledge of SQL and relational tables
Understanding of statistical models and the ability to develop models utilizing Python, Spark, etc.
Join our remote team as a Senior Data Software Engineer contributing to a project centered around Databricks workflows, APIs, analytical development, and data engineering.
In this role, you will play a pivotal part in constructing and sustaining intricate data pipelines while facilitating seamless deployments to production. Your involvement will extend to crafting end-to-end production solutions and collaborating with cross-functional teams to deliver top-tier solutions.
responsibilities
Contribute to the design and development of novel features within the Agile development framework (Scrum)
Prioritize and uphold high-quality standards across all development stages
Ensure the reliability, availability, performance, and scalability of systems
Troubleshoot and maintain code within expansive and intricate environments
Collaborate with Developers, Product and Program Management, and senior technical personnel to provide customer-centric solutions
Offer technical insights for new feature requirements, collaborating with business owners and architects
Stay abreast of industry trends and emerging technologies for continuous improvement
Implement solutions aligned with business objectives
Guide and mentor less experienced team members to foster skill enhancement and career growth
Participate in code reviews, upholding code quality and adherence to standards
Actively engage in architectural and technical discussions within cross-functional teams to achieve project goals
requirements
Minimum of 3 years of hands-on experience in Data Software Engineering in a production setting
Proficiency in Databricks, Microsoft Azure, PySpark, Python, and SQL for development and deployment to production
Familiarity with Azure DevOps, GitHub (or alternative platforms), and version control for effective project management
Capability to architect end-to-end production solutions
Robust experience on one or more cloud platforms like Azure, GCP, AWS
Proven track record in constructing resilient data pipelines
Ability to integrate disparate elements for comprehensive solutions across systems
Exceptional communication skills in both spoken and written English, at an upper-intermediate level or higher
nice to have
Exposure to REST APIs and Power BI would be advantageous
We are seeking a remote Senior Data Software Engineer with experience in PySpark, Azure Data Factory, and advanced SQL for the project.
The ideal candidate must have at least 2 years of solid, hands-on experience in Data Software Engineering with primary skills in Big Data and a strong data background.
responsibilities
Conduct data analysis and troubleshooting
Plan and implement new requirements/data entities on EDL
Provide support for integration testing
Make sure data pipelines are scalable and efficient
requirements
3+ years of relevant work experience
Must-have skills in DSE: Python and Azure Databricks
Familiarity with EDL changes in DB Views/Stored procedures and integration testing support
Advanced knowledge of PySpark, Azure Data Factory, and SQL
Ability to collaborate effectively with the team
Excellent communication skills with an upper-intermediate level of English
nice to have
Experience with HDInsight, Azure Data Lake, Data API, Spark, Scala, and Kafka will be an added advantage
We are seeking a highly skilled Senior Data Software Engineer to join our team, working with one of the biggest sportswear brands in the world.
We are seeking a highly skilled Senior Data Software Engineer to join our remote team, working with one of the biggest sportswear brands in the world. As a Senior Data Software Engineer, you will play a crucial role in designing, developing, and maintaining data engineering solutions. You will be responsible for building scalable and reliable data pipelines, ensuring data quality, and optimizing data processing performance. If you have a passion for data engineering and are excited about working with cutting-edge technologies, we invite you to join us.
responsibilities
Design, implement, test and deliver robust, scalable and reusable data processing and ETL solutions
Automate data extraction, transformation and provisioning, irrespective of type of source and target
Participate in testing activities, create own test cases, manage test data, set up environments
Drive collaborative reviews of design, code, test cases and dataset implementation performed by other data engineers in support of maintaining data engineering standards
Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues
Mentor and develop other data engineers in adopting best practices
Represent the team as technical ambassador
requirements
A minimum of 3 years of experience in Data Software Engineering, showcasing your expertise in designing and developing data engineering solutions
Strong proficiency in Python, PySpark, Databricks, enabling you to build scalable and reliable data pipelines
Solid understanding of data structures and algorithms, enabling you to optimize data processing performance
Good knowledge of Amazon Web Services (AWS) or other cloud platform tools
Experience working with data warehousing tools, including DynamoDB, Amazon Redshift, and Snowflake
Working CI/CD knowledge and experience using CI tools such as Jenkins
Good testing experience with some of the most common tools in the market (PyTest, Nose, Cucumber, JBehave, etc.)
Fluent spoken and written English at an Upper-Intermediate level or higher, enabling effective communication
nice to have
Experience in building and deploying machine learning models
Familiarity with Apache Airflow or other workflow management tools
We are seeking a highly skilled Senior Data Software Engineer to join our remote team, working on a tech-enabled healthcare delivery system focused on improving the health of diverse populations.
We are seeking a highly skilled Senior Data Software Engineer to join our remote team, working on an integrated, holistic, and tech-enabled healthcare delivery system focused on improving the health of diverse populations. As a Senior Data Software Engineer, you will be responsible for designing and maintaining data integration pipelines and ETL/ELT workflows, implementing end-to-end data solutions. You will work closely with stakeholders to analyze data requirements and collaborate with the data science team to identify optimal solutions for problems related to data quality, reliability, and performance.
responsibilities
Design and maintain data integration pipelines and ETL/ELT workflows
Develop and implement end-to-end Data solutions to meet business requirements
Ensure that data solutions are scalable, secure, and optimized for performance
Work with stakeholders to analyze data requirements and implement data models that support business objectives
Collaborate with the data science and engineering team members to identify optimal solutions for problems related to data quality, reliability, and performance
requirements
3+ years of experience in Data Software Engineering, demonstrating your expertise in AWS, Apache Spark, PySpark
Demonstrated experience in designing and maintaining data integration pipelines and ETL/ELT workflows
Proficient in implementing best practices for data management, data governance, quality control, and data security
Ability to work independently and manage stress effectively, maintaining a high level of performance even under pressure
Fluent spoken and written English at a B2 level or higher
nice to have
Experience in Integration Management