Senior Big Data Developer for a Canadian Retail Company
Currently, we are looking for a Remote Senior Big Data Developer to join our team.
The customer is a Canadian vehicle retailer operating online.
- Creating and maintaining scalable, maintainable, and reliable pipelines that process large quantities of structured and unstructured data
- Unifying streaming and batch processing into one cohesive framework, including monitoring and alerting, that fits into a unified and reliable data platform infrastructure
- Participating in the design and implementation of data models as part of modern data lake, data warehouse and ETL/ELT approaches
- Building streaming data pipelines that enable the business to engage consumers in real-time
- Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Regularly reviewing work with peers to improve personal knowledge of the systems and to ensure that code changes meet business goals and follow technology best practices
- Regularly interacting and collaborating with personnel across functions (product, technology, analytics, operations) within the assigned team and technology chapter
- Working with stakeholders including the Executive, Product, and Data teams to assist with data-related technical issues and support their data infrastructure needs
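The batch/stream unification mentioned in the responsibilities can be pictured as one shared transformation applied in both processing modes. A minimal sketch in Python, with hypothetical record fields (`id`, `amount`) that are not from the posting:

```python
from typing import Iterable, Iterator

def clean_record(record: dict) -> dict:
    """Shared transformation applied identically in batch and streaming mode."""
    return {"id": record["id"], "amount": round(float(record["amount"]), 2)}

def process_batch(records: Iterable[dict]) -> list[dict]:
    # Batch mode: materialize the whole result at once.
    return [clean_record(r) for r in records]

def process_stream(records: Iterable[dict]) -> Iterator[dict]:
    # Streaming mode: yield cleaned records one at a time as they arrive.
    for r in records:
        yield clean_record(r)

batch = [{"id": 1, "amount": "19.999"}, {"id": 2, "amount": "5"}]
# Both modes produce identical output because they share clean_record.
assert process_batch(batch) == list(process_stream(iter(batch)))
```

In a real platform the two entry points would be a batch job and a stream consumer, but keeping the transformation logic in one place is what makes the framework cohesive.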
- 3+ years of experience in Big Data Engineering
- Experience with DataOps: improving the quality and reducing the cycle time of data analytics
- Experience working with data that is constantly generated by different data sources
- Experience with ETL / ELT tools: knowledge and experience of tools to complete ETL / ELT process (e.g. Fivetran, Talend, Airflow, dbt, FlyData, or similar tools)
- Excellent problem solving, troubleshooting, and communication skills
- Good understanding of the ETL/ELT process: collecting data from multiple sources and loading it into a centralized data warehouse
- Knowledge of data pipeline management: ensure consistent and reliable data, monitoring for failures
- Deep understanding of data modeling: creating a visual representation of an information system, defining and analyzing data requirements
- Familiarity with query optimization: analyzing queries and choosing the most efficient way to execute SQL
- Knowledge of API design: handling dependencies, third-party integration, versioning
- Programming experience: Python, Java, Scala, etc.
- Knowledge of cloud data warehouses: Snowflake, AWS
- B2+ level of English
- Demonstrated ability to design and write maintainable software
- Understanding of software engineering best practices, object-oriented analysis and design, and design patterns and algorithms
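The ETL/ELT flow the requirements describe (collect from multiple sources, load into a centralized warehouse) can be sketched minimally in Python; the two sources and the in-memory "warehouse" below are stand-ins for illustration, not tools named in the posting:

```python
# Hypothetical raw data from two sources with different field names.
orders_api = [{"order_id": "A1", "total": "10.50"}]
orders_csv = [{"id": "B2", "amount": "7.25"}]

def extract() -> list[dict]:
    # Collect rows from each source and map them onto one common schema.
    rows = [{"id": r["order_id"], "amount": r["total"]} for r in orders_api]
    rows += [{"id": r["id"], "amount": r["amount"]} for r in orders_csv]
    return rows

def transform(rows: list[dict]) -> list[dict]:
    # Cast string amounts to numbers before loading.
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

warehouse: list[dict] = []  # stand-in for the centralized warehouse

def load(rows: list[dict]) -> None:
    warehouse.extend(rows)

load(transform(extract()))
```

In an ELT variant the raw rows would be loaded first and transformed inside the warehouse (e.g. with dbt); the staged structure of the code stays the same.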
Looking for something else?
Find a vacancy that works for you. Send us your CV to receive a personalized offer.