Middle Big Data Engineer work from home | EPAM Anywhere



Middle Big Data Engineer for a Healthcare Company

40 hrs/week, 12+ months

We are currently looking for a remote Middle Big Data Engineer with 2+ years of experience in Data Engineering to join our team.

The customer is an American multinational company working in the combined industries of health information technology and clinical research.

Please note that even though you are applying for this position, you may be offered other projects to join within EPAM Anywhere.

Join EPAM Anywhere to quickly and easily find projects that match your knowledge and experience, while working with Forbes Global 2000 clients, building a successful IT career, and earning competitive rewards. The platform provides additional perks, including a flexible schedule, professional development opportunities, and access to a community of experts.

Responsibilities

  • Perform data transformation and warehousing
  • Analyze and correlate datasets to user profiles
  • Create the required technical tasks in the backlog and update their status regularly
  • Provide production support for issues raised by the business; manage production/integration releases
  • Recommend changes to client software based on client feedback, language changes, and new technology
  • Develop shell scripts to orchestrate the execution of other scripts and to move data files within and outside of HDFS
  • Participate in all Agile ceremonies, update task status, and adhere to timelines and quality standards
  • Work with the business and peers to define, estimate, and deliver functionality

Requirements

  • 2+ years of Big Data engineering experience
  • In-depth knowledge of Hadoop and Spark architecture and components such as HDFS, JobTracker, TaskTracker, executor cores, and memory parameters
  • Experience in Hadoop development; working experience with Spark and Scala is mandatory, and database exposure is a must
  • Hands-on experience with Spark and Spark Streaming: creating RDDs and applying transformations and actions
  • Experience optimizing code to fine-tune applications
  • Expertise in writing Hadoop/Spark jobs to analyze data using Spark, Scala, Hive, Kafka, and Python
  • Experience with streaming workflow operations
  • Experience developing large-scale distributed applications and solutions that analyze large data sets efficiently
  • Experience with data warehousing and ETL processes
  • Strong database, SQL, ETL, and data analysis skills
  • Experience developing transformation scripts in Scala
  • Experience using Kafka for publish-subscribe messaging as a distributed commit log, including its speed, scalability, and durability
  • Ability to understand existing business logic and implement business changes
  • Experience with the full development life cycle, including system integration testing
  • Communication skills sufficient to work effectively with the business
  • Good logical reasoning and analytical abilities
  • Experience working with remote teams
  • Experience with Agile development methodology
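For context on the "transformations and actions" requirement above: in Spark, transformations (such as map and filter) lazily describe a computation over an RDD, while actions (such as reduce or count) trigger its execution. A minimal sketch of that model in plain Python, with no Spark dependency — the lazy chaining is simulated with generators, and the dataset is made up for illustration:

```python
from functools import reduce

# Toy dataset standing in for an RDD's partitioned data
data = [1, 2, 3, 4, 5, 6]

# "Transformations" (lazy): generators describe the work but
# compute nothing yet, much like rdd.map(...).filter(...)
squared = (x * x for x in data)             # like .map(lambda x: x * x)
evens = (x for x in squared if x % 2 == 0)  # like .filter(lambda x: x % 2 == 0)

# "Action" (eager): reduce forces the whole pipeline to run,
# analogous to rdd.reduce(lambda a, b: a + b)
total = reduce(lambda a, b: a + b, evens)
print(total)  # 4 + 16 + 36 = 56
```

In real Spark code the same shape applies, but transformations also record lineage so partitions can be recomputed on failure.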

We offer

  • Competitive compensation depending on experience and skills
  • Work on enterprise-level projects on a long-term basis
  • A 100% remote full-time job
  • Unlimited access to learning courses (EPAM training courses, regular English classes, internal library)
  • A community of 38,000+ top industry professionals
Big Data · Scala · Python.Core · SQL

Hours per week: 40

Project length: 12+ months

Locations eligible for the position: Belarus, Russia