The candidate should have strong working experience with big data technologies, including Spark, Hadoop, and Hive, as well as experience building and deploying data pipelines on Google Cloud Platform (GCP).

Requirements:
- Experience in big data engineering
- Strong understanding of Spark
- GCP certification
- Experience with data modelling and data visualization
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
Responsibilities:
- Design, develop, and deploy data pipelines on GCP
- Work with data analysts to build and maintain data models
- Automate data processing tasks
- Troubleshoot and optimize data pipelines
- Provide technical support to other members of the data engineering team