As a Big Data Engineer, you will be responsible for designing and implementing the entire data pipeline for our data warehouse systems. You are expected to have in-depth technical and hands-on expertise in big data frameworks. You will design, build, and maintain data management systems that support near-real-time data refresh with minimal latency, identify the optimal processes for accessing the data, and help build the analytics systems that sit on top of the big data platform.
A critical aspect of the role is working with different teams (product, customer support, business development, and analytics) to understand their data and reporting needs and to continuously improve the big data system so that it works seamlessly at scale.
Desired Skills:
- Hands-on experience with big data technologies, including Spark, Kafka, Scala, and Airflow, along with a cloud ecosystem such as AWS, GCP, or Azure
- Well versed in data warehouse platforms such as Redshift or BigQuery
- Scripting and automation in Python or shell, and comfort working in a Linux environment