As a big data engineer, you are entrusted with implementing data processing flows according to the standards set by your peers and the business requirements.
Part of our daily work is already organized in a rigorous way; the rest you can shape yourself as needs evolve. Your previous experience, and how you apply it in our environment, is of utmost importance.
Responsibilities:
- Work closely with data analysts to build and validate the data model
- Develop processing flows using a generic Spark ETL tool written in Scala (a minimal sketch follows this list)
- Publish lineage and metadata to the data catalog
- Document the technical steps taken
- Push for new approaches that increase the team's efficiency
- Raise the team's knowledge level by sharing key details about big data technologies and tools
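To give a flavor of the day-to-day work, here is a minimal sketch of what one Spark ETL step in Scala might look like. The paths, schema, and column names are hypothetical illustrations, not part of our actual tooling; in practice the generic ETL tool mentioned above would drive steps like this from configuration rather than hand-written jobs.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Extract: read raw orders from a hypothetical landing zone.
    val raw = spark.read
      .option("header", "true")
      .csv("/data/landing/orders")

    // Transform: basic cleansing plus a daily revenue aggregate.
    val daily = raw
      .filter(F.col("status") === "COMPLETED")
      .withColumn("amount", F.col("amount").cast("double"))
      .groupBy(F.to_date(F.col("created_at")).as("order_date"))
      .agg(F.sum("amount").as("revenue"))

    // Load: write partitioned Parquet to a hypothetical curated zone.
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("/data/curated/daily_revenue")

    spark.stop()
  }
}
```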
Concepts, in order of importance:
- Git, Hadoop, Spark, SQL
- Scala or Python, CI/CD pipelines
- Cloud technologies (role management, scalable clusters)
- Data exploration and visualization tools (notebooks, BI tools)