ABOUT US
OPIS, part of IHS Markit, is one of the world's most comprehensive sources for petroleum pricing and news information. OPIS provides real-time and historical spot, wholesale/rack, and retail fuel prices for the refined products, renewable fuels, and natural gas and gas liquids (LPG) industries. At its core, OPIS relies on a set of complex IT systems and tools that handle huge amounts of data reliably and provide customers with business applications to use that data as efficiently as possible.
YOUR ROLE
OPIS is currently looking to expand its Romanian Big Data Development and Architecture team based in Bucharest. This team focuses on developing the data infrastructure that supports our large-volume transactional data processing, data science, and machine learning projects. This is a unique opportunity to join a fast-growing team that will become an important part of global OPIS IT, working closely with the OPIS IT HQ in Rockville, US. For this role, we are looking for a unique mix of solid technical capabilities and strong interpersonal skills.
- Design and develop Big Data architecture to support the processing of large volumes of data
- Write complex and efficient SQL or Python code to optimize data pipelines that ingest, cleanse, transform, and integrate data from multiple disparate sources
- Work extensively with AWS services to deploy and run our database code via CI/CD pipelines
- Develop the data pipelines to support data lake, data science and ML initiatives
- Be part of an Agile team, using the company’s latest database development life cycle and working closely with team members in Romania and the US
- Coach and mentor less experienced team members
- Adhere to best practice development standards for database design and architecture
- Believe in high quality, creativity, initiative and self-management in everything you work on
ABOUT YOU
- 8+ years of experience in data management technologies (Microsoft SQL Server, Postgres, advanced SQL coding, relational database design, data warehousing)
- 4+ years of experience developing with Big Data technologies (Spark, AWS EMR) and distributed data processing to handle large volumes of data
- 4+ years of experience writing Python or PySpark
- Experience working in both Linux and Windows environments
- Enjoy implementing new technologies and coming up with innovative solutions
- Value teamwork, being proud of both your own work and your team’s success
Nice to have:
- 1+ years of experience working with the Parquet file format and AWS S3 storage
- 1+ years of experience in writing infrastructure as code for AWS cloud services (Terraform, CloudFormation)
- 1+ years of experience creating CI/CD deployment pipelines in Azure DevOps
- 1+ years of experience working with containers (Docker, Kubernetes)
WHAT WE OFFER
- Attractive benefits package (medical services, special discounts for gyms, meal vouchers)
- Ongoing education (participation in conferences and training)
- Access to the most interesting information technologies
- Flexible Working Hours
- Work from home
- Three days for charity/volunteering
- Chillout & fun room (pool table, PlayStation)
- Fruit days; coffee, tea, and chocolate
- New and modern office, easy to access (M Aurel Vlaicu), with spacious desks and the latest technologies/equipment