This job offer is no longer active.
Feel free to send us your résumé - we will get in touch if the project resumes or a similar position opens up.
Key Responsibilities:
- Building the Data Lake in AWS where we process 500M+ records daily
- Maintaining the Data Warehouse (AWS Redshift)
- Constructing ETL data pipelines with Airflow, Python, and Spark
- Standardizing and cleaning data from many different sources (SQL and NoSQL databases, APIs, queues) for internal reporting
- Architecting a scalable Data Streaming solution - our next big milestone
Desired skills & experience:
- At least 3 years of experience in a similar role
- Good knowledge of:
- Data Lake (AWS S3, EMR, Glue, Athena, Kinesis, Apache Hudi, Delta Lake)
- ETL pipelines in Python
- Docker and Kubernetes
- Advanced SQL and performance tuning
We offer:
- Employment based on a B2B contract
- 100% remote work, recruitment & onboarding
- 100% paid holidays (24 working days)
- Private medical insurance (basic dental services included) and Multisport
- Innovative and complex platform that will go live in 2020
- An experienced team with 4 to 10+ years in commercial projects
- Unique memes channel
- A lot of common sense, common goal approach and general friendliness