• Learning and certification platform
  • Hybrid work - 2-3 days in office (Poznań / Warszawa)
  • B2B Contract (160–190 PLN net + VAT / h)

Requirements
▪ Bachelor’s or master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
▪ 5+ years of experience in data engineering or data infrastructure roles.
▪ Proficiency in SQL, Python, and cloud-based data platforms (e.g., Azure).
▪ Experience with data pipeline tools (e.g., Azure Data Factory) and big data processing frameworks (e.g., Databricks).
▪ Strong understanding of data governance, data quality, and compliance standards.
▪ Familiarity with data lake architectures, especially Unilever Data Lake (UDL), is a strong advantage.
▪ Excellent problem-solving, communication, and collaboration skills.

Skills
▪ Data Pipeline Development
▪ Data Quality & Governance
▪ Cloud Data Platforms (Azure, Databricks, UDL)
▪ SQL, Python
▪ Data Security & Compliance
▪ Business Alignment & Collaboration
▪ Agile & Continuous Improvement Mindset

Responsibilities

▪ Data Pipeline Development & Maintenance: design, build, and maintain scalable data pipelines for CRM, SSD, and Digital Engagement data, leveraging the Unilever Data Lake (UDL) as the central data platform.
▪ Ensure seamless ingestion, transformation, and delivery of data to support analytics, reporting, and machine learning use cases.
▪ Data Quality Management: implement automated data validation and monitoring processes within UDL to ensure accuracy, completeness, and consistency of data.
▪ Collaborate with Data Scientists and Reporting Specialists to identify and resolve data quality issues proactively.
▪ Data Governance Implementation: apply global data governance frameworks to datasets stored in UDL, ensuring alignment with metadata standards, lineage tracking, and access controls.
▪ Data Compliance & Security: ensure all data processed through UDL complies with internal policies and external regulations (e.g., GDPR).
▪ Data Processing: design and optimize scalable data pipelines using tools such as Azure Data Factory and Databricks, enabling efficient ingestion, transformation, and delivery of data for analytics and reporting.
▪ Cross-Functional Collaboration: work closely with the Hub Lead, Data Scientists, Reporting Specialists, and Global Tech teams to align data engineering efforts with market and analytics needs.
▪ Support for Market Deployment: adapt global data engineering solutions to meet local requirements while maintaining consistency.
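By way of illustration, the automated data validation mentioned above could take the shape of a small batch-level check like the sketch below. This is not part of the role description: the field names (`contact_id`, `email`) and the completeness threshold are hypothetical, and a real pipeline would typically express such checks in the platform's own tooling (e.g., Databricks expectations) rather than plain Python.

```python
# Minimal sketch of an automated completeness check for a batch of records,
# of the kind a data-quality step in a pipeline might run before loading.
# Field names and the 5% null threshold are illustrative assumptions.

def validate_records(records, required_fields, max_null_ratio=0.05):
    """Return (passed, issues) for a batch of dict records.

    A field fails if the share of missing/empty values exceeds max_null_ratio.
    """
    issues = []
    if not records:
        return False, ["empty batch"]
    for field in required_fields:
        nulls = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = nulls / len(records)
        if ratio > max_null_ratio:
            issues.append(f"{field}: {ratio:.0%} missing exceeds {max_null_ratio:.0%}")
    return not issues, issues

# Example batch: one record is missing its email, so the check reports it.
batch = [
    {"contact_id": "c1", "email": "a@example.com"},
    {"contact_id": "c2", "email": ""},
]
ok, problems = validate_records(batch, ["contact_id", "email"])
```

In practice the `(passed, issues)` result would feed a monitoring dashboard or fail the pipeline run, so quality problems surface before downstream analytics consume the data.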

We are Devire, a recruitment company whose goal is to connect great people with great employers.

Whether you are looking for a new permanent job or a B2B project, you can count on our support at every step.

We work with employers across Poland and run recruitment in all key technology areas.