Middle+/Senior Data Platform Engineer (Snowflake)
We are looking for a Middle+/Senior Data Platform Engineer (Snowflake):
Language Proficiency: Upper-Intermediate
Employment Type: Full-time
Candidate Location: Poland
Working Time Zone: CET
Planned Work Duration: 12+ months
Customer Description:
Our Client is a leading global management consulting company recognized for delivering high-impact solutions across industries.
The company works with large global enterprises across finance, media, technology, and public sector organizations, providing advanced platforms and consulting services.
Project Description:
This project is part of a federated data delivery initiative within a secure enterprise technology ecosystem. The focus is on building and maintaining robust data pipelines that collect and process data from multiple enterprise systems and cloud platforms.
The objective is to enable leadership to gain actionable insights aligned with strategic goals and to support product and service teams in targeting appropriate user groups while measuring the effectiveness of AI-driven productivity initiatives.
Project Phase: ongoing
Project Team: Program Manager, 2 Product Managers, 2 Engineers, User Researcher, Design Professional, Analytics Lead
Soft Skills:
• Highly proactive, with the ability to independently identify stakeholders and drive tasks to completion
• Strong stakeholder management skills, with the ability to engage diverse roles across technical and product teams
• Curious mindset with a focus on continuous improvement and challenging existing processes
• Excellent communication skills for effective collaboration with cross-functional teams
• Strong time management with a high level of organization and reliability
Hard Skills / Must Have:
• 5+ years of data engineering experience
• Python – scripting, API development, and pipeline creation
• Apache Airflow – pipeline orchestration; Dagster or Prefect accepted as alternatives
• AWS services – especially Glue and Lambda; experience deploying and maintaining production workloads
• Apache Spark – distributed processing, particularly within AWS Glue
• Snowflake – preferred data warehouse; Redshift or BigQuery accepted if the concepts transfer cleanly
• CI/CD pipelines – GitHub Actions or similar; this is how pipelines and scripts are deployed to Airflow and Glue
• API experience – consuming third-party APIs and building internal APIs with Python
• Git / GitHub – version control, branching strategy, pull request workflow
• PostgreSQL or other OLTP databases – operational data access and integration
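To give a feel for the kind of work these requirements describe, here is a minimal, self-contained sketch of a Python extract-transform step. All function names, field names, and sample records are invented for illustration; a real pipeline would call a third-party API and load the result into Snowflake, typically with each step orchestrated as an Airflow task:

```python
# Minimal ETL sketch (illustration only). Every name and record here is
# hypothetical; a real pipeline would call a paginated third-party API and
# write to Snowflake, with each step wired up as an Airflow task.
from datetime import date


def fetch_usage_records(day: date) -> list[dict]:
    # Stand-in for a third-party API call (e.g. via `requests`).
    return [
        {"user": "a@example.com", "tool": "assistant", "events": 12, "day": str(day)},
        {"user": "b@example.com", "tool": "assistant", "events": 0, "day": str(day)},
    ]


def transform(rows: list[dict]) -> list[dict]:
    # Drop inactive users and rename keys to match the warehouse table.
    return [
        {
            "USER_EMAIL": r["user"],
            "TOOL": r["tool"],
            "EVENT_COUNT": r["events"],
            "USAGE_DATE": r["day"],
        }
        for r in rows
        if r["events"] > 0
    ]


def run(day: date) -> list[dict]:
    # The load step (snowflake-connector-python, or COPY INTO from S3)
    # is omitted; this returns the rows that would be loaded.
    return transform(fetch_usage_records(day))
```

In production the fetch, transform, and load steps would each be separate Airflow tasks so failures can be retried independently.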
Hard Skills / Nice to Have:
• Snowflake Cortex – increasingly used within the team
• Scala – distributed data processing tasks
• Agentic frameworks – LangChain, the Pydantic ecosystem, or similar
• Snowflake access and role management – RBAC, column-level security (ABAC)
Responsibilities and Tasks:
• Build data ingestion pipelines integrating AI tools and internal platforms into Snowflake
• Maintain and harden the existing Snowflake infrastructure – schemas and tables that grew organically without data engineering input – and bring them up to standard
• Deploy work through CI/CD pipelines into Airflow or AWS Glue
• Manage and process access requests
• Collaborate proactively with product managers and engineers to identify data needs
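One of the responsibilities above, processing access requests, pairs with the nice-to-have Snowflake RBAC skill. As a hedged sketch of what that can look like in code, the helper below generates the GRANT statements for a read-only role; the role, database, and schema names are invented, and a real implementation would execute the statements via snowflake-connector-python and log the request for audit:

```python
# Hypothetical sketch: turning an approved access request into Snowflake
# GRANT statements for read-only access. All identifiers are invented; a
# real implementation would run these through snowflake-connector-python.
def grants_for_read_access(role: str, database: str, schema: str) -> list[str]:
    fq_schema = f"{database}.{schema}"
    return [
        f"GRANT USAGE ON DATABASE {database} TO ROLE {role};",
        f"GRANT USAGE ON SCHEMA {fq_schema} TO ROLE {role};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {fq_schema} TO ROLE {role};",
        # The FUTURE grant keeps access current as new tables are created.
        f"GRANT SELECT ON FUTURE TABLES IN SCHEMA {fq_schema} TO ROLE {role};",
    ]
```

Generating statements as data rather than executing them inline makes the grants easy to review and to replay in CI/CD.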
Technology Stack: Python, Snowflake, Apache Airflow, Apache Spark, Scala, PostgreSQL, AWS
Ready to Join?
We look forward to receiving your application and welcoming you to our team!
Department: Data Engineering
Locations: Poland
Remote status: Fully Remote
About Bonapolia
For job seekers, Bonapolia offers a gateway to exciting career prospects and the chance to thrive in a fulfilling work environment. We believe the right job can transform lives, and we are committed to making that happen for you.