Cegeka's Data Solutions team manages and executes data platform, BI, Big Data, Advanced Analytics and AI projects for our clients with a pragmatic, low-threshold approach, so that our clients quickly get the right information and can easily scale as their information needs grow. We always apply a Cloud First Strategy. The years of experience of our specialists, our established methodology and the feedback from our clients make Cegeka an ideal long-term partner for our clients, business partners and employees alike.
As a Data Engineer you will work remotely in a customer-facing role, collaborating with colleagues on Business Intelligence, Big Data and Analytics projects.
We expect a lot from you and, at the same time, we offer you a lot: modern technologies to work with, exceptionally well-trained colleagues and career opportunities.
Here are the main responsibilities:
- Create and maintain optimal data pipelines.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Microsoft technologies.
- Build analytics tools that utilise the data pipeline to provide actionable insights, keeping in mind the key business performance metrics defined by the end customers.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in the data systems.
Requirements:
- At least 3 years of experience as a Data Engineer
- Solid experience with Azure Databricks
- Experience with Azure cloud platform (certifications are nice-to-have, e.g. DP-203)
- Experience in data pipeline development and data integration
- Strong proficiency in SQL and Python (PySpark)
- Experience with big data technologies and distributed computing frameworks such as Spark
- Expertise in data warehousing and ETL principles, and familiarity with data modelling
- Understanding of data governance and data security best practices.
- Strong analytical and problem-solving skills with a keen attention to detail.
- Excellent communication skills, both written and verbal, e.g. the ability to convey technical information to non-technical stakeholders
- Proven experience in a customer-facing role
- Ability to align data insights with customer’s business objectives and strategies
- Ability to manage stakeholder expectations
- Flexibility in adapting to changing project requirements
- Ability to work collaboratively in a team environment and manage multiple tasks simultaneously, as you may work on several projects in parallel
What we offer 🎁
- 22 annual vacation days and 3 sick days that are not carried over to the next year (no medical certificate required), plus one seniority day added for every 3 years with the company
- Meal tickets
- Private Medical Insurance
- Flexible work-from-home (WFH) policy
- Well-being at the center: we know that there is more to our lives than our jobs, so we address well-being in day-to-day life through specialised sessions, webinars, and internal programs shaped by our employees' input
- "In close cooperation" is a value we live by, expressed through #MomentsThatMatter: monthly hangout parties, team-building events and gamified online experiences
- Many more to come :)