Key Responsibilities
- Develop and maintain ETL/ELT pipelines for data ingestion and transformation.
- Work with Azure Data Factory, Databricks, and SQL-based systems to manage data workflows.
- Optimize data processing and storage for performance and scalability.
- Collaborate with data scientists and analysts to enable seamless access to data.
- Ensure data quality, integrity, and governance across pipelines.
- Automate and monitor data jobs to ensure reliability.
- Work with structured and semi-structured data (JSON, Parquet, etc.).
Required Skills & Experience
- 2–3 years of experience in data engineering.
- Hands-on experience with Azure Data Factory, Databricks, and SQL.
- Strong knowledge of Python or PySpark for data processing.
- Experience working with cloud-based databases (Azure SQL, Synapse, Snowflake, etc.).
- Good understanding of data modeling, data lakes, and warehouse architecture.
- Familiarity with version control (Git) and CI/CD for data workflows.
- Ability to troubleshoot and optimize data workflows.
Good to Have
- Knowledge of Azure Functions, Event Hubs, and Kafka.
- Exposure to DataOps and Infrastructure as Code (Terraform, ARM templates).
EnablerMinds is a leading Data & AI consultancy specializing in Data Lakehouse platforms and AI solutions built on Databricks, Microsoft Azure, AWS, and GCP. With a team of data engineering and AI experts, we help enterprises modernize their data landscape and drive business transformation. Our methodologies and staffing model deliver high-performance, scalable solutions across industries.