Data architect with 10-12 years of experience in data management and data landscape assessment, focused on Azure cloud platforms.
Designs, builds, and maintains scalable data pipelines and ETL/ELT processes within Fabric, using components like Azure Data Factory, PySpark notebooks, and OneLake (a PySpark pipeline sketch follows this summary).
Leverages Fabric's capabilities for machine learning, advanced analytics, and predictive modeling, using PySpark, Python, and built-in MLflow for experiment tracking (see the MLflow sketch after this summary).
Builds visually compelling reports and dashboards in Power BI, creating data models, implementing DAX calculations, and connecting to various data sources.
Develops data models, reports, and dashboards, and designs and optimizes data pipelines using Spark within Fabric.
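
A minimal sketch of the kind of Fabric notebook ETL described above, assuming a hypothetical CSV landing path under Files/ and an illustrative Lakehouse table name (silver_trades); the cleaning rules are examples, not actual pipeline logic:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # In a Fabric notebook a `spark` session is pre-created; building one here keeps the sketch self-contained.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical landing path and column names, for illustration only.
    raw = spark.read.option("header", True).csv("Files/landing/trades.csv")

    cleaned = (
        raw.withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
           .withColumn("quantity", F.col("quantity").cast("int"))
           .dropDuplicates(["trade_id"])
           .filter(F.col("quantity") > 0)
    )

    # Write a managed Delta table to the Lakehouse (table name is illustrative).
    cleaned.write.mode("overwrite").format("delta").saveAsTable("silver_trades")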
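
A brief sketch of MLflow experiment tracking from a Fabric notebook, assuming scikit-learn is available; the experiment name, synthetic data, parameters, and metric are illustrative:

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Illustrative experiment name; Fabric notebooks track runs with the built-in MLflow service.
    mlflow.set_experiment("client-churn-poc")

    # Synthetic data stands in for real features purely to keep the sketch runnable.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=200, random_state=42)
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        mlflow.log_param("n_estimators", 200)
        mlflow.log_metric("auc", auc)
        mlflow.sklearn.log_model(model, "model")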
Key Technologies and Skills
Core Platform: Microsoft Fabric, including OneLake, Lakehouse, Dataflows, Pipelines, and Notebooks.
Programming Languages: Python, PySpark, and SQL for data manipulation and scripting.
Data Processing Tools: Azure Data Factory, Azure Synapse Analytics, and Spark.
Business Intelligence: Power BI for reporting and data visualization.
Data Modeling: Understanding data warehousing concepts and designing efficient data models.
Cloud Platforms: Azure primarily, with potential exposure to AWS.
Data Governance: Implementing row-level security and ensuring data quality and integrity (see the data-quality sketch at the end of this list).
Collaboration: Working with stakeholders and other teams to gather requirements and deliver solutions.
Sector: Knowledge of the Wealth and Asset Management sector, with exposure to fund/security management and data systems.
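
As an illustration of the data-quality checks mentioned under Data Governance, a minimal PySpark sketch assuming a hypothetical Lakehouse table (silver_trades) with a trade_id key; the rules shown are examples of basic integrity checks:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical Lakehouse table and key column.
    df = spark.read.table("silver_trades")

    # Basic integrity rules: no null keys, no duplicate keys.
    null_keys = df.filter(F.col("trade_id").isNull()).count()
    duplicate_keys = df.count() - df.dropDuplicates(["trade_id"]).count()

    assert null_keys == 0, f"{null_keys} rows have a null trade_id"
    assert duplicate_keys == 0, f"{duplicate_keys} duplicate trade_id values found"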