AUSY is part of the Randstad Holding and offers services in 6 areas of expertise: Finance, IT, Human Resources, Life Sciences, Engineering and Sales & Marketing. However, our people are the asset that makes us stand out. Their passion, entrepreneurship, and fun attitude define the AUSY DNA.
We are constantly on the lookout for new inspired talent to join our organisation. Do you have what it takes to become an AUSY expert?
For a client located in Brussels, we are looking for a Freelance Data Engineer.
Our client is looking for a data engineer to reinforce their advanced Business Analytics team and bridge the gap between their raw business data, with its specific technical and business representations, and the creative data needs of their Data Scientists and Visualisation experts.
The engineer will be responsible for managing, optimizing and monitoring data retrieval, storage and distribution throughout the team, supporting business objectives by unlocking value from the company's large datasets and thereby fostering data-driven decision making across the organization.
As a data engineer you will collaborate with Data Scientists, Visualisation Experts, Business Analysts and IT colleagues in several domains, in order to improve
- the reaction time to quickly changing data requirements
- data models that feed business analytics tools
- data accessibility and quality
- data consolidation into an Azure cloud environment
- general data engineering knowledge within the team.
There will be two main challenges: the data engineer will spend half their time migrating analytics data to a new Azure Data Lake cloud solution, and the other half on on-premise data engineering.
Responsibilities for the Azure migration
- The engineer builds a new analytics solution in collaboration with IT and the Analytics team
- implements, consolidates and troubleshoots the Azure Cloud solution
- designs data lakes and (data) ingestion pipelines
- identifies, designs and implements cloud process improvements
- optimizes cloud data delivery
Day-to-day responsibilities
- The engineer builds and maintains data flows and pipelines to support continuing increases in data volume and complexity.
- constructs data sets that are easy to analyse and contain reliable, high-quality data.
- implements methods to improve data reliability, quality and processing efficiency, understands how to optimize data retrieval and automates and monitors deployment tasks.
- combines raw information from different sources to create consistent and reusable formats.
- troubleshoots data related issues and assists in their resolution.
- also develops and tests architectures that enable agile data extraction and transformation for predictive modelling.
The role requires a significant set of technical and analytical skills, including
- a deep knowledge of SQL database design
- programming languages like Python or R (RStudio)
- Big Data (Parquet, Delta, Blob Storage)
- experience in a project consolidating data from heterogeneous sources (on-premise + cloud) into a Microsoft Azure Data Lake cloud environment.
Experience in the following is considered an asset
- Spark / Python via Azure Databricks
- Experience with version control and DevOps tools (SVN, GitHub, Azure DevOps)
- Experience with delivering data for analysis and reporting (Power BI), ...
Do you recognize yourself in this role?
Please apply via the button below!