Business & Decision is looking for consultants to reinforce its Data Intelligence practice. Are you passionate about development? Do you have a strong interest in data, clean code, unit testing and continuous development? If you are eager to master the storage, organization and exploitation of petabytes of data, you should probably keep on reading!
What you can expect
As a Data Engineer, you will be part of B&D's Data Engineering & Architecture team. Your strong development skills will allow you to work on data pipeline implementation, storage, data crunching and manipulation.
In parallel, you will follow a Data Engineer career path where you will learn how to design end-to-end Big Data solutions: understanding business requirements and translating them into the technical components that become the building blocks of the to-be solution. You will analyze data using sourcing, capturing, transformation and preparation techniques. You will work with new technologies such as Kafka, Flink, Storm, Spark, Hive, Sqoop, Oozie, Impala, Akka, Elasticsearch... and possibly become a Data Architect!
4 reasons to join us
- A multi-disciplinary approach: you are onboarded onto ambitious data intelligence projects and may collaborate with our digital department on large projects, cognitive solutions and many others.
- More than a job, a career path: Business & Decision is a fast-growing company, which means plenty of opportunities to grow in the direction you want. There is a lot of flexibility, career paths in different directions, relevant training and certifications, and the chance to take ownership of emerging trends on the market.
- You join a team of strong experts who value after-work activities and events just as much as hard work.
- You get an attractive consultant package including a company car, a strong career path, relevant training, challenging missions and projects, and international perspectives.
We challenge you to apply directly if you recognize yourself in some of the following attributes. See you soon!
- Master's degree in Computer Science, Engineering or a related field
- Strong interest in data manipulation (storage, crunching, pipeline implementation)
- Minimum 2 years of experience with ICT projects (preferably covering both software and hardware)
- Strong development experience in Java, Scala or C++
- Knowledge of continuous development/deployment concepts (Git, SBT/Maven, Jenkins, Nexus/Artifactory, ...)
- Experience with SQL-like data manipulation is a must (SQL Server, Oracle, MySQL, PostgreSQL)
- Knowledge of Linux environments
- Good communication skills
- Conceptual and creative thinking
Nice to have
- First experience with the Hadoop ecosystem (Spark, Kafka, HDFS, Hive, HBase, ...)
- Experience with web services development (REST or SOAP)
- Knowledge of scripting languages such as Python, Perl, Bash...
- Knowledge of Docker and orchestration platforms (Mesos, Swarm, Kubernetes)
- Experience with cloud platforms (AWS, Azure, Google Cloud)
- Experience with NoSQL databases (MongoDB, Cassandra, Neo4j)
Additional talents? Strange hobby? Weird humor? Our data heroes are like you, they share