Your tasks
- Work as the Big Data developer in a self-organizing Scrum team
- Implement data sourcing and transformation code, perform semantic data modelling, and build APIs
- Manage the deployment, maintenance, and L3 user support of the reporting / data access tool(s)
- Create technical documentation of all delivered artifacts
- Perform other duties as assigned
Project description
The successful candidate will join a team, working daily in Scrum, that consists of 16 people (two of them second-year TAs) and is focused on the development of a particular product (HRDS). The team includes several senior developers with DWH and BI expertise, some with experience on the Cloudera/Hortonworks Big Data platforms, as well as two testers managed by an experienced test lead.
Who we're looking for
MUST
- Project experience (min. 2 years) with at least one of the following Big Data platforms: Cloudera, Hortonworks
- Knowledge of Hadoop ETL tools (Sqoop, Impala, Hive, Oozie)
- SQL programming skills
- Bash scripting experience (min. 2 years)
- Working knowledge (min. 2 years) of at least one of the following programming languages: Python, PySpark, R, Scala, Java
- Self-motivated team player with good problem-solving skills
- Ability to meet tight deadlines and work under pressure
NICE TO HAVE
- Pentaho skills are a plus
- Experience with Tableau reporting is a plus
- Experience in semantic data modelling is a big plus
- Knowledge of the Anzo mapping tool is a big plus
- Experience with REST APIs, ESB and/or Apigee is a plus
- Knowledge of reporting solution modelling is a plus
- Familiarity with the Scrum methodology is appreciated
- Experience with containers for Cloudera is a plus
- Experience with cloud-based data solutions (Azure, AWS) is a plus