- Bring Apache Spark and big data expertise to a local Scrum team, providing qualified deliverables and services on schedule
- Participate in progress reviews
- Work with scrum master(s) and tech lead(s) to analyze and understand user stories in each sprint
- Complete coding & unit testing for the allotted stories
- Create design documents or make changes to existing ones
- Complete code reviews
ML/cloud-based system that efficiently analyzes collected data to predict, prevent, and troubleshoot system failures and performance issues in smart devices
Multi-tenant, medium-to-high data volume processing
Data collected from smart devices is accessed from cloud (AWS) storage, where it is translated from device-specific schemas and file formats and transformed (e.g., selection of relevant data and features) before being fed to an ML model training subsystem; qualified models are then pushed to the production environment for prediction and execution. Data handling employs scalable Spark-based access, and the entire processing workflow is kept in sync via pipelines defined in Airflow.
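To make the data-handling step concrete, here is a minimal PySpark sketch of the translate/transform stage. The bucket paths, field names, and feature columns are illustrative assumptions, not identifiers from the actual system:

```python
# Minimal sketch of the Spark-based translate/transform step; all bucket
# paths, schema fields, and column names here are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("smart-device-feature-prep").getOrCreate()

# Read raw device telemetry from cloud (AWS) storage.
raw = spark.read.json("s3a://example-bucket/telemetry/date=2024-01-01/")

prepared = (
    raw
    # Translate from the device-specific schema to a common one.
    .withColumnRenamed("dev_id", "device_id")
    .withColumn("event_ts", (F.col("ts_millis") / 1000).cast("timestamp"))
    # Select only the data and features relevant for model training.
    .select("device_id", "event_ts", "cpu_temp", "mem_used_pct", "error_code")
    .filter(F.col("event_ts").isNotNull())
)

# Hand the training-ready feature set off to the ML training subsystem.
prepared.write.mode("overwrite").parquet("s3a://example-bucket/features/")
```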
The state of the entire data engineering workflow, including ML models, training, and execution, is available via a dashboard UI
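And a sketch of how such a workflow might be kept in sync via Airflow. The operators, task IDs, and commands below are assumptions for illustration; the posting only states that the pipelines are defined in Airflow, not how:

```python
# Illustrative Airflow DAG wiring the stages together; task names and
# commands are assumptions, not the team's actual pipeline definition.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="smart_device_ml_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Stage 1: Spark job that translates and transforms raw device data.
    etl = BashOperator(
        task_id="spark_etl",
        bash_command="spark-submit etl_job.py",
    )
    # Stage 2: train candidate models on the prepared feature set.
    train = BashOperator(
        task_id="train_models",
        bash_command="python train_models.py",
    )
    # Stage 3: push only qualified models to the production environment.
    promote = BashOperator(
        task_id="promote_qualified_models",
        bash_command="python promote_models.py",
    )

    etl >> train >> promote
```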
Who we're looking for
- At least 3 years of experience with Apache Spark
- At least 5 years of experience with Java, Spring Boot, Microservices
- At least 3 years of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
- Experience with relational databases (Postgres, MySQL) or NoSQL databases
- Experience with Dimensional Data Modeling
NICE TO HAVE
- Experience with AWS cloud technologies (S3, EMR, EC2, Glue, Athena) and Terraform
- Experience building data pipelines that process more than 1 TB in both streaming and batch modes
- Kubeflow, Spark on Kubernetes