Data Scientist

Date: Sep 20, 2022

Location: Milano, IT

Company: Vodafone

Role Purpose

The role will be part of the VNO Digital function.
The team's main objective is to automate Network Operations processes, in particular:
-    Interact with Network Operations teams to identify, develop and industrialize data science and analytics use cases suited to the needs of the Italian market
-    Evaluate ZTO vendor solutions and coordinate their integration with our platforms
-    Automate Field Management processes
Within this scope, the successful candidate will be responsible for identifying the most profitable data science/big data business cases, implementing machine learning prototypes in Python, and deploying industrialized code on the available platforms (Cloudera Hadoop, AWS/GCP cloud, or Kubernetes-based platforms), with the help of the data engineering team.
She/He will also perform data analysis, provide technical leadership in this area, and support data-driven business decisions. She/He will lead data management choices and architectural solutions.

Key accountabilities and decision ownership

•    Identification of data science / big data / analytics use cases for Network Operations and architectural high-level design
•    Choice and implementation of the best machine learning algorithm suited to the use case
•    Industrialization of the use cases on Cloudera, OpenShift/K8s or AWS/Google cloud environments, with the support of data engineers
•    Technical leadership in analysis and data management domains 
•    Data-driven evaluation of vendor product adoption 

Core competencies, knowledge and experience

•    2+ years' experience in Machine Learning SW development and data analysis
•    Experience in designing and implementing use cases over big data architectures involving massive data volume, also under real-time constraints
•    Knowledge of the pros and cons of existing data storage technologies (relational DBs, Big Data frameworks, NoSQL DBs, both on cloud and on-prem)

Must have technical / professional qualifications

•    Degree in Computer Science, Maths, Engineering or equivalent
•    SQL, Python (Pandas, TensorFlow, Scikit-learn and other main ML libraries), PySpark, and SW development capabilities
•    Knowledge of machine learning algorithms (NLP, Neural Networks, Random Forest, SVM, Anomaly Detection — especially over time series — Gradient Boosting, and the other main supervised and unsupervised ML models)
•    Knowledge of deployment best practices and DevOps pipeline
•    Excellent analytical and mathematical skills

Nice to Have

•    Experience with BI reporting tools (Qlik / Power BI / Tableau)
•    Experience coding microservices within Docker-based architectures, on Cloud or on-prem platforms
•    Knowledge of DevOps pipelines and the main tools for code versioning, automated testing and continuous integration, in both on-prem and Cloud environments
•    Data Architect or Cloud Certifications