National Grid (https://careers.nationalgridus.com)
Full Time Employee
National Grid is seeking a data engineer specializing in web crawling to join its Advanced Data and Analytics team. This role offers start-up challenges and opportunities inside a large, stable company.
The data engineer will develop web crawler applications to grow our data inventory for analytics and modeling. You will design and implement deep web crawling mechanisms to extract publicly available data and further develop National Grid's knowledge assets. You embrace the challenge of unraveling raw data and providing innovative solutions to our data consumers. Incumbents should be prepared to work in a highly multi-tasked environment with rapidly changing business priorities. The ability to work cross-functionally in an Agile team setting is a must.
Design and implement systems to download data from websites, manually or automatically, and to parse, clean, and organize the data
Learn new data sources and determine how best to structure the data for use in advanced analyses
Research opportunities for data acquisition
Assess data quality issues and resolve them at the source
Design, construct, install, test and maintain highly scalable data management systems
Ensure all data solutions meet business requirements and industry practices
Integrate new data management technologies and software engineering tools into existing structures
Have extensive experience in employing a variety of languages and tools to marry disparate data sources
Have knowledge of different database solutions (NoSQL or RDBMS)
Have knowledge of NoSQL solutions such as MongoDB, Cassandra, etc.
Build scalable data pipeline solutions in a cloud environment
Work effectively both in a local server environment and in a cloud-based environment
Collaborate with Data Architects and IT team members on project goals
Collaborate with Data Scientists and Quantitative Analysts
Communicate effectively and translate business requirements into data solutions
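The download-and-parse responsibility above can be sketched with Python's standard library alone. This is a minimal illustration, not part of the posting: in a real crawler the HTML would come from an HTTP fetch, and the link-extraction target is an assumed example of the "parse and organize" step.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags -- the 'parse' step of a crawl."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A static snippet keeps the sketch self-contained; a crawler would
# fetch this HTML over HTTP and queue the extracted links for crawling.
sample = '<html><body><a href="/a">A</a> <a href="/b">B</a></body></html>'
parser = LinkExtractor()
parser.feed(sample)
print(parser.links)  # -> ['/a', '/b']
```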
Master’s degree in a data-intensive discipline (Computer Science, Applied Mathematics, or equivalent) is strongly preferred, with a background in “big data” computer programming and/or a minimum of 3-5 years’ experience in “big data” processing. Additional preference is given to candidates with a PhD in a data-intensive discipline. Exceptional candidates with a Bachelor’s degree, or a Master’s degree in progress, will be considered.
Strong programming experience with: Python, Java, SQL, Ruby
Proven experience with web communication protocols and web crawling tools
Proven experience with building and deploying ETL pipelines
Proven experience with emerging big data technologies
Proven experience with AWS
Experienced with querying NoSQL databases
Experienced with relational databases and SQL
Plus: geospatial and GIS skills
Plus: experience with one or more specialized areas: image and remote sensing data, natural language data
Department: Cust & Market Analytics
Posted: Sep 2, 2016, 8:25:12 AM
Closes: Sep 30, 2016, 11:59:00 PM
To apply for this job, contact: