Manager Data Engineering - GE07AE

We're determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals - and to help others accomplish theirs, too.
Join our team as we help shape the future. The Hartford's Enterprise Data Office is seeking a Manager of Data Engineering primarily responsible for third-party asset ingestion and management for the Enterprise. In this role, you will be responsible for expanding and optimizing data pipeline and product architecture, as well as optimizing data flow and collection for cross-functional teams.
The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Manager of Data Engineering will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
This role will have a hybrid work arrangement, with the expectation of working in an office location (Hartford, CT) three days a week (Tuesday through Thursday).

Job responsibilities:
- Accountable for building small- or medium-scale pipelines and data products. Rapidly architect, design, prototype/POC, implement, and optimize cloud/hybrid architectures to operationalize data.
- Accountable for team development and for influencing pipeline tool decisions.
- Build and implement capabilities for continuous integration and continuous delivery aligned with Enterprise DevOps practices.
- Accountable for the data engineering practices (e.g., source code management, branching, issue tracking, access) followed for the product.
- Independently review, prepare, design and integrate complex (in type, quality and volume) data, correcting problems and recommending data cleansing/quality solutions.
- Provide expert documentation and operating guidance for users of all levels.
- Document technical requirements and present complex technical concepts to audiences of varying size and level.
- Stay up to date on emerging data and analytics technologies, tools, techniques, and frameworks.
- Evaluate, recommend and influence all technology-based decisions on tools and frameworks for effective delivery.
- Partner in the development of project and portfolio strategy, roadmaps and implementations.

Knowledge, Skills, and Abilities:
- Good working technical knowledge (cloud data pipelines and data consumption products).
- Proven ability to work with cross-functional teams and translate requirements between business, project management and technical projects or programs.
- Team player with a transformation mindset.
- Ability to operate successfully in a lean and fast-paced organization, leveraging Scaled Agile principles and ways of working.
- Develops and promotes best practices for continuous improvement; troubleshoots and resolves problems across the technology stack.
- Working knowledge and understanding of the DevOps technology stack and standard tools/practices.
- Provides mentorship and feedback to junior Data Engineers.

Qualifications:
- Candidates must be authorized to work in the US without company sponsorship.
- The company will not support the STEM OPT I-983 Training Plan endorsement for this position.
- Bachelor's degree and 5 years of Data Engineering experience.
- Must have experience building data pipelines.
- Experience in Python programming and PySpark.
- Familiarity with object-oriented design patterns.
- 2+ years of developing and operating production workloads in cloud infrastructure.
- Proven experience with the software development life cycle (SDLC) and knowledge of agile/iterative methodologies and toolsets.
- Hands-on experience with AWS services such as S3, EMR, Glue, Lambda and CloudFormation.
- Hands-on experience with DevOps tools and practices for continuous integration and deployment, such as GitHub, Jenkins, Nexus and Maven, as well as CI/CD tools in AWS like CodeBuild and CodePipeline.
- Knowledge of and experience with SQL queries and Unix scripting.
- Ability to design, implement, and oversee ETL processes.
- Excellent troubleshooting skills and performance tuning of ETL processes.
- Experience with the Snowflake data warehouse is nice to have.
- Any cloud certifications desired.

Compensation
The listed annualized base pay range is primarily based on an analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role.

The base pay is just one component of The Hartford's total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition.

The annualized base pay range for this role is: $122,880 - $184,320

Equal Opportunity Employer/Females/Minorities/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age

About Us | Culture & Employee Insights | Diversity, Equity and Inclusion | Benefits

Summary
Location: Hartford, CT; Chicago, IL; Columbus, OH; Charlotte, NC
Type: Full time
Web Reference: AJF/779202007-202
Posted Date: Sat, 23 Nov 2024
Please note: to apply for this position you will complete an application form on another website provided by or on behalf of The Hartford Financial Services Group. The external website and application process are not under the control or responsibility of IT JobServe.