[Hiring] Data Ops Engineer @McKesson
Role Description

SCRI’s Data Solutions team is looking for a Data Ops Engineer to support strategic data initiatives. In this role, you will design, build, and maintain the data architectures, databases, and large-scale processing systems that power SCRI’s data initiatives.

• Design and implement scalable, efficient data pipelines to support data-driven initiatives.
• Collaborate with cross-functional teams to understand data requirements and contribute to the development of the data architecture.
• Work on data integration projects, ensuring seamless, optimized data flow between systems.
• Implement data engineering best practices to ensure data quality, reliability, and performance.
• Contribute to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows.
• Create and maintain technical documentation, including data mapping documents, solution design documents, and data dictionaries.
• Communicate technical concepts effectively to both technical and non-technical stakeholders.
• Automate deployments and promote changes across environments using GitHub CI/CD with GitHub Actions and Liquibase.
• Participate in the identification and evaluation of new technologies.
• Other duties as assigned.

Qualifications

• 5+ years of experience in data engineering.
• Bachelor's degree in a related field (e.g., Computer Science, Information Technology, Data Science) or equivalent experience.
• Technical expertise in building and optimizing data pipelines and large-scale processing systems.
• Technical expertise with Azure cloud services, including Azure Data Factory, Azure Batch Service, and Databricks.
• Experience working with cloud solutions and contributing to data modernization efforts.
• Experience using Terraform and Bicep scripts to build Azure infrastructure.
• Experience implementing security changes using Azure RBAC.
• Experience building cloud infrastructure, including Data Factory, Batch Service, Azure Data Lake Storage Gen2 accounts, and Azure SQL Database.
• Proficiency in SQL, Python, PySpark, or Scala for developing data pipelines and performing data manipulation and transformation (an illustrative sketch appears after the Requirements list below).
• Excellent understanding of data engineering principles, data architecture, and database management.
• Strong problem-solving skills and attention to detail.
• Excellent communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.

Requirements

• Knowledge of the healthcare, distribution, or software industries is a plus.
• Strong technical aptitude and experience with a wide variety of technologies.
• Ability to rapidly learn and, when required, evaluate new tools and technologies.
• Strong verbal and written communication skills.
• Demonstrated technical experience.
• An innovative thinker.
• A strong customer and quality focus.
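The sketch below is purely illustrative and not part of the job requirements: a minimal PySpark pipeline of the kind described above, reading raw data from a hypothetical Azure Data Lake Storage Gen2 path, applying a simple quality filter, and writing curated Parquet output. The storage account, container, paths, and column names are all assumptions, and authentication to ADLS Gen2 is presumed to be configured on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 paths; replace with real account and container names.
RAW_PATH = "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
CURATED_PATH = "abfss://curated@examplestorage.dfs.core.windows.net/orders/"

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Read raw CSV files; the schema is inferred here for brevity, but a
# production pipeline would define it explicitly.
raw = (
    spark.read.option("header", "true")
              .option("inferSchema", "true")
              .csv(RAW_PATH)
)

# Basic quality rules plus an ingestion timestamp. The order_id,
# order_total, and order_date columns are assumed for this example.
curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total").isNotNull())
       .withColumn("ingested_at", F.current_timestamp())
)

# Write partitioned Parquet for downstream consumers.
curated.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)

spark.stop()

In practice, a job like this would typically run on Databricks or Azure Batch and be promoted between environments through the GitHub Actions workflows mentioned in the responsibilities.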
Benefits

We care about the well-being of the patients and communities we serve, and that starts with caring for our people. That’s why we offer a Total Rewards package that includes comprehensive benefits to support physical, mental, and financial well-being. As part of Total Rewards, we are proud to offer a competitive compensation package, determined by several factors, including performance, experience and skills, equity, regular job market evaluations, and geographical markets. In addition to base pay, other compensation, such as an annual bonus or long-term incentive opportunities, may be offered.