Data Engineer
Job Details
Oklahoma City-Corporate Office - Oklahoma City, OK
Fully Remote
Full Time
4 Year Degree

As analytics evolves in the organ procurement space, we are working to optimize our life-saving organ donation processes. We are looking for naturally curious professionals to help us deliver on our organizational needs. This opportunity focuses heavily on developing and maintaining data processing software and databases, and on using tools to publish quality data and reporting that drive real-time business decisions. As a member of the LifeShare Business Intelligence department, you will also have the opportunity to use critical thinking skills to solve problems and improve business intelligence and IT processes related to organ procurement and transplantation. You'll have the support to build a meaningful career, and help save lives in the process.


Responsibilities
- Contribute to the design, architecture, implementation, and support of a complex enterprise data warehouse (using Microsoft Azure cloud technologies) that is accurate, reliable, high-performing, and easily accessible for all LifeShare departments.

- Analyze, test, and implement physical database design supporting various business applications (including base definition, structure, documentation, long-range requirements, and operational guidelines).

- Ensure data recovery, maintenance, data integrity, and space requirements for the physical database are met through the formulation and monitoring of policies, procedures, and standards relating to database management.

- Assist with ETL, database design, and Power BI analytics builds, as needed.

- Work with all areas of the business to understand data structures, determine data definitions, and evaluate the accuracy of data elements.

- Protect databases by developing access systems and specifying user access levels.

- Combine and transform data from multiple tables, databases, and/or systems.

*This is a fully remote position.

Qualifications

- Experience working with and supporting RDBMSs/MDDBMSs (e.g., Microsoft SQL Server/SSMS) in one or more computing environments (e.g., Windows, Unix).

- Experience with Microsoft Azure technologies (e.g., Azure SQL DB, Azure Analysis Services, Azure API apps), knowledge of Azure database administration, and experience implementing data flows (e.g., Azure Data Factory).

- Experience in designing, modeling, developing, and supporting large RDBMSs in an Azure environment.

- Proficient at using T-SQL scripts to manipulate data and simplify access to data (experience with subqueries, joins, unions, stored procedures, views, functions, temp tables, CTEs, etc.).

- Strong understanding of data sources, flow, and limitations with the ability to combine and manipulate data from multiple sources.

- Working knowledge of and experience with extracting large datasets and designing ETL flows/processes for performance and reliability (designing, implementing, and loading custom data models), as well as building and optimizing data pipelines, architectures, datasets, and batch processes.

- Experience tuning and optimizing ETL processes to ensure performance and reliability (query optimization, indexing, etc.).

- Familiarity with development tools, environments, and workflow management platforms (Visual Studio, Git, Apache Airflow, etc.), and the ability to develop processes that appropriately integrate data from multiple sources.

- Experience with High Availability (HA) and Disaster Recovery (DR) options, as well as the monitoring, performance optimization, and backup of databases.

- Experience with scripting languages (e.g., Python, Java, Scala).
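By way of illustration only, the kinds of SQL constructs named above (CTEs, joins, aggregation) might come up in day-to-day work like the minimal sketch below. All table and column names are hypothetical, not taken from the posting, and the sketch uses Python with an in-memory SQLite database as a stand-in for SQL Server:

```python
# Hypothetical sketch: illustrative table/column names only.
# SQLite stands in for SQL Server; the query shape carries over to T-SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE donors (donor_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE referrals (donor_id INTEGER, organ_type TEXT);
    INSERT INTO donors VALUES (1, 'OK'), (2, 'TX');
    INSERT INTO referrals VALUES (1, 'kidney'), (1, 'liver');
""")

# A CTE plus a LEFT JOIN and aggregation -- constructs the
# qualifications above call out.
query = """
WITH counts AS (
    SELECT donor_id, COUNT(*) AS n
    FROM referrals
    GROUP BY donor_id
)
SELECT d.donor_id, d.region, COALESCE(c.n, 0) AS referral_count
FROM donors AS d
LEFT JOIN counts AS c ON c.donor_id = d.donor_id
ORDER BY d.donor_id;
"""
rows = cur.execute(query).fetchall()
print(rows)  # [(1, 'OK', 2), (2, 'TX', 0)]
conn.close()
```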

Chris Cook, RRT-ACCS, Organ Recovery Coordinator