Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Clinical Chemistry, 2017
Recommended citation: Andrew W Lyon, Peter A Kavsak, Oliver A S Lyon, Andrew Worster, Martha E Lyon, Simulation Models of Misclassification Error for Single Thresholds of High-Sensitivity Cardiac Troponin I Due to Assay Bias and Imprecision, Clinical Chemistry, Volume 63, Issue 2, 1 February 2017, Pages 585–592, https://doi.org/10.1373/clinchem.2016.265058
Download Paper
Published in University of Saskatchewan, 2019
Plants and their structures have been studied for centuries. Many plants develop slowly and follow patterns throughout their development. These patterns are often repetitive, or self-nested. Many tools and algorithms have been developed to understand these patterns.
In the late 1960s, a biologist named Aristid Lindenmayer developed a type of formal language to model the growth of algae. This language continued to be used for many years in the modeling of plants and other recursive structures. These tools were combined with turtle geometry, which allowed repetitive sequences of drawing instructions to be built from a formal language to mimic a plant's structure or pattern.
In 2010, an algorithm called NEST was developed by Christophe Godin and Pascal Ferraro to help study the branching structures of plants. This algorithm can be used as a predictor of the original plant's branching structure. The ability to enter a Lindenmayer language and run a qualitative algorithm could be useful for comparison against real plant data. We expand on their work and provide a graphical interface.
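The rewriting step at the heart of an L-system can be sketched in a few lines. The axiom and rules below are Lindenmayer's classic algae example, used here only for illustration; the turtle-geometry drawing layer and the NEST comparison algorithm are omitted.

```python
# Minimal L-system rewriter: repeatedly apply production rules to every
# symbol of the current string in parallel.
def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        # symbols without a rule are copied unchanged
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A
algae = {"A": "AB", "B": "A"}
print(lsystem("A", algae, 4))  # -> 'ABAABABA'
```

The string lengths grow as Fibonacci numbers (1, 2, 3, 5, 8, ...), a small example of the self-nested patterns described above.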
Recommended citation: Oliver Lyon, Ian McQuillan, "Lindenmayer Systems - Inferring Branching Topology." University of Saskatchewan, 2019.
Published in Clinical Biochemistry, 2019
Recommended citation: Martha Lyon, Oliver Lyon, Nam Tran, Jeffrey DuBois, Andrew Lyon, "An insulin-dose error assessment grid: A new tool to evaluate glucose meter performance." Clinical Biochemistry, 2019.
Download Paper
Published in The Journal of Applied Laboratory Medicine, 2019
Recommended citation: Martha E Lyon, Roona Sinha, Oliver A S Lyon, Andrew W Lyon, Application of a Simulation Model to Estimate Treatment Error and Clinical Risk Derived from Point-of-Care International Normalized Ratio Device Analytic Performance, The Journal of Applied Laboratory Medicine, Volume 2, Issue 1, 1 July 2017, Pages 25–32, https://doi.org/10.1373/jalm.2017.022970
Download Paper
Published in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020
Use Google Scholar for full citation
Recommended citation: Nazifa Khan, Oliver Lyon, Mark Eramian, Ian McQuillan, "A Novel Technique Combining Image Processing, Plant Development Properties, and the Hungarian Algorithm, to Improve Leaf Detection in Maize." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020.
Published in Archives of Pathology & Laboratory Medicine, 2020
Recommended citation: Mark Inman, Andrew Lyon, Oliver Lyon, Martha Lyon, "Estimated Risk for Insulin Dose Error Among Hospital Patients Due to Glucose Meter Hematocrit Bias in 2020." Archives of Pathology & Laboratory Medicine, 2020.
Download Paper
Published in Queen's University, 2021
In this thesis, we consider the nondeterministic state complexity of PCR-inspired (polymerase chain reaction) operations. Site-directed operations are used to formally describe the behavior of certain DNA (deoxyribonucleic acid) editing methods which need to identify a subsequence in a host DNA strand prior to editing. These operations can be considered as language operations acting to match patterns between two sets of strings. The site-directed insertion and deletion operations insert into or delete from a host string based on a directing string. The directing string must have a non-empty outfix that matches a substring in the host before operating. Prefix- and suffix-directed insertion are similar to site-directed insertion except that, instead of matching a non-empty outfix, a non-empty prefix or suffix is matched before insertion. We consider the nondeterministic state complexity of site-directed insertion and deletion. Constructing a nondeterministic finite automaton (NFA) for the operation provides an upper bound for the state complexity of the operation. Our construction improves the earlier upper bound in the literature. Existing literature did not give lower bounds for the nondeterministic state complexity of site-directed insertion and deletion. Using the fooling set method, we establish lower bounds that are fairly close to the upper bounds, although the lower bounds are not tight.
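At the level of plain strings, site-directed deletion can be illustrated directly from the informal definition above: a directing string splits as uv (both non-empty), uv matches a substring u w v of the host, and the middle w is removed. This sketch is only an illustration of the operation on individual strings, not the NFA construction from the thesis; for simplicity it also allows the deleted middle to be empty.

```python
# Illustrative sketch of site-directed deletion on strings.
# directing = u + v (u, v non-empty); wherever the host contains u...v,
# the material strictly between u and v is deleted.
def site_directed_deletion(host, directing):
    results = set()
    for i in range(1, len(directing)):          # split directing into u, v
        u, v = directing[:i], directing[i:]
        for j in range(len(host)):
            if host.startswith(u, j):
                # try every position where v could follow u in the host
                for k in range(j + len(u), len(host) - len(v) + 1):
                    if host.startswith(v, k):
                        results.add(host[:j + len(u)] + host[k:])
    return results

print(site_directed_deletion("abcde", "be"))  # -> {'abe'}
```

Here the directing string "be" matches b...e in the host "abcde", so the middle "cd" is deleted.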
Recommended citation: Oliver Lyon, Kai Salomaa, "Nondeterministic State Complexity of Site-Directed Operations." Queen's University, 2021.
Published in Journal of Automata, Languages and Combinatorics, 2022
Recommended citation: Oliver Lyon, Kai Salomaa, "Nondeterministic State Complexity of Site-Directed Insertion." Journal of Automata, Languages and Combinatorics, 2022.
Download Paper
Published in Implementation and Application of Automata, 2022
Recommended citation: Lyon, O.A.S., Salomaa, K. (2022). Nondeterministic State Complexity of Site-Directed Deletion. In: Caron, P., Mignot, L. (eds) Implementation and Application of Automata. CIAA 2022. Lecture Notes in Computer Science, vol 13266. Springer, Cham. https://doi.org/10.1007/978-3-031-07469-1_15
Download Paper
Published in Theoretical Computer Science, 2023
Recommended citation: Oliver Lyon, Kai Salomaa, "The nondeterministic state complexity of the site-directed deletion language operation." Theoretical Computer Science, 2023.
Download Paper
Published in The Journal of Applied Laboratory Medicine, 2023
Recommended citation: Oliver Lyon, Mark Inman, "A Statistical Simulation to Evaluate the Robustness of Hb A1c Measurement in the Presence of Quantitative Error." The Journal of Applied Laboratory Medicine, 2023.
Download Paper
Published in The Journal of Applied Laboratory Medicine, 2023
Recommended citation: Christopher Farnsworth, Oliver Lyon, "QC a Risky Business: The Development of Novel Risk-Based Tools for Assessing QC Methods." The Journal of Applied Laboratory Medicine, 2023.
Download Paper
Published:
BACKGROUND: In 2016 the FDA proposed performance expectations for POCT INR devices in response to a post-market risk analysis of serious clinical and patient self-monitoring adverse events: 95% of all INR results should fall within ±0.4 for INR <2; ±20% for INR ≥2 to 3.5; ±20% for INR ≥3.5 to 4.5; and ±25% for INR ≥4.5. OBJECTIVE: To estimate the clinical risk of warfarin dosing error as a consequence of POCT INR assay inaccuracy and imprecision at FDA performance goals. METHOD: INR values (n = 53,535) were obtained from community adult patients in the Saskatoon Health Region (SHR). Monte Carlo simulation models were used to assess the influence of analytical bias and imprecision on INR values by evaluating the fraction of warfarin-dose categories according to the SHR algorithm that were unchanged or changed by ≥1, ≥2 or ≥3 dose categories. RESULTS: Simulations used a bias of ±0.4 to ±0.8 combined with 3% imprecision and predicted that 45% to 75% of results would have a ≥1 category warfarin dosing error, and 1% to 18% of results would have ≥2 category errors. If INR imprecision was increased to 10%, then the model predicted that 45% to 75% of results would continue to have a ≥1 category warfarin dose error, but the fraction with a ≥2 category error would increase to 2% to 24%. CONCLUSIONS: Simulation models demonstrated that the extent of one-category and two-category treatment errors for POCT INR assays is highly dependent on method bias and only partially affected by method imprecision ≤10%.
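The Monte Carlo approach described above can be sketched as follows. The dose-category cut-points and the uniform synthetic INR values here are illustrative placeholders, not the SHR dosing algorithm or the patient dataset used in the study.

```python
import numpy as np

# Hedged sketch: perturb each INR by a fixed bias plus Gaussian imprecision
# (expressed as a CV), re-categorize, and count how many results shift
# warfarin dose category. Cut-points below are illustrative only.
rng = np.random.default_rng(0)

def dose_category(inr, cuts=(1.5, 2.0, 3.0, 3.5, 4.5)):
    return np.searchsorted(cuts, inr)

def simulate(inr, bias, cv):
    observed = inr + bias + rng.normal(0.0, cv * inr)
    shift = np.abs(dose_category(observed) - dose_category(inr))
    # fraction of results shifted by >= 1, >= 2, >= 3 categories
    return {k: float(np.mean(shift >= k)) for k in (1, 2, 3)}

inr = rng.uniform(1.0, 4.5, size=50_000)  # synthetic stand-in for patient INRs
print(simulate(inr, bias=0.4, cv=0.03))
```

Running the sketch with bias 0.4 and 3% CV reproduces the qualitative finding above: the one-category error fraction is driven mainly by the bias term, while raising the CV mostly grows the two-category tail.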
Published:
I was invited by the Medical and Scientific Affairs group for Nova to present on the principles of medical simulation and interpretation. This lecture focused on agent-based modeling and fitting high-dimensional data.
Published:
Background: The Clarke, Parkes and Surveillance error grids were developed from expert opinion to assess the clinical accuracy of glucose meters. In the past decade there have been technological advances in the analytical performance of glucose meters, and numerous insulin-dose protocols have been developed for local hospitalized and community patients. To relate the accuracy of glucose tests to the clinical use of insulin, an error grid could express glucose error in units of the ‘size of error of insulin dose’ administered, customized for the local insulin protocol for a specific patient group. Objective: To develop a grid to display the relationship between glucose error and the associated error in insulin dose, using an individual institutional insulin protocol. Methods: The effect of 0.5 mg/dL differences between reference and test methods on the risk of insulin dosing error was simulated using a published insulin dosing protocol (Karon et al., 2010). Data are displayed on a grid of reference glucose and meter glucose values, with increasing color intensity applied as the size of clinical error in units of insulin dose errors increases. To evaluate a glucose meter, paired glucose data for the reference and test methods are plotted on the error grid and a histogram represents the frequency of insulin dose errors. Results: Figure 1. IDEA error grid analysis: Patient correlation data (n = 199) measured by reference and test glucose methods are plotted on the error grid. A frequency histogram of the insulin dose errors of the patient results depicts 94.5% of insulin doses within ±1 dose and 99.5% within ±2 doses. Conclusions: The IDEA grid is a useful tool that describes differences in glucose measurement in terms of insulin dosing error. This grid can be individualized to an insulin dosing protocol to enable objective assessment of clinical risk attributed to analytic glucose meter error.
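The core idea of the IDEA grid, expressing a glucose measurement error as the resulting error in insulin dose, can be sketched with a toy protocol. The thresholds and dose values below are hypothetical placeholders, not the Karon et al. (2010) protocol used in the study.

```python
import bisect

# Hypothetical insulin dosing protocol: glucose bands (mg/dL) -> insulin units.
# A real IDEA grid would substitute the local institutional protocol here.
GLUCOSE_CUTS = [70, 110, 150, 200, 300]
DOSE_UNITS = [0, 0, 2, 4, 6, 8]

def dose(glucose_mg_dl):
    # map a glucose value to its protocol dose band
    return DOSE_UNITS[bisect.bisect_right(GLUCOSE_CUTS, glucose_mg_dl)]

def insulin_dose_error(reference, meter):
    """Dose error caused by the meter reading relative to the reference."""
    return dose(meter) - dose(reference)

print(insulin_dose_error(reference=180, meter=210))  # -> 2
```

Plotting `insulin_dose_error` over a grid of (reference, meter) pairs, with color intensity proportional to the magnitude of the error, yields an IDEA-style grid customized to the chosen protocol.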
Published:
Introduction: The Clarke, Parkes and Surveillance error grids are expert opinion based tools developed to assess clinical risk associated with glucose meters. The objective of this study was to assess the clinical risk of the Nova StatStrip® glucose meter using an insulin dosing error grid that expresses glucose error in units of the ‘size of error in insulin dose categories’ which is customizable for local insulin protocols. Materials and Methods: Residual lithium heparin venous whole blood specimens (n=156) from hospitalized adult patients were immediately analyzed using the Nova Statstrip® glucose meter and 100 µL of whole blood was treated with perchloric acid prior to analysis with a traceable isotope dilution mass spectrometry (IDMS), the definitive method for glucose. The clinical accuracy of the Nova StatStrip® glucose meter relative to the IDMS was assessed using the Parkes, Surveillance Error and IDEA error grids. For the IDEA grid, the effect of 0.5 mg/dL differences between methods on the size of insulin dose category error was determined by simulation using the protocol for critically ill patients (Karon et al, 2010). Patient data was plotted on the error grid to indicate the extent that observed glucose results are expected to affect changes of insulin dose. Results: Parkes error grid analysis revealed that zone A contained 98.1% (153/156) of the Statstrip® results and zone B 1.9% (3/156). With the Surveillance error grid, 154/156 results were within the assessable range and 93.5% (144/154) indicated no degree of clinical risk. The remaining 6.5% (10/154) were within the “slight” category of clinical risk.
Conclusions: The clinical risk of inappropriately administering insulin with the Nova Statstrip® glucose meter was minimal (100% of results within +/- 2 insulin dose categories). The IDEA error grid analysis is a useful and adaptable tool to assess clinical applications of glucose meters.
Published:
Introduction: Accurate glucose measurement in critically ill patients is paramount when implementing moderate or tight glycemic protocols. In 2014, the surveillance error grid was established in response to a survey of 206 diabetes expert physicians. This grid differs from those of Clarke and Parkes in that 15 color coded zones were developed with associated levels of clinical risk. An insulin dosing error grid (IDEA) was developed that expresses glucose error in units of the ‘size of error in insulin dose categories’ which is customizable for local insulin protocols for specific patient groups. The objective of this study was to compare application of the surveillance and IDEA error grids using critically ill patient glucose data from a multicentre international investigation.
Materials and Methods: Retrospective analysis of 1,815 paired glucose results (Nova Statstrip® versus the central laboratory glucose methods) from 1,698 critically ill patients (DuBois et al., 2017) were analyzed with the surveillance and IDEA error grids. The surveillance error grid (SEG) analysis was conducted using SEG software. For the IDEA grid, the effect of 0.5 mg/dL differences between methods on the size of insulin dose category error was determined by simulation using the protocol for critically ill patients (Karon et al, 2010). Patient data was plotted on the error grid to indicate the extent that observed glucose results are expected to affect changes of insulin dose. Results: With the surveillance error grid, 99.1% were within the assessable range, 97.6% indicated no clinical risk, 2.3% demonstrated slight (low risk) and 0.1% showed slight (high risk). Analysis with the IDEA grid indicated that 76.8% were within +/- 1 insulin dose categories; 99.2% within +/-2 dose categories (low risk). Conclusions: The surveillance error grid and the IDEA error grid both indicated there was low clinical risk with using the Nova Statstrip® glucose meters to determine insulin dose in this critically ill population.
Published:
Introduction: Isotope dilution mass spectrometry (IDMS) is a definitive method for glucose measurement, and perchloric acid-treated specimens analyzed with hexokinase (PCA-hexokinase) is a reference method. The performance of clinical laboratory plasma glucose methods is commonly assessed by evaluating their traceability to either the IDMS or PCA-hexokinase results. Glucose error grids have been used to determine the clinical accuracy of point-of-care glucose meters. The objective of the current study was to describe the variation amongst the PCA-hexokinase, plasma glucose oxidase and IDMS methods using the insulin dosing error (IDEA) grid, Parkes and Surveillance error grids.
Materials and Methods: Residual lithium heparin venous whole blood specimens (n = 156) from hospitalized adult patients were treated with perchloric acid prior to analysis with the IDMS and the Roche Cobas® hexokinase methods. The remaining specimen was centrifuged and the plasma analyzed using the Beckman DxC® glucose oxidase method. The variation of the PCA-hexokinase and plasma glucose oxidase methods relative to the IDMS was assessed using the Parkes, Surveillance Error grid (SEG) and IDEA error grids. For the IDEA grid, the effect of 0.5 mg/dL differences between methods on the size of insulin dose category error was determined by simulation using the protocol for critically ill patients (Karon et al, 2010). Patient data was plotted on the error grid to indicate the extent that observed glucose results are expected to affect changes of insulin dose. Results: Parkes error grid analysis for PCA-hexokinase results revealed 86.5% zone A and 13.5% zone B, and for plasma glucose oxidase results were 99.4% zone A and 0.6% zone B. With SEG 98.7% of the PCA-hexokinase results were within the assessable range, 78.6% indicated no clinical risk, 19.5% demonstrated slight (low risk) and 1.9% showed slight (high risk). Plasma glucose oxidase results within the SEG required range demonstrated 94.7% had no clinical risk and 5.3% had slight (low risk). IDEA grid analysis of the PCA-hexokinase results indicated 85.9% within ±1 insulin dose category, 98.7% within ±2 categories. Plasma glucose oxidase results with the IDEA grid showed 96.8% within ±1 insulin dose category, 98.1% within ±2 categories.
Conclusions: Error grid analyses demonstrated that an automated lab glucose method (plasma glucose oxidase) and a glucose reference method (PCA-hexokinase) displayed imprecision and inaccuracy relative to a definitive glucose method. Error grid analyses of candidate glucose methods relative to automated lab glucose or glucose reference methods should not exclusively attribute analytic error to the candidate method.
Published:
I was invited to present the IDEA error grid to the Medical And Scientific Affairs group for Nova. This lecture was focused on teaching the group how to run the IDEA error grid for comparative analysis between instruments.
Published:
Background & Aims: Current American Diabetes Association (ADA) guidelines state a fasting plasma glucose (FPG) ≥126 mg/dL (7.0 mmol/L) is diagnostic of diabetes, 100-126 mg/dL (5.6-7.0 mmol/L) pre-diabetes and <100 mg/dL (5.6 mmol/L) as healthy. The objective was to evaluate the impact of analytic error of glucose measurement and biological variation on misclassification of healthy, pre-diabetic and diabetic patients. Methods: The NHANES 2015 FPG dataset was used as a population sample (n=2972) for simulation studies: prevalence of 13.1% diabetics by FPG. FPG results were categorized using ADA criteria. FPG concentrations were then modified in a statistical model by addition of bias, imprecision and biological variation. The fraction of modified FPG results misclassified between ADA healthy, pre-diabetic and diabetic groups was assessed. Results: The fractions of FPG results misclassified as functions of bias and precision were determined. Representative results were: (A) Biologic variation of FPG alone misclassified: 15% of Healthy values as Pre-diabetic, 20% of Pre-diabetics as Healthy, 3% of Pre-diabetics as Diabetic, and 4% of Diabetics as Pre-diabetic. (B) Addition of 2% precision and -5% bias misclassified: 44% of Pre-diabetics as Healthy and 11% of Diabetics as Pre-diabetic. (C) Addition of 2% precision and +5% bias misclassified: 36% of Healthy patients as Pre-diabetic, 11% of Pre-diabetics as Diabetic and 11% of Pre-diabetics as Healthy. Conclusions: This simulation model demonstrated significant risk of misclassification errors of diabetics, pre-diabetics and healthy patients due to bias of FPG methods and demonstrated minor influence of precision.
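The simulation can be sketched as follows: perturb each glucose value by bias plus combined analytic and biological variation, then compare ADA categories before and after. The synthetic glucose sample and the way the two variation sources are combined (in quadrature) are assumptions of this sketch, not details taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def ada_category(fpg_mg_dl):
    # 0 = healthy (<100), 1 = pre-diabetic (100-125), 2 = diabetic (>=126)
    return np.searchsorted([100, 126], fpg_mg_dl, side="right")

def misclassification(fpg, bias_pct, cv_pct, bv_pct):
    # combine analytic CV and biological variation in quadrature (assumption)
    total_cv = np.hypot(cv_pct, bv_pct) / 100.0
    observed = fpg * (1 + bias_pct / 100.0) + rng.normal(0.0, total_cv * fpg)
    return float(np.mean(ada_category(fpg) != ada_category(observed)))

# synthetic stand-in for the NHANES FPG sample, mg/dL
fpg = rng.normal(105, 20, size=20_000).clip(60, 250)
print(misclassification(fpg, bias_pct=5, cv_pct=2, bv_pct=5.7))
```

With zero bias and variation the misclassified fraction is exactly zero; adding a few percent of bias moves values across the fixed 100 and 126 mg/dL thresholds, which is why bias dominates the misclassification rates reported above.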
Published:
Background & Aims: While HbA1c methods have improved, commercial methods continue to have ±5% bias (e.g., for a target of 53.5 mmol/mol: 50.8 to 56.2 mmol/mol; for a target of 7.0%: 6.65 to 7.35%) in proficiency testing programs. The aim of this study was to evaluate the influence of HbA1c analytical error on misclassification of patients using diagnostic criteria outlined by the American Diabetes Association (ADA). Methods: The NHANES 2015 HbA1c dataset was used as a population sample (n=6326) for simulation studies: prevalence of 11.0% diabetics by HbA1c. HbA1c results were categorized using ADA criteria as healthy, pre-diabetic or diabetic. HbA1c concentrations were then modified in a statistical model by addition of bias, imprecision and biological variation. The fraction of modified HbA1c results misclassified between ADA healthy, pre-diabetic and diabetic groups was assessed. Results: The fractions of HbA1c results misclassified as functions of bias and precision were determined. Representative results were: (A) Biologic variation of HbA1c alone misclassified: 7% of Healthy values as Pre-diabetic, 15% of Pre-diabetics as Healthy, 1% of Pre-diabetics as Diabetic, and 2% of Diabetics as Pre-diabetic. (B) Addition of 2% precision and -5% bias misclassified: 62% of Pre-diabetics as Healthy and 16% of Diabetics as Pre-diabetic. (C) Addition of 2% precision and +5% bias misclassified: 25% of Healthy patients as Pre-diabetic and 17% of Pre-diabetics as Diabetic. Conclusions: This simulation model demonstrated significant risk of misclassification errors of diabetics, pre-diabetics and healthy patients due to bias of HbA1c methods and demonstrated minor influence of precision.
Published:
Site-directed deletion is a biologically inspired operation that removes a contiguous substring from the host string, guided by a template string. The template string must match the prefix and suffix of a substring; when this occurs, the middle section of the substring not contained in the prefix or suffix is removed. We consider the nondeterministic state complexity of the site-directed deletion operation. For regular languages recognized by nondeterministic finite automata with N and M states, respectively, we establish a new upper bound of 2NM + N and a new worst-case lower bound of 2NM. The upper bound improves a previously established upper bound, and no non-trivial lower bound was previously known for the nondeterministic state complexity of site-directed deletion.
Published:
Background: Screening programs for chronic kidney disease (CKD) have used creatinine-based equations to estimate glomerular filtration rate (eGFR). A black race modifier was incorporated into the CKD-EPI 2009 equation whereas it was removed in the 2021 equation. In Canada, there is limited collection of race or ethnicity data for healthcare purposes due, in part, to the Canadian Human Rights Act which prohibits discrimination on numerous grounds including race. In many Canadian hospitals, the 2009 eGFR equation was implemented without using the race modifier. In this study, we estimated the effect of switching from the 2009 equation (calculated without race) to the new race-independent 2021 CKD-EPI equation on the classification of CKD stages.
Methods: Participant creatinine, age and sex results from the CDC-NHANES 2017-18 dataset were used to calculate eGFR using the 2009 CKD-EPI equation (without race modifier) and the 2021 CKD-EPI equation. The impact of the two equations on the KDIGO-CKD stage categorization was assessed according to age and sex.
Results: A total of 550/4917 (11.2%) eGFR results were predicted to be < 60 ml/min/1.73m2 (female 275/2552 (10.8%); male 275/2365 (11.6%)) with the 2009 equation. In contrast, eGFR results < 60 ml/min/1.73m2 decreased to 443/4917 (9.0%) (female 224/2552 (8.8%); male 219/2365 (9.3%)) with the 2021 equation. The difference between the two equations was greatest for older individuals (6.4% decrease in the 70-79 year age group; 2.6% decrease in the 30-39 year age group). Conclusions: An overall 2.2% decrease in eGFR results < 60 ml/min/1.73m2 is predicted if the 2009 CKD-EPI equation, calculated without the race modifier, is replaced with the 2021 CKD-EPI equation.
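The 2021 race-free CKD-EPI creatinine equation used in this comparison can be implemented directly. The constants below are reproduced from the published equation (Inker et al., 2021) to the best of my knowledge; verify them against the source before any clinical use.

```python
# 2021 CKD-EPI creatinine equation (race-free). Constants as published;
# this sketch is for illustration, not clinical decision-making.
def egfr_ckd_epi_2021(scr_mg_dl, age, female):
    kappa = 0.7 if female else 0.9     # sex-specific creatinine scale
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    return (142
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age
            * (1.012 if female else 1.0))  # mL/min/1.73 m^2

print(egfr_ckd_epi_2021(1.0, 60, female=False))
```

Applying this function (and its 2009 counterpart without the race modifier) to each participant's creatinine, age, and sex, then counting results below 60 mL/min/1.73 m^2, reproduces the comparison described above.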
Published:
Background: In 2021, the equation to estimate glomerular filtration rate (eGFR) was modified to remove the race variable as it is a social and not a biologic construct (NEJM 2021). Prevalence of chronic kidney disease (CKD) among black adults was reported to increase by 3.5% with the removal of race from the equation (JAMA 2021). This is the first of a series of descriptive studies investigating factors contributing to variation in eGFR results. In this study, the contribution of creatinine analytic and biologic variation (BV) on the classification of Kidney Disease: Improving Global Outcomes (KDIGO) CKD stages using the CKD-EPI 2021 eGFR equation was assessed. Methods: Statistical bootstrapping of CDC-NHANES 2017-18 data (n = 6401?) was used to generate our study population of eGFR variables prior to the introduction of creatinine analytic (bias and imprecision) and BV. The impact of this variation on the sensitivity and specificity of KDIGO single eGFR thresholds and the actual CKD stage categorization was assessed.
Results: With no analytic or BV, the sensitivity and specificity at the 60 and 30 ml/min/m2 eGFR thresholds were both > 99%. Sensitivity and specificity estimates accounting for a creatinine within person BV of 4.5% were > 90% and > 99% respectively at both the 60 and 30 ml/min/m2 thresholds. The inclusion of -10% bias with the 4.5% BV resulted in sensitivity and specificity estimates of 63% and >99% (60 ml/min/m2) and 74% and >99% (30 ml/min/m2), respectively. Up to 3% of eGFR results were recategorized into different KDIGO CKD stages with combinations of creatinine analytic and BV.
Conclusions: Creatinine analytic and BV can introduce significant variation in calculated eGFR results thereby affecting the calculated sensitivity and specificity at the KDIGO single eGFR thresholds.
Published:
A central challenge in comparative genomics is identifying the genomic basis for lineage-specific adaptations to changing functional requirements. One approach to addressing this problem using genomic sequence data alone, which overcomes limitations of traditional dN/dS approaches, is to model site-heterogeneous sequence-fitness relationships and to infer changes to such relationships along the branches of a phylogeny (inference of ‘fitness shifts’). This requires very parameter-rich models and correspondingly large datasets. However, the factors shaping the detectability of fitness shifts, as well as the distinguishability of fitness shifts from other time-heterogeneous forces such as variation in effective population sizes across lineages, are unclear. Here, we develop a framework for identifying such factors by measuring the asymptotic distinguishability of models of sequence evolution along a fixed phylogeny. We developed an efficient C++ library for modelling and inference of time-heterogeneous (Markov-modulated) mutation-selection codon substitution models – a class of models with an explicit population genetics basis. Using this framework, we measured distinguishability of fitness-shift from time-homogeneous evolution, and fitness-shift from changes in the effective population size in terms of the Kullback-Leibler divergence between models. Using these measurements, we show how asymptotic power analysis can be easily performed to assess minimum sample sizes needed to achieve reasonable power levels. Notably, we focused our analysis on the SARS-CoV-2 phylogeny, given the pressing need to identify functional shifts among variants of concern amidst the ongoing pandemic.
Published:
The evolutionary process is influenced by many factors. However, not all forces of interest are guaranteed to be identifiable from comparative sequence data alone. Here we study what factors shape the distinguishability of changes in the underlying fitness landscape from variation in effective population size over time, since each of these processes can have similar influences on the expected distribution of codon states at affected positions. Using a family of Markov-modulated mutation-selection codon models with an explicit population genetics basis, we study distinguishability in terms of the Kullback-Leibler divergence between evolutionary models, and extend this study using simulation. We thereby establish bounds on the number of sequence samples required before and after both fitness shifts and population size shifts on a large phylogenetic tree to achieve acceptable (best case) distinguishability and high power in resultant hypothesis tests. Our results highlight some of the challenges of modelling and inference of non-equilibrium molecular evolutionary processes from finite data.
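The distinguishability measure used here can be illustrated on a toy scale. The real analysis computes Kullback-Leibler divergence between full phylogenetic codon models; the two discrete state distributions below are made-up stand-ins, and the sample-size heuristic is only a rough illustration of why sample requirements grow as the divergence shrinks.

```python
import math

# KL divergence between two discrete distributions (in nats).
def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.70, 0.20, 0.05, 0.05]  # e.g. state frequencies before a fitness shift
q = [0.40, 0.40, 0.10, 0.10]  # ...and after the shift (both hypothetical)

d = kl_divergence(p, q)
# Rough asymptotic heuristic: samples needed to separate the models scales
# like a target log-likelihood gap divided by the per-sample divergence.
print(d, math.ceil(3.0 / d))
```

When the two distributions are nearly identical the divergence approaches zero and the required sample size blows up, which is the core difficulty in distinguishing fitness shifts from population-size changes.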
Published:
I was a guest lecturer for MDSC 523: AI Applications in Health Care. The lecture focused on the applications of PCA and LDA with a case example.
Undergraduate+Graduate course, Queen's University, School of Computing, 2019
Introduction to fundamental concepts and applications in image processing and computer vision. Topics include image acquisition, convolution, Discrete Fourier Transform, image enhancement, edge detection, segmentation, image registration, human contrast perception, colour perception and reproduction, stereo vision.
Undergraduate course, Queen's University, School of Computing, 2020
A wide range of topics of current importance in computing, including technical issues, professional questions, and moral and ethical decisions. Students make presentations, deliver papers, and engage in discussion.
Undergraduate course, University of Calgary, Cumming School of Medicine, 2024
A focus on concepts and ideas in artificial intelligence (AI) and machine learning, including statistical approaches, visualization, and human-computer interactions. An exploration of current research in AI and machine learning with a specific focus on applications to health.
Undergraduate course, University of Calgary, Cumming School of Medicine, 2025
An introduction to the questions, methods, research techniques, and ethics arising across the different majors of Biomedical Sciences, Bioinformatics, and Health and Society. Sessions will support the development of a broad perspective on health issues. A component of the course will also introduce students to principal theories and methods in research ethics.