NCUR 2021 Proceedings

How Racism and Discrimination in the Literary Canon Affect American Education Education - Time: Tue 12:30pm-1:30pm - Session Number: 4168 Ahjah Hamilton, Barbara Thompson and Dr. Althea Tait, Department of English, State University of New York Brockport, 350 New Campus Dr, Brockport, NY 14420 Ahjah Hamilton

With the assistance of the McNair Program at SUNY Brockport, I have studied the ways in which African Americans, as well as other POC, are represented in the literary canon and how this representation correlates with the curriculum in American classrooms. I recognized that racism and discrimination affect almost every aspect of society, including the education system. These issues are evident in the lack of diversity in school staff as well as in school curricula. This research was conducted partly because the works of Black and other POC scholars are often overlooked and understudied in comparison to those of their white counterparts. It was also conducted with the purpose of identifying how racism affects education, because it is unfair and unethical for educators to focus their curriculum on white culture and history when this does not reflect the diversity of the classrooms in which they teach. The information used was gathered through interviews as well as scholarly journals and articles. One major finding is that many Black and other POC writers and poets, such as Mari Evans, often feel unrecognized for their achievements, a statement that is even more true for the Black and POC women in the literary canon. Another major finding was that studies have shown that children and students are more receptive to information if they can somehow relate to it. A third major finding was that teachers at suburban schools tended to offer little representation in their curricula, while teachers in urban schools typically taught a more diverse curriculum, though there is still considerable room for improvement. This research is intended to spark a conversation about the ways we can begin to make the education system more inclusive for the benefit of future students.

An Examination of the Impact of Corporate Environmental Performance on Corporate Financial Performance using NHST versus Bayesian Approaches Business - Time: Tue 11:00am-12:00pm - Session Number: 3502 Lisa Pink and Dr. James Cordeiro, Department of Business & Management, State University of New York- Brockport, 350 New Campus Drive, Brockport NY 14420 Lisa Pink

Using data from diverse corporate environmental performance management sources, we compare the impact of environmental performance measures over 2002-2017 on corporate financial performance for a sample of large US corporations. Corporate financial performance is measured using both historical accounting measures (return on assets or ROA) and prospective market measures (Tobin’s Q).

We control for the impacts of firm size, leverage, and other relevant variables and utilize time and industry dummies in our models. We also estimate linear and non-linear (e.g., quadratic) specifications to test for a positive impact of corporate environmental performance on corporate financial performance as well as for a proposed curvilinear (U-shaped) effect, consistent with recent theory in the corporate social performance (CSP) area.
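
To illustrate the kind of specification described above, the following sketch fits a quadratic (U-shaped) model of financial performance on environmental performance using ordinary least squares in Python. The variable names and data are hypothetical placeholders rather than the study's dataset, and the control variables and time/industry dummies mentioned above are omitted for brevity.

    import numpy as np

    # Hypothetical data: cep = corporate environmental performance score,
    # roa = return on assets. Placeholders only, not the study's data.
    rng = np.random.default_rng(0)
    cep = rng.uniform(0, 10, size=200)
    roa = 0.02 - 0.010 * cep + 0.0012 * cep**2 + rng.normal(0, 0.01, size=200)

    # Design matrix for the quadratic specification: ROA = b0 + b1*CEP + b2*CEP^2
    X = np.column_stack([np.ones_like(cep), cep, cep**2])
    (b0, b1, b2), *_ = np.linalg.lstsq(X, roa, rcond=None)

    # A U-shaped (curvilinear) effect corresponds to b1 < 0 and b2 > 0,
    # with the turning point at CEP = -b1 / (2 * b2).
    print("intercept:", b0, "linear:", b1, "quadratic:", b2)
    print("turning point:", -b1 / (2 * b2))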

A main contribution of the research is a discussion and explication of the differences between conventional null hypothesis significance testing (NHST) and Bayesian approaches to modeling. As part of this effort, we discuss the differences in philosophy underlying these two approaches, as well as differences in model specification, such as the choice of priors, model testing, and the interpretation of output.

(Keywords: Corporate Environmental Performance, Corporate Financial Performance, Null Hypothesis Significance Testing, Bayesian Analysis)

Anxiety, Impulsivity and Intolerance of Uncertainty Psychology - Time: Wed 1:30pm-2:30pm - Session Number: 6578 M. Fensken, L. B. Forzano (mentor), G. Becker, & C. Bakalik, Department of Psychology, College at Brockport, SUNY, 350 New Campus Dr., Brockport, NY 14420 Michael Fensken

Anxiety disorders represent the most frequently diagnosed mental health problem among American college students. Impulsivity has been linked with anxiety as a potential risk factor. Impulsivity is defined as choosing smaller, sooner rewards over larger, later rewards and is commonly measured with delay discounting tasks. It has been suggested that the delay discounting effect, i.e., the tendency to devalue delayed rewards, in anxious individuals is driven by their intolerance of uncertainty. Intolerance of uncertainty (IU) is defined as the degree to which an individual finds uncertain situations acceptable. In the current study, it is hypothesized that those with higher levels of anxiety will exhibit more delay discounting and higher intolerance of uncertainty than those with lower levels of anxiety. Preliminary analyses of 29 participants currently reveal no significant relationships between anxiety, intolerance of uncertainty, and impulsivity measures (i.e., computerized delay discounting and impulsivity tasks). Data collection is ongoing, and it is expected that significant results will be found as the sample size increases toward the proposed 60 participants. This study will address the gap in the field that links anxiety with impulsivity. Addressing this gap can lead to an improvement in the treatment and prevention of anxiety.
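
For context, delay discounting is commonly quantified with the hyperbolic model shown below. This is a standard formulation in the delay discounting literature and is given here only for reference; the abstract does not specify which model the computerized tasks use.

    V = \frac{A}{1 + kD}

Here V is the subjective (discounted) value of a reward of amount A delivered after delay D, and a larger discounting rate k indicates steeper discounting of delayed rewards, i.e., greater impulsivity.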

Association Between E-Cigarette Use and Chronic Obstructive Pulmonary Disease Among Cancer Survivors Health & Human Development - Time: Tue 12:30pm-1:30pm - Session Number: 538 Wolfgarr Lobo, Godfred Antwi, Public Health, State University of New York Brockport, 350 New Campus Drive, Brockport, NY 14420. Wolfgarr Lobo

Background: E-cigarette use among cancer survivors in the United States has increased since e-cigarettes entered the U.S. marketplace roughly a decade ago. Although marketed as a safer alternative to combustible cigarettes, most e-cigarettes contain nicotine and other toxic chemicals such as diacetyl and ultrafine particles, according to available epidemiological evidence. A recent study has linked e-cigarette use to respiratory disease in the U.S. general adult population. However, the association between e-cigarette use and Chronic Obstructive Pulmonary Disease (COPD) in the cancer survivor subpopulation remains unknown. Therefore, this study aimed to examine the association between e-cigarette use and COPD among cancer survivors aged ≥18 years in the United States.

Methods: The study used data from the 2018 Behavioral Risk Factor Surveillance System; respondents had a history of cancer diagnosis and had completed cancer treatment. The study included 2,492 (weighted = 664,703) cancer survivors with information on e-cigarette use and COPD. Multivariable logistic regression was used to analyze the cross-sectional association between e-cigarette user status and COPD, adjusting for conventional cigarette smoking and demographic variables including age, sex, education, race, and body mass index. All analyses were carried out with SAS version 9.4.

Results: Of the 2,492 cancer survivors in this study, 15.34% self-reported a history of COPD. The weighted prevalence of current and former e-cigarette use was 2.00% and 10.96%, respectively. After adjusting for covariates, current e-cigarette users had significantly higher odds of being diagnosed with COPD (OR = 4.49, 95% CI: 1.47-13.73) than never e-cigarette users. Former e-cigarette users were also more likely to report being diagnosed with COPD (OR = 3.12, 95% CI: 1.68-5.80) relative to never e-cigarette users.

Conclusion: Although the study is cross-sectional, the results suggest that e-cigarette use is independently associated with COPD in cancer survivors. These findings highlight the need for future studies to analyze the longitudinal risk of COPD with e-cigarette use in cancer survivors.
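
The analysis above was carried out in SAS 9.4. For illustration only, a conceptually similar (unweighted) logistic regression in Python might look like the sketch below; the file and column names are hypothetical placeholders, and the BRFSS complex survey design and sampling weights that the actual analysis accounts for are ignored here for brevity.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data frame of cancer survivors; all names are placeholders.
    # copd: 1 = self-reported COPD, 0 = none
    # ecig: "current", "former", or "never" e-cigarette use
    # smoke: conventional cigarette smoking status
    df = pd.read_csv("brfss_2018_cancer_survivors.csv")

    model = smf.logit(
        "copd ~ C(ecig, Treatment(reference='never')) + C(smoke) "
        "+ age + C(sex) + C(education) + C(race) + bmi",
        data=df,
    ).fit()

    # Adjusted odds ratios with 95% confidence intervals
    print(np.exp(model.params))
    print(np.exp(model.conf_int()))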

Beating Google Captcha with Artificial Intelligence Computer Science - Time: Tue 12:30pm-1:30pm - Session Number: 4008 John T Haag, Zach Kovalenko, Thomas Shear, Dr. Ning Yu, Department of Computer Science, SUNY College At Brockport, 350 New Campus Dr, Brockport, NY 14420, United States John Haag, Zach Kovalenko, Thomas Shear

The Turing test was named after the godfather of computer science, Alan Turing. This important concept in computer science asks whether a computer can tell the difference between the intelligence of a human and that of another computer. The Google Captcha (Completely Automated Public Turing test to tell Computers and Humans Apart) is an example of a Turing test. The Google Captcha was created to prevent spam and to keep bots from filling in important data fields within websites. Our project aims to break the Google Captcha using modern Artificial Intelligence techniques. This would allow continued use of bots on websites that previously blocked them. The methodology used for our project comes in the form of object detection models. These models use Artificial Intelligence techniques such as neural networks to find a Google Captcha anywhere on a computer screen and output the likelihood that it is a Captcha along with a label for the detected item. Our goal is to train two object detection models to find a Google Captcha and to build a computer bot that clicks on the Captcha without any human interaction, thus solving the “I am not a robot” Captcha without images. Some research shows that a newer type of object detection model may beat a classic TensorFlow model. Thus, one model will be created with TensorFlow and deep learning, and the other with PyTorch and a YOLO (You Only Look Once) version 5 (v5) neural network. Our expectation is that the YOLO v5 model will beat the TensorFlow model on many benchmark measurements, including a loss of less than 0.05, high accuracy (more than 90%), and faster model testing on an NVIDIA GPU (Graphics Processing Unit).
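
As a rough illustration of the intended pipeline (detect the Captcha checkbox on screen, then click it), the sketch below loads a generic pretrained YOLOv5 model through torch.hub and clicks the highest-confidence detection with pyautogui. This is a sketch only: the project's actual models are trained on a Captcha dataset, whereas the generic 'yolov5s' weights and the confidence handling here are placeholders.

    import torch
    import pyautogui
    from PIL import ImageGrab

    # Load a YOLOv5 model from the Ultralytics hub. In the actual project this
    # would be a model fine-tuned on screenshots containing Google Captchas;
    # the generic 'yolov5s' weights are used here only as a placeholder.
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

    screenshot = ImageGrab.grab()     # capture the current screen
    results = model(screenshot)       # run object detection
    detections = results.xyxy[0]      # rows: [x1, y1, x2, y2, confidence, class]

    if len(detections) > 0:
        # Take the highest-confidence detection and click its center.
        best = detections[detections[:, 4].argmax()]
        x1, y1, x2, y2 = best[:4].tolist()
        pyautogui.click((x1 + x2) / 2, (y1 + y2) / 2)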

Breakdown of the Stokes-Einstein Equation in Reverse Micellar Solutions Chemistry - Time: Tue 2:00pm-3:00pm - Session Number: 609 Matthew D. Too, Dr. Markus M. Hoffmann, Department of Chemistry and Biochemistry, SUNY Brockport, 350 New Campus Drive, Brockport, NY 14420 Matthew Too

The well-known Stokes-Einstein equation relates the size of a moving particle in solution to its self-diffusion coefficient and the solution viscosity. The Stokes-Einstein equation is applicable for many liquids and solutions. However, there are also systems where the Stokes-Einstein equation “breaks down.” These systems include supercooled water, metal alloys, solutions of ionic liquids, and certain polymers. Prior studies in our lab have observed the breakdown of the Stokes-Einstein equation in solutions of poly(ethylene oxide) alcohol (C10E6) nonionic surfactant in cyclohexane. Specifically, unreasonably small aggregate radii were observed with varying C10E6 concentration in cyclohexane. This study considered whether an observable breakdown of the Stokes-Einstein equation would also occur in solutions of fixed C10E6 concentration in cyclohexane with varying water content. Therefore, corresponding new experimental results on self-diffusion coefficients and solution viscosity will be presented. Self-diffusion coefficients were found by NMR diffusion-ordered spectroscopy, and viscosities were measured by a rolling-ball viscometer. These data show that the Stokes-Einstein equation also breaks down in these water-in-oil reverse micellar solutions, resulting in unreasonably small average radii and aggregation numbers. However, the ratio of solvent and C10E6 self-diffusion coefficients provided average radii and aggregation numbers consistent with results published by others in the literature. In addition to these main findings, other interesting observations were made. For example, the ethylene oxide functional group of the C10E6 appears to diffuse at a slower rate than its alkyl chain functional group, and the cyclohexane self-diffusion coefficients appear to be independent of the water content. These and other observations will be included in the presentation.
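
For reference, the Stokes-Einstein equation referred to above relates the self-diffusion coefficient D of a spherical particle of hydrodynamic radius r to the solution viscosity η:

    D = \frac{k_B T}{6 \pi \eta r}

where k_B is the Boltzmann constant and T the absolute temperature. A “breakdown” means that the radii back-calculated from measured D and η values are physically unreasonable.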

Cyclometallated Iridium(III) Complexes for G-Quadruplex DNA Sensing Platforms Chemistry - Time: Tue 2:00pm-3:00pm - Session Number: 637 Rachel Horowitz, Bernard Okai, Ammar Hasan, Joshua Blose, and Carly Reed, Department of Chemistry/Biochemistry, SUNY Brockport, 350 New Campus Drive, Brockport, NY 14420 Rachel Horowitz, Bernard Okai

Cyclometallated iridium(III) complexes have been applied in G-quadruplex DNA sensing platforms due to their stability, long-lived photoluminescence, and selectivity for quadruplex DNA over other DNA forms. There remains a wide range of ligand combinations in the possible library of iridium(III) complexes, as well as quadruplex specificities, to explore in the pursuit of a structure-activity relationship. A series of iridium(III) complexes was synthesized, via microwave irradiation, containing the pizp (2-phenyl-1H-imidazo[4,5-f]-1,10-phenanthroline) ligand combined with a variety of C^N ligands. The metal complexes were characterized using NMR spectroscopy, elemental analysis, and X-ray crystallography. The pure iridium(III) complexes were combined with various forms of G-quadruplex, double-stranded, and single-stranded DNA to test binding selectivity and luminescence intensity enhancement in different buffer solutions. The time dependence of the photoluminescence was also studied by incubating the complexes with various DNA structures and buffers to further understand the effects of different chemical environments on photoluminescence. Finally, binding assays were performed to determine the binding constants (Kd) in order to quantify the selective binding of the various synthesized iridium(III) complexes to different G-quadruplex DNA and other DNA forms. The impact that structural changes on the iridium(III) complexes have on selectivity and binding will be shared during this talk.
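
For reference, the dissociation constant Kd mentioned above quantifies binding affinity. Assuming a simple 1:1 equilibrium between an iridium(III) complex (Ir) and a given DNA structure, an assumption made here only for illustration since the actual stoichiometry is established experimentally,

    K_d = \frac{[\mathrm{Ir}]\,[\mathrm{DNA}]}{[\mathrm{Ir{\cdot}DNA}]}

so that a smaller Kd corresponds to tighter binding of the complex to that DNA form.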

Evaluating the Performance of Caching Strategies in Diverse Information-centric Network Settings Computer Science - Time: Tue 12:30pm-1:30pm - Session Number: 4009 Rhonda-Lee Forbes and Adita Kulkarni, Department of Computing Sciences, SUNY Brockport, 350 New Campus Dr, Brockport NY 14420 Rhonda-Lee Forbes

Information-centric networking (ICN) is a future Internet architecture that rearchitects the current host-centric Internet into a content-centric one. Caching content within the intermediate nodes is one of the salient features of ICN. This in-network caching allows content requests to be served from intermediate nodes rather than origin servers, thus reducing content access time and the load on servers. Existing literature proposes many caching strategies for ICN; Leave Copy Everywhere (LCE), Leave Copy Down (LCD), Cache Less for More (CL4M), and ProbCache are among the most popular. The performance of caching strategies varies significantly according to the behavior of the underlying network nodes. We evaluate the performance of the aforementioned caching strategies in diverse network settings and analyze which strategy is most suitable in specific scenarios. In this work, we consider static networks, synthetic mobile networks, and real-world pedestrian and vehicular mobile networks. Specifically, we consider static academic networks (WIDE, GEANT, GARR), two synthetic mobility models – grid and random waypoint – a pedestrian network designed using a Stockholm pedestrian trace, and vehicular networks designed using a Rome taxicab trace and a Seattle bus trace. We conduct experiments in Icarus, a simulator extensively used for ICN research, using a YouTube access trace, a real-world request stream trace.
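
To make the difference between two of these strategies concrete, the sketch below shows, for a single content delivery, which routers on the path would store a copy under LCE and under LCD. It is a conceptual illustration in Python, not code from the Icarus simulator.

    def lce_cache_nodes(routers):
        # Leave Copy Everywhere: cache the content at every router on the path.
        return list(routers)

    def lcd_cache_nodes(routers):
        # Leave Copy Down: cache only at the router one hop downstream of the
        # point where the content was found (origin server or cache hit).
        return routers[:1]

    # Routers on the delivery path, ordered from the serving node to the requester.
    routers = ["r1", "r2", "r3", "r4"]
    print("LCE caches at:", lce_cache_nodes(routers))  # ['r1', 'r2', 'r3', 'r4']
    print("LCD caches at:", lcd_cache_nodes(routers))  # ['r1']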

Evaluating the Predictability of Automation's Role in Occupational Unemployment Economics - Time: Tue 3:30pm-4:30pm - Session Number: 717 Catherine McConnell and Dr. Cameron Harwick, School of Business and Management, SUNY Brockport, 350 New Campus Dr., Brockport, NY 14420 Catherine McConnell

This paper evaluates the results of the 2013 Frey and Osborne paper “The Future of Employment: How Susceptible are Jobs to Computerisation?”, in which, using machine learning techniques, 700 occupations are given individual probabilities of computerization. To do this, I compared Frey and Osborne’s predicted probability of job loss to actual job loss as reported by the United States Bureau of Labor Statistics from 2013 to 2018. This involved performing a regression analysis comparing x, the change from 2013 to 2018 in each BLS occupational category’s percent deviation from the average unemployment rate for each year, with y, the weighted average probability of computerization for each BLS occupational category based on Frey and Osborne’s findings. Preliminary results show little correlation between Frey and Osborne’s predictions and actual trends in employment. Frey and Osborne’s paper is often used to inform important economic policy decisions and was used by President Obama in creating his recovery plan after the 2008 recession. If Frey and Osborne’s predictions are incorrect, then policies aimed at preventing widespread unemployment may be focused on fighting the wrong causes of unemployment or on providing aid to the wrong occupational groups.
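
The regression described above can be reproduced conceptually with a few lines of Python. The arrays below are hypothetical placeholders standing in for the BLS-derived x values and the Frey-and-Osborne-derived y values, not the study's data.

    import numpy as np
    from scipy import stats

    # One hypothetical value per BLS occupational category (placeholders).
    # x: change from 2013 to 2018 in the category's deviation from average unemployment
    # y: weighted average probability of computerization (Frey and Osborne)
    x = np.array([-0.8, 0.3, 1.2, -0.1, 0.5, 2.0, -1.5, 0.9])
    y = np.array([0.35, 0.62, 0.71, 0.40, 0.55, 0.83, 0.21, 0.66])

    result = stats.linregress(x, y)
    print("slope:", result.slope)
    print("r-squared:", result.rvalue ** 2)  # a low r-squared indicates little correlation
    print("p-value:", result.pvalue)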

Inexpensive Search & Rescue UAV with Mission-Planning and Machine Learning Human Detection Computer Science - Time: Tue 12:30pm-1:30pm - Session Number: 4007 Dennis Pavlyuk, Foluso Odeyale, Ryan Ellis, and Dr. Ning Yu, Department of Computer Science, State University of New York Brockport, 350 New Campus Dr, Brockport, NY 14420 Dennis Pavlyuk, Ryan Ellis

There are many unmanned aerial vehicles (UAVs) available for search and rescue missions in the consumer market. The UAV industry is oversaturated with gyro-stabilized multicopters that support cameras attached to gimbals, offer artificial-intelligence-assisted flight, and provide digital video transmission from impressive distances. The problem is that the majority of them are prohibitively expensive for small governments and municipalities to own. While these drones allow for high-definition video, the attachment of additional sensors, and stable flight, their short flight times and exorbitant prices limit their utility. We seek to show a solution to this problem by proving the efficacy of an inexpensive fixed-wing UAV.

We propose to make a relatively inexpensive fixed-wing unmanned aerial vehicle that uses a model-based object-detection algorithm (tinyYOLOv3) to interpret images in real time and identify humans on the ground. There has been a previous attempt by Johnatehn Mendenhall to create a similar fixed-wing search and rescue UAV, but it failed due to (a) pilot error, (b) excess weight, and (c) unoptimized design of the aircraft. We have addressed problem (b) by replacing the UAV's companion computer, an NVIDIA Jetson Nano, with a Raspberry Pi Zero W and Intel Movidius NCS, and also by replacing the companion computer's external battery pack with a connection to the power-distribution board of the flight controller. This is a total weight reduction of 370 grams. We have addressed problem (c) by prototyping with consumer-grade aircraft models used by RC FPV hobbyists, such as the ZOHD Nano Talon and Flite Test Blunt-Nose Versa Wing. There has been reported success by MKME Labs in creating a search and rescue UAV using the ZOHD Nano Talon; however, they use Access Point Beacon Frames identified with an ESP8266 chip to locate individual mobile devices rather than the image object-detection approach of a bottom-facing camera.
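
As a rough sketch of the human-detection step, the code below runs a tiny-YOLOv3 network on a single frame using OpenCV's DNN module. The configuration and weights file names are placeholders, and the deployed system, which pairs a Raspberry Pi Zero W with an Intel Movidius NCS, would typically use an OpenVINO-based pipeline rather than this plain OpenCV one.

    import cv2

    # Placeholder paths; a real deployment would load the team's trained weights.
    net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")

    frame = cv2.imread("aerial_frame.jpg")  # one frame from the downward-facing camera
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    # Each detection row: [center_x, center_y, width, height, objectness, class scores...];
    # class 0 in the COCO label set is "person".
    for output in outputs:
        for det in output:
            person_score = det[5]
            if person_score > 0.5:  # confidence threshold (placeholder value)
                print("possible person near normalized center:", det[0], det[1])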

Measuring the Effect of Water on the Density, Viscosity, and Self-Diffusion Coefficient of Polyethylene Glycols Chemistry - Time: Tue 2:00pm-3:00pm - Session Number: 637 Rachel H. Horowitz, Markus M. Hoffmann, SUNY Brockport, Department of Chemistry and Biochemistry, 350 New Campus Drive, Brockport, NY 14420 Rachel Horowitz

Many organic solvents currently used for chemical synthesis and separations are toxic, flammable, and overall hazardous. Green solvents have been studied as more environmentally friendly solvent options. Polyethylene glycol (PEG) is one such green solvent because it is biodegradable, nontoxic, and has low vapor pressure. While PEGs have been explored in chemical synthesis, a physicochemical characterization of polyethylene glycol as a solvent has been lacking. Therefore, molecular dynamics simulations are planned in our laboratory, but these require verification of the parameters used to describe the intermolecular interactions against experimental density, viscosity, and self-diffusion coefficient data. However, published literature data are limited to just density and viscosity, and only up to tetraethylene glycol. Therefore, these physical properties were measured from 25 to 85 degrees Celsius using a vibrating-tube density meter, a rolling-ball viscometer, and diffusion-ordered proton NMR spectroscopy (DOSY). Water is the most common impurity because polyethylene glycols are hygroscopic. Rather than removing water, we intentionally added water to inspect the effect of the water present on the physical properties. It was found that adding up to a water mole fraction of 0.1 affected the physical properties very little. Densities decreased by no more than 0.5%, viscosities showed an inconsistent pattern, increasing for some PEGs and decreasing for others by up to about 5%, and self-diffusion coefficients remained constant within measurement uncertainty. From diethylene glycol to nonaethylene glycol, self-diffusion coefficients decreased and viscosities increased approximately linearly with the number of ethylene oxide repeat units. However, densities unexpectedly showed an unclear trend with the number of ethylene oxide repeat units. These observations hold true for all investigated temperatures. A summary of these results and other interesting findings will be presented.
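
For reference, self-diffusion coefficients are extracted from pulsed-field-gradient DOSY experiments through the Stejskal-Tanner relation, given here in its standard form (the specific pulse sequence and acquisition parameters are those chosen by the authors):

    I = I_0 \exp\!\left[ -D \, \gamma^2 g^2 \delta^2 \left( \Delta - \tfrac{\delta}{3} \right) \right]

where I/I_0 is the signal attenuation, γ the gyromagnetic ratio, g and δ the gradient strength and duration, and Δ the diffusion delay.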

Molecular Dynamic Simulations of Solutions of Water in Monodisperse Polyethylene Glycols Chemistry - Time: Tue 2:00pm-3:00pm - Session Number: 609 Matthew D. Too, Dr. Markus M. Hoffmann, Department of Chemistry and Biochemistry, SUNY Brockport, 350 New Campus Drive, Brockport, NY 14420 Matthew Too

Polyethylene glycol (PEG) is a polymer with the chemical formula H(OCH2CH2)nOH that is used in a variety of fields ranging from medicine to the chemical industry. The characteristics of this polymer, including a negligible vapor pressure, low toxicity, low flammability, and biodegradability, make it a good alternative to traditional solvents for chemical syntheses. To further support the use of PEG as a green solvent, a better physicochemical understanding of PEG as a solvent is desirable. One particular aspect is that PEG is hygroscopic (absorbs moisture from the air) and tends always to contain some water in open-air environments. Prior work in our laboratory found that, when small amounts of water were added to PEG, the water was unexpectedly observed to self-diffuse more slowly than the PEG for tetraethylene glycol and higher PEG oligomers at certain experimental conditions of temperature and water concentration. As an explanation of the slower water self-diffusion, we hypothesize that the water in higher PEG oligomers aggregates in clusters that are relatively stationary. To test this hypothesis, we will use molecular dynamics (MD) simulations, which provide 3D movies of the movement and relative position of the water in the PEG solvent. MD simulations will allow for a quantitative analysis of the dynamics present. For example, the extent of hydrogen bonding will be quantified, and structural information will be obtained from radial distribution functions. Moreover, densities, self-diffusion coefficients, and viscosities will be evaluated and compared to experimental results for further validation of the MD method used to describe the intermolecular interactions present in the system. To the best of our knowledge, these MD simulations represent a new research direction because MD simulations of PEGs reported in the literature have focused thus far on PEG as a solute and not as a solvent.
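
For reference, the self-diffusion coefficients to be compared with experiment are typically obtained from MD trajectories through the Einstein relation, using the mean-squared displacement of each species:

    D = \lim_{t \to \infty} \frac{\left\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^{2} \right\rangle}{6t}

where the angle brackets denote an average over the molecules of that species and over time origins.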

Molecular Interaction of Imidazolium-based Ionic Liquids with a DNA-Oligonucleotide Chemistry - Time: Tue 11:00am-12:00pm - Session Number: 3576 Josh Raymond, Michelle Seifert, and Dr. Mark Heitz, Department of Chemistry, SUNY Brockport, 350 New Campus Drive, Brockport NY 14420 Joshua Raymond

With the rise in popularity of ionic liquids (ILs), the viability of ILs as a potential green solvent for DNA was studied. A self-complementary 14-mer (7-TA) double-stranded DNA oligonucleotide and a representative set of ILs from the imidazolium family were selected to determine the degree to which the ILs intercalate within the DNA. The set of ILs included 1-hexadecyl-3-methyl-imidazolium chloride (C16mim), 1-decyl-3-methyl-imidazolium chloride (C10mim), and 1-butyl-3-methyl-imidazolium chloride (C4mim). Steady-state and time-resolved fluorescence measurements were made using the dye 4′,6-diamidino-2-phenylindole (DAPI) in buffer solutions with varying IL concentrations. DAPI was selected because it is known to bind to the minor groove of AT-rich sections of DNA. Various control experiments were performed, against which we compared the results from DNA/IL solutions. Steady-state excitation and emission spectral maxima shift for DAPI/DNA in the presence of the C16mim and C10mim ILs. This shows that both of these ILs bind to the minor groove of the DNA and displace the DAPI. The observed thresholds for IL concentration were ~5 µM and ~40 µM, respectively. The C4mim had no significant effect on the DAPI/DNA complex. Time-resolved measurements, including excited-state intensity and anisotropy decays, also support the conclusion that ILs displace DAPI from the minor groove, though apparently at a lower IL concentration threshold than the steady-state measurements suggest. The lifetime and anisotropy time constants suggest that the IL threshold concentrations are ~3 µM and ~10 µM. Similar to the steady-state results, the time-resolved measurements indicate that the C4mim has no discernible effect on the DAPI/DNA complex. Differences between the threshold concentrations will be discussed.

Molecular Solvation in Phosphonium Ionic Liquids Chemistry - Time: Tue 2:00pm-3:00pm - Session Number: 609 Rachel I. Riga and Mark P. Heitz, Department of Chemistry, The College at Brockport, SUNY, 228 Smith Hall, 350 New Campus Drive, Brockport, NY 14420. Rachel Riga

The goal of this research is to determine the solvation dynamics in four phosphonium ionic liquid (PIL) + methanol (MeOH) binary mixtures using steady-state and time-resolved fluorescence spectroscopy. Rose Bengal is a spectrally sensitive fluorescent molecule and is used in these experiments to probe the IL mixtures. Neat IL and MeOH solvents were used to form an array of IL mixtures in which Rose Bengal was dissolved. The Rose Bengal steady-state data show a systematic blue shift as the PIL mole fraction (xPIL) is varied from 0.05 to 0.2. In addition to the spectral position, the solute emission intensity is quenched. The data show that quenching is most effective at xPIL ~ 0.1, suggesting that the solvent-solute interactions are most distinctive in this range of mole fraction. Time-resolved intensity decays and anisotropies were measured to assess Rose Bengal’s dynamical response. Similar to the emission intensity, the intensity decays show a minimum value at xPIL ~ 0.1, confirming that the solvent-solute interactions are most distinctive at this mole fraction. The time-dependent spectral shift (e.g., center of gravity) and associated solvation correlation function, C(t), show that the solvation of Rose Bengal occurs at a faster rate in solutions of lower PIL mole fraction. The shift dynamics occur on a sub-ns time scale. Moreover, the anisotropy is complex, requiring two time constants to describe the rotational dynamics, with the dominant fraction of the motion displaying a ~180 ps time constant in neat MeOH. Rotational dynamics are proportionately slower with increased amounts of IL. Overall, the experimental data show that Rose Bengal is better solvated and more relaxed at MeOH-rich mole fractions.
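
For reference, the solvation correlation function mentioned above is constructed from the time-dependent emission frequency (here, the spectral center of gravity) in the standard way:

    C(t) = \frac{\nu(t) - \nu(\infty)}{\nu(0) - \nu(\infty)}

where ν(0), ν(t), and ν(∞) are the emission frequencies immediately after excitation, at time t, and at equilibrium, respectively; a faster decay of C(t) corresponds to faster solvation.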

Motives and Manifestation of Jean Gerson's Biases Towards Women History - Time: Wed 12:00pm-1:00pm - Session Number: 928 Marissa Scharlau, Professor Katherine Clark Walter, Department of History, The College at Brockport, 350 New Campus Drive, Brockport, NY 14420 Marissa Scharlau

The French theologian Jean Gerson was one of the leading authorities on mysticism in the Catholic Church during the fourteenth and fifteenth centuries. His work on “discernment of spirits” provided the first comprehensive framework for determining the validity of mystical experiences and was used for centuries by Church leaders. Historical analyses of Gerson’s legacy have associated his teachings on mysticism with an increase in the oppression of women, with several historians linking his work to a decrease in female authority within the Church and to the Malleus Maleficarum, a treatise which endorsed the extermination of witches. This paper addresses the causes of Gerson’s contribution to the oppression of women by examining the biases towards women found in his writing and hypothesizing about the motives for those biases. This topic exemplifies a broader tendency within male institutions throughout history to codify the biases of the dominant group in order to justify and secure their power. Through a close reading of Gerson’s texts in translation using the methodology of intellectual and cultural history, this paper attempts to contribute to the current historiography on the topic by comparing a critique of relevant works with an original analysis of one of Gerson’s most influential writings, “On Distinguishing True from False Revelations.” It argues that although Gerson supported some female saints like Joan of Arc, and expressed mistrust of lay mysticism generally, a study of his work reveals deep misogyny and gender bias against women, which manifested itself in the rhetorical language he used and in his predominant use of women as examples of false mysticism.

Moving Through Liminal and Queer Space Dance - Time: Mon 3:00pm-4:00pm - Session Number: 248 Lucy Mundschau, Janet Schroeder PhD, Department of Dance, State University of New York Brockport, 350 New Campus Drive, Brockport, NY 14420 Lucy Mundschau

This research investigates the relationship between queerness and liminality through dance. Dance performance and movement studies offer a breadth of avenues to explore liminality and Queerness in conversation. In this research, I use the term liminality to refer to a state of being in transition and undefined. A dancing body existing in liminal space encompasses a sense of being out of bounds. I argue that the fluidity of gender and sexuality makes it liminal, and I endeavor to find how Queer identities and bodies are constantly in this space of transition, mobilizing in and through liminality.

By engaging site-specific dance composition, as well as utilizing somatic practices and queer theory, this research seeks to express the liminality of the movement and performance space(s) and highlight the queer narrative. The temporal nature of liminality is also worth noting. Experiencing liminality, and being locked in this unsettled state of in-between destination, creates a new perspective for understanding time, as well as reality (the present) and memory (the past). When reality and memory blur, it creates a sense of nostalgia as the past becomes revisited in the present context, making it essentially queer by highlighting the transition and passage of time.

I connect this experience of liminality with the queer experience, postulating that liminal spaces are inherently queer. I ask: when a queer body enters a space, does that space become liminal? When a body enters a liminal space, do they enter a state of queerness? How can existing in liminality and embodying this concept of transition become empowering?

This research will be documented through dance choreography captured on video, and a portion of the research will also be written. Excerpts of this writing and the process of writing will be shared as part of the final performance and presentation.

Non-Emergency Response Time Prediction using Deep Learning Computer Science - Time: Tue 11:00am-12:00pm - Session Number: 3673 Jillian Magyar and Adita Kulkarni, Department of Computing Sciences, SUNY Brockport, 350 New Campus Dr, Brockport NY 14420 Jillian Magyar

Resource allocation for the management of non-emergency incidents is an important problem that needs to be addressed for the smooth functioning of cities. In this work, we investigate how long it takes to resolve non-emergency requests in cities, which enables efficient resource planning for future incidents. Prior work on non-emergency incidents uses simple models such as gradient boosting regression, random forests, and Gaussian conditional random fields for prediction problems, and these do not effectively capture the complex dependencies in the data. In contrast to previous work, we design a deep-learning-based model that captures the complex underlying patterns in the data and accurately predicts future response times for non-emergency requests based on historical data. Our model is an encoder-decoder sequence-to-sequence Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN). We perform extensive experiments on the publicly available NYC 311 service requests provided by NYC Open Data. We carefully preprocess the data to deal with missing values and outliers, since these make the prediction task challenging. We use Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) as our performance metrics. We anticipate that our LSTM-based model will accurately predict future response times with minimal RMSE and MAE values.
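To make the model architecture concrete, the sketch below shows one possible encoder-decoder sequence-to-sequence LSTM of the kind described above, written in Python with Keras. It is a minimal illustration only, not the authors’ implementation: the sequence lengths, layer sizes, and synthetic placeholder data are assumptions standing in for the preprocessed NYC 311 sequences, and the RMSE and MAE computed at the end simply mirror the metrics named in the abstract.

```python
# Minimal sketch (not the authors' implementation) of an encoder-decoder
# sequence-to-sequence LSTM for predicting future response times.
# Assumes preprocessed data shaped (samples, timesteps, features).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IN_STEPS, OUT_STEPS, N_FEATURES = 14, 7, 1  # illustrative horizon choices

# Encoder: summarizes the historical sequence into its final hidden state.
encoder_inputs = keras.Input(shape=(IN_STEPS, N_FEATURES))
_, state_h, state_c = layers.LSTM(64, return_state=True)(encoder_inputs)

# Decoder: unrolls OUT_STEPS times, seeded with the encoder state.
decoder_inputs = layers.RepeatVector(OUT_STEPS)(state_h)
decoder_out = layers.LSTM(64, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])
outputs = layers.TimeDistributed(layers.Dense(1))(decoder_out)

model = keras.Model(encoder_inputs, outputs)
model.compile(optimizer="adam", loss="mse",
              metrics=[keras.metrics.RootMeanSquaredError(), "mae"])

# Synthetic placeholder data in place of the preprocessed NYC 311 sequences.
X = np.random.rand(256, IN_STEPS, N_FEATURES)
y = np.random.rand(256, OUT_STEPS, 1)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# Report the same metrics the abstract names: RMSE and MAE.
preds = model.predict(X, verbose=0)
rmse = np.sqrt(np.mean((preds - y) ** 2))
mae = np.mean(np.abs(preds - y))
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}")
```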

On the Basis of Politics: Public Approval of the Supreme Court Political Science - Time: Wed 1:30pm-2:30pm - Session Number: 1035 Melissa Barnosky and Dr. Susan Orr, Department of Political Science & International Studies, The College At Brockport State University of New York, 350 New Campus Drive, Brockport, NY 14420 Melissa Barnosky

The U.S. Supreme Court is a chief institutional actor in the American constitutional system of government. Maintaining the Court’s legitimacy is therefore fundamental to preserving American democracy. Public approval of the Supreme Court is a significant aspect of the Court’s legitimacy as an institution. Over the last century, the Court’s decisions have become increasingly salient to the public, as they concern some of the most polarized topics in American politics. This has prompted serious concern for the institution’s legitimacy; if citizens come to view the Court as being as political as the Presidency and Congress, then the judicial branch will no longer be seen as possessing a distinct role in American politics—a role premised on the perception that the Court exercises legal rather than political reasoning. However, what happens if the legal reasoning that justices apply to constitutional interpretation becomes politicized in the public’s mind? Could this undermine the Court’s legitimacy? This study will use survey experiments to determine to what extent the public uses constitutional interpretation as a proxy for ideology. Specifically, it will investigate how closely individuals’ approaches to interpreting text align with their political ideologies. It will further seek to discover whether average citizens are able to take their expressed approaches to interpreting the Constitution and apply them to hypothetical Supreme Court case decisions, or whether they contradict their expressed approaches by instead applying their political ideologies to such decisions. The latter would indicate that public approval of the Supreme Court is based on ideology rather than interpretive approach. If ideological alignment increases approval, then the public legitimacy of the Supreme Court rests on a foundation of politics. This means that the Court’s legitimacy in the eyes of the public is contingent on politics itself rather than on the Court’s ability to remain above politics as an independent institution.

Picture Books That Normalize Same-Sex Relationships English & Literature - Time: Tue 3:30pm-4:30pm - Session Number: 723 Emily Kincade, Dr. Megan Norcia, Department of English, The College at Brockport, 350 New Campus Drive 14420. Emily Kincade

LGBTQ characters are severely underrepresented in children’s literature. My project was to compile a reading list so that these books are more easily accessible. Exposure to queer relationships at an early age helps promote acceptance of other people, and of themselves if the child experiences same-sex attraction. The books Heather Has Two Mommies (1989) by Leslea Newman and Maiden & Princess (2019) by Daniel Haack show the progression within children’s literature toward acceptance of LGBTQ relationships. Maiden & Princess is the more recent and progressive of the two, as it is one of the only books I have found that focuses exclusively on the relationship. Most children’s books focus on the couple in relation to family, as this lessens the shock of including a gay couple. My research involved reading book reviews to see what critics are saying; some critics have noted that Maiden & Princess is one of the first queer children’s books to include people of color. Scholars have also noted that Heather Has Two Mommies was a groundbreaking but highly challenged book. I searched humanities databases, where I found an interview with the author Leslea Newman in which she discusses the new life of her book and revisits its creation, when she was still struggling even to get it published. This shows the changes that have occurred in the field in just over twenty years. Even though this genre within children’s literature is so new, there have been rapid changes in the level of openness toward it. The results of my research show that the field has changed drastically since the earlier publications, but that one of the biggest roadblocks has been people’s reluctance to include these books.

Precipitation Prediction Using Multiple AI Approaches Computer Science - Time: Wed 12:00pm-1:00pm - Session Number: 913 Timothy Haskins, Emma Doyle, Reid Hoffmeier, and Ning Yu, Department of Computer Science, SUNY Brockport, 350 New Campus Drive, Brockport NY 14420 Timothy Haskins

Humanity has been recording weather data and attempting to predict the weather for thousands of years. As a result, there is a plethora of weather data and even more ways to express that data. This project aims to take advantage of the availability of that data and of modern AI to determine the best combination of data expression and algorithm for predicting whether the accumulated precipitation for Rochester, NY will be greater than zero inches on a given day. Based on research into similar projects, the accuracy goal for this project is above 70%. Ten years of daily weather data were calculated from weather stations in and around Rochester, NY. The data describe the distribution of different measurements, such as wind speed, temperature, dew point, relative humidity, and wind direction, for a given day. Multiple versions of the data are derived from the original dataset. There are purely categorical versions, purely quantitative versions, and hybrids of the two pure sets. There are also versions of the data that have been normalized with different normalization methods and some that are not normalized at all. Each of these data expressions is run through different algorithms to determine which works best for each expression. The algorithms used in this project are K-Nearest Neighbor, Deep Neural Net, Wide Neural Net, Deep and Wide Neural Net, SVM, LSTM, and Transformers. Each version of the data is split into a training set (80%) and a testing set (20%). After an algorithm has been trained or fit to the training sample of a set, it is used to predict the precipitation category for the test set, and its predictions are compared to the known precipitation to calculate accuracy.
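As an illustration of the split-train-evaluate workflow described above, the sketch below runs one assumed expression of the data through two of the listed algorithms, K-Nearest Neighbor and SVM, using an 80/20 split and accuracy as the metric. It is not the project’s actual pipeline: the feature columns and synthetic data are placeholders, and only one normalization method (z-scoring) is shown.

```python
# Illustrative sketch (assumed feature names, synthetic data) of the 80/20
# split-and-compare workflow described above, using two of the listed
# algorithms: K-Nearest Neighbor and SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_days = 3650  # roughly ten years of daily records
# Stand-ins for daily distribution features: wind speed, temperature,
# dew point, relative humidity, wind direction.
X = rng.normal(size=(n_days, 5))
y = (rng.random(n_days) < 0.4).astype(int)  # 1 = measurable precipitation

# 80% training, 20% testing, as in the project description.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# One normalization method (z-score); the project compares several.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="rbf"))]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.2%}")  # stated goal: above 70%
```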

Queer Dance in Liminal Spaces Dance Lucy Mundschau, Dr. Janet Schroeder, Department of Dance, SUNY Brockport, 350 New Campus Drive Brockport, NY 14420 Lucy Mundschau

This research investigates bodies moving through spaces of liminality. Liminality is a state of being in transition. Existing in liminal space encompasses a sense of being out of bounds, positioned in the negative space of uncertainty. To be liminal is to be undefined. I speculate that locational, temporal, emotional, and bodily states of transition are interconnected. This project engages with the difficulties and victories of understanding identity, particularly Queer identities, in liminal space(s). I argue that the fluidity of gender and sexuality makes them liminal. I endeavor to find how Queer identity, expressed both externally and internally, is constantly in this period of transition, mobilizing in and through liminality.

Experiencing liminality, and being locked in this unsettled state of being in-between, creates a new perspective for understanding and differentiating reality (the present) and memory (the past). The nostalgia of liminal spaces connects them with the past, making them seem familiar. And yet, this familiarity, instead of providing comfort, creates unrest and turmoil. I connect this experience of liminality with the queer experience, postulating that liminal spaces are inherently queer. I ask, when a queer body enters the space, does the space become liminal? When a body enters a liminal space, do they enter a state of queerness? How can existing in liminality and embodying this concept of boundlessness become empowering?

I will be utilizing studies in queer theory, dance composition, somatic practice, and site-specific dance. The research will be documented through dance choreography captured on video. The choreography will seek to express the liminality of the space(s) as well as highlight the queer narrative of the transitional, bodily state. A portion of the research will also be written. Excerpts of this writing and the process of writing will be shared as part of the final performance and presentation.

Reactive Measures: A Comparative Analysis of Disinformation Policy in the US and the EU Political Science - Time: Wed 1:30pm-2:30pm - Session Number: 1035 Pablo Stein and Dr. Steven Jurek, Department of Political Science and International Studies, State University of New York at Brockport, 350 New Campus Drive, Brockport, NY 14420 Pablo Stein

Since allegations of Russian interference in the 2016 US presidential campaign and the breaking of the Cambridge Analytica scandal, disinformation has come to be considered a public problem in need of a policy response by decision-makers in both the US and the EU. Recent literature on disinformation on both sides of the Atlantic highlights the role of foreign influence campaigns (both real and potential) in spreading disinformation, focusing on Russia, Iran, and China. Much of this literature approaches disinformation policy from a security studies perspective and focuses on the ability of military and intelligence organizations to counter influence operations through inter-agency and transatlantic cooperation. This article seeks to broaden the disinformation policy debate by considering the effect of institutional differences between the US and the EU on the formulation of disinformation policy. Based on a process-tracing analysis of institutions that are key to disinformation policy, I argue that policymakers and government agencies in these polities are responding to different “policy legacies”—as described by Skocpol, Weir, and Hall—with American institutions having been shaped by the Cold War and Soviet “active measures” in ways that European institutions were not. Because of this, I argue that the US is geared more narrowly toward countering foreign influence campaigns at the expense of addressing domestic disinformation, while the EU is set to take a more holistic approach to disinformation by combining regulatory, digital literacy, and resilience approaches with traditional military and intelligence defense.

The Potential Role for Anoctamin 1 in Zebrafish Gastrointestinal Mucus Production Biology - Time: Mon 4:30pm-5:30pm - Session Number: 3050 Author: Pasoon Ahmad, Faculty Mentor: Adam Rich, Institution: The College at Brockport, State University of New York, Address: 350 New Campus Drive, Brockport, NY 14420 Pasoon Wazir

Anoctamin 1 (Ano1) is a calcium-activated chloride channel that influences membrane potential in gastrointestinal (GI) pacemaker cells, total solute transport in epithelial cells, and mucus secretion in the respiratory system in mice. In the GI tract, mucus provides a protective barrier that helps to prevent infection from luminal contents, and increased mucus production indicates infection. It is unknown whether Ano1 plays a role in GI mucus production in humans or in zebrafish. The zebrafish GI tract is morphologically similar to that of humans. Our goal is to determine whether Ano1 functions in zebrafish GI mucus secretion. Goblet cells secrete mucins and epithelial cells secrete water, resulting in mucus production. We hypothesize that Ano1 functions in goblet cells to promote mucin secretion and in epithelial cells to promote water secretion. We predict that Ano1 inhibition will reduce mucus secretion. Zebrafish larvae were incubated with dextran sodium sulfate (DSS) from 3 days post fertilization (DPF) until 6 DPF to stimulate inflammation and GI mucus production. Larvae were fixed in 4% paraformaldehyde and stained with Alcian Blue to identify goblet cells and intestinal mucus. DSS-treated larvae have increased intestinal mucus and increased goblet cell numbers. Experiments are underway to examine the role of Ano1 in GI mucus production using transgenic zebrafish lacking Ano1. Pharmacological inhibition with benzbromarone and niflumic acid will also be used to examine the role of Ano1 in GI mucus production.
