Ithaca College Digital Commons @ IC

Ithaca College Theses

2020

The Reliability of Countermovement Jump Variables When Measured Using a Cell Phone App

Dakota Brovero, Ithaca College


Recommended Citation: Brovero, Dakota, "The Reliability of Countermovement Jump Variables When Measured Using a Cell Phone App" (2020). Ithaca College Theses, 430. https://digitalcommons.ithaca.edu/ic_theses/430

This Thesis is brought to you for free and open access by Digital Commons @ IC. It has been accepted for inclusion in Ithaca College Theses by an authorized administrator of Digital Commons @ IC.

THE RELIABILITY OF COUNTERMOVEMENT JUMP VARIABLES WHEN MEASURED USING A CELL PHONE APP

A Master's Thesis presented to the Faculty of the Graduate Program in Exercise and Sport Sciences, Ithaca College

______

In partial fulfillment of the requirements for the degree Master of Science

______

by Dakota Brovero

October 2020

Ithaca College School of Health Sciences and Human Performance Ithaca, New York

CERTIFICATE OF APPROVAL ______

MASTER OF SCIENCE THESIS ______

This is to certify that the Thesis of Dakota Brovero submitted in partial fulfillment of the requirements for the degree of Master of Science in the School of Health Sciences and Human Performance at Ithaca College has been approved.

Thesis Adviser: ______

Committee Member: ______

Candidate: ______

Chair, Graduate Program: ______

Dean of HSHP: ______

Date: 12.04.2020


ABSTRACT

Background: Testing is used by coaches to evaluate and identify program direction, identify player potential, and monitor fatigue. The countermovement jump (CMJ) test is commonly used to measure and monitor athletes' power production. To interpret change, coaches need to establish the expected level of variability in athletes' test performance. Research indicates different populations exhibit different levels of variability in test scores. Thus, coaches must understand the variability of the team/athletes being tested before drawing conclusions from test results. Objective: The purpose of this study was to measure the reliability of CMJ performance in collegiate athletes with and without the use of an arm swing. Methods: Eight athletes (5 female and 3 male) from Ithaca College's varsity athletics program completed six CMJs (3 allowing arm swing and 3 without arm swing) on two separate days. Jumps were recorded on participants' personal cellular phones and uploaded online for the research team to analyze. Movement time (MT) and flight time (FT) were measured with the mobile application My Jump 2 (XCode 5.0.5 for Mac OSX 10.9.2; Apple, Inc., USA). Jump height (JH) and the modified reactive strength index (RSImod) were calculated from MT and FT. Statistical Analysis: A 2-by-2 repeated measures analysis of variance was completed to examine differences in jump variables between jump conditions and between days. Intraclass correlation coefficients (ICC), 95% confidence intervals of the ICC, and coefficients of variation were used to identify reliability between the two test days for MT, FT, JH, and RSImod. Results: Data showed FT to be significantly larger in the countermovement jump with an arm swing condition relative to the countermovement jump without an arm swing condition. Examination of reliability indices showed weak reliability for MT, FT, and JH in the countermovement jump without an arm swing condition. MT, FT, and JH exhibited poor reliability in the countermovement jump with an arm swing condition. RSImod exhibited poor reliability under both jump conditions.

Conclusion: The results suggest that FT is the only variable to produce a statistically significant difference between the two jump conditions. Based on the current study's results, the My Jump 2 phone application does not appear to be a reliable tool for remotely measuring variables of CMJ performance in athletes.


ACKNOWLEDGEMENTS

I would like to thank all the athletes and coaches at Ithaca College for the opportunity to work with them each day for the past two years. I am forever grateful not only for the lessons I learned from you, but also for all the fun times we had that kept me levelheaded during this process. Thank you, Coach Brown, for taking a chance on me and hiring me as a graduate assistant; I would never have had this great opportunity if it was not for you. Thank you to all the friends I made at Ithaca; you made the experience so much more fun. Thank you to the "805 south boys"; you made my second year unforgettable, and I could not have asked for a better duo to live with throughout this entire process. And finally, thank you Mom, Dad, Jarrod, and Justice. You saw me at my best and worst, but were there for me for the entire process regardless. Thank you for always being there to love, support, and help me pursue my goals.


Table of Contents

ABSTRACT ...... iii

ACKNOWLEDGEMENTS ...... v

LIST OF TABLES ...... ix

LIST OF FIGURES ...... x

INTRODUCTION ...... 1

Statement of Purpose ...... 5

Hypotheses ...... 5

Assumptions of the Study ...... 5

Definition of Terms ...... 5

Delimitations ...... 7

Limitations ...... 8

Summary ...... 8

REVIEW OF LITERATURE ...... 9

Introduction ...... 9

Usefulness of Testing ...... 9

Program and Player Evaluation ...... 9

Identify Player Potential ...... 11

Fatigue Monitoring ...... 12


Measurement Validity and Reliability ...... 16

Validity ...... 17

Criterion validity ...... 17

Construct validity...... 18

Ecological validity ...... 18

Reliability ...... 20

Circadian rhythm ...... 20

Biological variability ...... 21

Population variability ...... 22

Measuring reliability ...... 23

Assessing Power ...... 24

Jump Performance to Monitor Fatigue...... 27

Reliability of CMJ Testing ...... 30

Summary ...... 34

METHODS ...... 36

Participant Characteristics ...... 36

Experimental Procedures...... 37

Measures...... 38

Statistical Analysis ...... 38

RESULTS ...... 40


DISCUSSION ...... 42

Limitations ...... 49

Practical Implications ...... 51

Summary ...... 51

SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS...... 53

Summary ...... 53

Conclusions ...... 54

Recommendations ...... 54

REFERENCES ...... 55

APPENDICES ...... 70

APPENDIX A ...... 70

APPENDIX B ...... 73

APPENDIX C ...... 75

APPENDIX D ...... 76


LIST OF TABLES

Table ...... Page

1. Descriptive statistics for both jump types, Day 1 and Day 2 averages ...... 40

2. Reliability statistics for variables measured under countermovement jump without an arm swing condition ...... 41

3. Reliability statistics for variables measured under countermovement jump with an arm swing condition ...... 41


LIST OF FIGURES

Figure ...... Page

1. Graph of theoretical training curve ...... 11

2. Graph of adaptation phases ...... 14

3. Graph of general adaptation syndrome ...... 15

4. Picture representing the interaction of reliability and validity in tests ...... 17

5. Force-velocity curve (dotted and dashed line) with power curve (dotted line) ...... 25

6. Force-time curve of CMJ measured on force plates represented by solid red line ...... 30

Chapter 1

INTRODUCTION

In every phase of life, decision making is an extremely important task. To form evidence-based decisions, an individual must collect and evaluate data or information from various sources. Tests are often used to gather the desired information (Morrow et al., 2011). A test is "a procedure, reaction, or reagent used to identify or characterize a substance or constituent" (Merriam-Webster.com, 2019). Testing and measurement are important components of many professions. For example, a mechanic working on a car needs to assess and identify what is not functioning in the car. The mechanic completes tests on different parts of the machine, collecting information to decide what needs to be fixed.

In exercise science, testing is used to assess athletes' and clients' capacities across aspects of physical and mental conditioning. Prior to data collection, coaches must identify the purpose of the testing battery. Doing so helps with the selection of appropriate tests. Subsequently comparing athletes' results with reference data (e.g., normative data or the athletes' previous results) gives meaning to the data gathered (Morrow et al., 2011). Without reference values, the test may not provide evaluative information.

In addition, a historical view can be taken, such as assessing the effectiveness of an off-season program on athlete development. Testing can also guide future program direction and exercise selection by identifying an athlete's risk of injury or re-injury (Harman, 2008).

A coach must consider the ability of the test to provide adequate information to ensure that the test selected collects meaningful data (Atkinson & Nevill, 1997). Atkinson and Nevill (1997) suggest that a test provides meaningful data when it produces valid and reliable information. A test is considered valid when it measures what it was intended to measure (Atkinson & Nevill, 1997). Tests are reliable when the results are consistent within and between testing sessions for the population being assessed (Weir, 2005). Multiple factors can affect the strength of a test's validity or reliability. By acknowledging these factors, coaches will be able to select and implement adequate tests to assess their players. If a coach fails to do so, incorrect assumptions can be made about the athletes' status or program effectiveness. This in turn can impact future decisions made by the coach and overall program effectiveness.

The production of power is an important component of jumping, change of direction, and sprint ability in athletes. During a soccer match, the athlete who is faster at stopping and re-accelerating has a much better opportunity to reach the ball before their opponent does. During a basketball game, rebounding success often belongs to the player who can jump the highest. Considering the importance of power to athletic performance, coaches invest a great deal of effort into testing and training this attribute in athletes (Vescovi & McGuigan, 2008). To measure the power production of an athlete, the total amount of force produced during an action must be measured over the time taken to complete the action (Newton & Kraemer, 1994). One action that can be used to measure power production in athletes is the vertical jump (VJ) (Knudson, 2009).

The VJ test is an umbrella term consisting of different types of jumps a coach may select to implement with athletes. Jump performance can be assessed by measuring an athlete's squat jump (SJ), countermovement jump (CMJ), and drop jump (DJ) (Arteaga et al., 2000; McGuigan, 2016a). The SJ begins with the athlete starting at a 90° knee bend and then maximally extending the hips, knees, and ankles to reach maximum height, allowing the coach to measure the concentric power of the athlete (Shalfawi et al., 2011). The CMJ is initiated with a quick downward movement immediately followed by rapid extension of the lower body (Ebben & Petushek, 2010). When completing the CMJ, coaches can test an athlete's jump performance without the use of their arms, the countermovement jump without arm swing (CMJ-WOS), or with the use of their arms, the countermovement jump with arm swing (CMJ-WAS). The DJ is performed by having the athlete step off a box of a certain height and jump vertically upwards as quickly as possible following ground contact (Raffalt et al., 2017). Many coaches will utilize the CMJ and DJ because of the opportunity to assess an athlete's ability to use the stretch-shorten cycle (SSC) (Lockie et al., 2012; Peng et al., 2019; Risso et al., 2017; Salaj & Markovic, 2011; Smirniotou et al., 2008).

As explained by Knudson (2009), VJ testing must be a reliable and valid assessment of athletic performance. Coaches may use the VJ specifically to identify whether a meaningful change in power production has occurred following a training intervention (Taylor et al., 2010). Taylor and colleagues (2010) suggest a meaningful change occurs when the results of the test fall outside the expected variation of the test. Thus, before meaningful change can be identified, the expected variation within a test needs to be established. Raffalt and colleagues (2017) add that variability in testing is different for every population (e.g., gender or age), and the results established in one population cannot be applied to another.

When measuring the reliability of the CMJ, studies have shown different populations to possess different levels of reliability. Reliability has been observed to be affected by the level of training and/or competition experience of the individual (Heishman et al., 2018; Moir et al., 2008; Rodríguez-Rosell et al., 2016; Sattler et al., 2012). To date, studies have examined the reliability of variables measured under CMJ conditions in professional athletes (Rodríguez-Rosell et al., 2016; Sattler et al., 2012), NCAA Division I athletes (Heishman et al., 2018), and NCAA Division II athletes (Moir et al., 2008). To our knowledge, research has not examined the reliability of CMJ variables measured in NCAA Division III athletes. Researchers have also compared differences in reliability values between males and females completing the CMJ. Data show females exhibit higher levels of variability overall when compared to males, but two limitations of these studies are that the researchers failed to control for age (Harrison & Gaffney, 2001) or for training experience (Nuzzo et al., 2011; Slinde et al., 2008). Furthermore, only two studies have examined the test reliability of CMJ-WOS and CMJ-WAS, both finding the CMJ-WOS to exhibit stronger reliability (Heishman et al., 2018; Slinde et al., 2008). To date, research has yet to examine the level of variability in CMJ-WOS and CMJ-WAS in NCAA DIII male and female athletes.

An important note regarding the reliability studies reviewed is that all testing was completed in person. When working with student-athletes, there are times of the academic year when athletes will not be at the facilities with their coach for extended periods. For these periods, it would be beneficial to have a reliable method to remotely assess and monitor athlete performance. Of the research reviewed, none of the studies examined the level of reliability of CMJ testing with and without the use of an arm swing under remote conditions.


Statement of Purpose

This study aims to examine the reliability of variables measured under remotely administered testing of CMJ-WOS and CMJ-WAS conditions in DIII male and female collegiate athletes.

Hypotheses

1. When comparing jump types, it is hypothesized that CMJ-WOS will exhibit less variability than CMJ-WAS.

2. When comparing intersession reliability, it is hypothesized that both jump conditions will exhibit strong reliability.

Assumptions of the Study

For this study, the following assumptions were made at the start of the investigation:

1. Participants are representative of trained Division III collegiate varsity athletes.

2. Participants will not be completing a training program with a focus on developing power production.

Definition of Terms

The following terms are operationally defined for the purpose of this investigation:

1. Countermovement Jump without Arm Swing (CMJ-WOS): countermovement jump initiated with a quick downward movement followed by a propulsive upward movement without the aid of an arm swing.


2. Countermovement Jump with Arm Swing (CMJ-WAS): countermovement jump initiated with a quick downward movement followed by a propulsive upward movement with the aid of an arm swing.

3. Neuromuscular fatigue: A decline in force, power, or velocity due to decreased performance of the neuromuscular system (Williams & Ratel, 2009).

4. Stretch-Shorten Cycle (SSC): The utilization of an active pre-stretch or lengthening of a muscle, immediately followed by active shortening of the same muscle (Strojnik & Komi, 2000).

5. Validity: The ability of a test to measure what it is intended to (Fraenkel et al., 2015).

6. Construct Validity: The degree to which a test is able to correctly assess the hypothetical construct that is established (Currell & Jeukendrup, 2013).

7. Criterion Validity: An objective measure comparing the similarity of a test result to results recorded on an established "gold standard" test (Currell & Jeukendrup, 2013).

8. Ecological Validity: The assessment of the test results' transferability to on-field or on-court performance (Bradshaw et al., 2007).

9. Reliability: The ability of a test to produce consistent results every time an individual is tested (Hopkins, 2000).

10. Intraday Variability: The variability that occurs due to the body cycling through daily processes of low morning performance and maximal activity in the afternoon (Sedliak et al., 2008).


11. Population Variability: The concept that different populations vary differently on tests, so the variation seen in one population cannot be assumed for another.

12. Biological Variability: The variability seen during performance testing due to the inability of the neuromuscular system to coordinate in the exact same fashion every time.

13. Movement Time (MT): The duration of time the athlete is on the force plate from the start of the jump to the point of take-off, made up of the weighing, unweighting, braking, and propulsion phases of the CMJ.

14. Flight Time (FT): The duration of time the athlete is in the air from the point of take-off to the point of landing.

15. Modified Reactive Strength Index (RSImod): A ratio that indicates how high a jump is relative to the amount of time it takes to complete (Kipp et al., 2016). The score is found by dividing FT by MT (written as a formula at the end of this list).

16. Impulse (IMP): Measure of the total amount of force multiplied by the total time taken during the propulsion phase (Harrison & Gaffney, 2001).

17. Peak Force (PF): The greatest amount of force produced at any point during the propulsion phase of the CMJ.

18. Rate of Force Development (RFD): The rate of change in force for a specific action; the longer the time taken to develop force, the lower the value (McLellan et al., 2011).
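For reference, the ratio in definition 15 can be written compactly in symbols (a restatement of the thesis's own definition, with FT and MT in seconds):

    \mathrm{RSI_{mod}} = \frac{FT}{MT}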

Delimitations

The delimitations of the study are:


1. College-aged female and male varsity athletes from Ithaca College were used as participants.

2. The phone application My Jump 2 was used to measure athletes.

3. Only MT, FT, JH, and RSImod were measured by the My Jump 2 application.

Limitations

The limitations of the study are:

1. The results are only generalizable to college-aged male and female varsity athletes.

2. The results may only apply to CMJ-WOS and CMJ-WAS tests.

3. The measurement variables selected may only indicate performance capabilities for the jump tests administered and are not accurate indicators of other types of jumps or actual sports skill performance.

Summary

Power is a quality that coaches try to track in each of their athletes. The CMJ test is used to measure lower body power and provides coaches with the opportunity to assess program effectiveness and fatigue daily. Data showing the reliability of variables measured under CMJ-WOS and CMJ-WAS conditions are limited. Despite this, the CMJ-WAS is utilized by many coaches at all levels to assess athletes. This study aims to examine the reliability of CMJ-WOS and CMJ-WAS in Division III male and female collegiate athletes. It is expected that the results of the study will clarify the reliability of performance in the current population of Division III collegiate athletes. This information could allow coaches to make more accurate assessments and decisions in player and team development, increasing the skills and abilities of the athletes.

Chapter 2

REVIEW OF LITERATURE

Introduction

This study examines the reliability of variables recorded under countermovement jump conditions when performed with (CMJ-WAS) and without (CMJ-WOS) the inclusion of an arm swing in male and female collegiate athletes. In the literature review, reasons why coaches use testing to assess athletes and how the countermovement jump (CMJ) can be used for fatigue monitoring are discussed. Next, the importance of specific measurement concepts (i.e., validity, reliability) is reviewed. To finish, the appropriateness of the CMJ test to assess power and athlete performance is discussed.

Usefulness of Testing

Morrow and colleagues (2011) consider a test to be a tool used to gather information from which conclusions are drawn. Testing is an important part of the support process and allows coaches to evaluate athletes' physical skills, abilities, and characteristics. Strength and conditioning coaches will select performance tests that exhibit similarities to the movements within the sport. Tests are generally used to evaluate program effectiveness and player status (Morrow et al., 2011), identify player talent (Reiman & Manske, 2009), and monitor the fatigue of the athlete (Taylor et al., 2010).

Program and Player Evaluation

Coaches can compare current results to those of previous athletes from the same or different samples (Morrow et al., 2011). Observing the results from the same sample at different time-points allows the coach to evaluate the effectiveness of a program and to evaluate the need for future changes (Morrow et al., 2011; Reiman & Manske, 2009).


Tse, McManus, and Masters (2005) utilized testing to assess the effectiveness of an eight-week core training program using core endurance and stability tests. After completing a pre- and post-test, the researchers found the training program improved participants' scores in the side bridge test compared to the control group, but there was no difference in the abdominal fatigue test (Tse et al., 2005). Such results allow coaches to assess the strengths of a program and to identify areas where improvement is required in the future.

Sheppard and Gabbert (2016) describe how testing can take a forward-looking point of view and help guide program direction by identifying the size of the adaptation "window" of an athlete. They suggest the "window" represents how much room for improvement the athlete has and where training is best focused (Sheppard & Gabbert, 2016). Coaches can identify the strengths and weaknesses of their athletes, which allows them to focus on their athletes' needs (McGuigan, 2016b; Morrow et al., 2011; Reiman & Manske, 2009; Sheppard & Gabbert, 2016). Using Figure 1, assume that an athlete is experienced in sprinting but lacks the desired strength demands of the sport. If the focus of training were on sprinting, the improvements experienced by the athlete would be nominal due to being near the end of their genetic potential. If the focus of training were on strength instead, the athlete would see a larger increase in strength for the same amount of time training. Therefore, to see the greatest return on training, the coach should spend time developing the strength of the athlete (Sheppard & Gabbert, 2016). If used effectively, testing provides coaches the opportunity to evaluate program effectiveness and provide program direction to further team success.


Figure 1. Graph of theoretical training curve. Adapted from "NSCA's Guide to Program Design," by Jay R. Hoffman, 2012, Champaign, IL: Human Kinetics, p. 25. Copyright by the National Strength and Conditioning Association. Reprinted with permission.

Identify Player Potential

Coaches often use performance testing to aid roster selection or to place players in the best position to help the team (Morrow et al., 2011). Some sports require positional specialization, and although superior performance-test scores do not guarantee success, they help coaches identify performance potential. The National Football League (NFL) combine is a multi-day event used to assess collegiate players' potential before deciding whether to draft them. Vincent and colleagues (2018) compared NFL combine data from 2005-2010 with on-field performance for different positions during competitions. The researchers used Pearson's correlation coefficient to assess the relationship between combine data and player positional statistics. For the quarterback position, performance in the 40-yd dash had the strongest relationship (r = .31 – .51) with on-field performance (i.e., number of pass attempts, total passing yards, number of rush attempts, total rushing yards, and average number of yards per rush attempt). For running backs, 40-yd time had a significant relationship (r = -.42 – -.29) with every positional statistic examined (i.e., total number of rush attempts, average number of attempts per game, total yards rushed, average number of yards per rush attempt, average number of yards per game, total number of touchdowns, and longest rush). For wide receivers, CMJ height was the only test to exhibit a relationship with the average number of yards per reception and first down percentage (r = .24 – .25). For linebackers, 40-yd time correlated the most (r = -.19 – -.17) with on-the-field performance (i.e., total number of tackles, number of solo tackles, and number of sacks). For defensive ends, CMJ height correlated (r = .30 – .35) with all four on-the-field performance metrics (i.e., total number of tackles, number of solo tackles, number of assisted tackles, and number of sacks). The only significant relationship calculated for the defensive tackle position was between the standing long jump (SLJ) and number of sacks (r = .26) (L. M. Vincent et al., 2018). It is clear from analysis of the data that there is no perfect correlation between test and on-field performance. Thus, performance tests can be used to identify potential but cannot guarantee success. This is especially important for players new to a program who lack experience and rely on test results to show their abilities and readiness to compete (Harman, 2008).

Fatigue Monitoring

When training or competing, the body is exposed to high amounts of stress, causing fatigue and acute decreases in performance. If appropriate recovery is allowed, the body will adapt/supercompensate, which increases resistance to the repeated stress and thus enhances performance (Figure 2) (Rountree, 2011). If the athlete is not given enough recovery time, the body will remain chronically stressed (Figure 3). This leads to a decrease in the performance ability of the athlete and increases the risk of injury (Richardson et al., 2008; Rountree, 2011). The negative response of the body is due to overtraining, which is the body's failure to adapt and recover from the accumulation of training stress (Richardson et al., 2008). The lack of recovery increases injury risk because the body cannot handle the volume of training. Overuse injuries such as muscle strains or stress fractures often result (Richardson et al., 2008). While stressing the body is necessary to achieve adaptations and to increase performance, coaches are required to provide a balance between levels of work and recovery for their athletes (Richardson et al., 2008; Rountree, 2011). As individuals adapt and recover at different rates, coaches are required to monitor and assess players' readiness to maximize adaptations and reduce injury risk (Rountree, 2011).


Figure 2. Graph of adaptation phases. Adapted from "The Athlete's Guide to Recovery: Rest, Relax & Restore for Peak Performance," by Sage Rountree, 2011, Boulder, CO: VeloPress, p. 6. Copyright by Sage Rountree. Reprinted with permission.


Figure 3. Graph of general adaptation syndrome. Adapted from "The Athlete's Guide to Recovery: Rest, Relax & Restore for Peak Performance," by Sage Rountree, 2011, Boulder, CO: VeloPress, p. 5. Copyright by Sage Rountree. Reprinted with permission.

Different types of assessments can be used to measure fatigue and monitor recovery from previous training. Laboratory tests such as blood lactate tests can be used to assess athlete recovery (Rountree, 2011), but these tests are expensive to implement and require technical expertise. Other tests observe changes in resting heart rate (RHR) or heart rate variability (HRV). Testing RHR allows an athlete to assess whether they are in a state of stress (RHR is 10 or more beats per minute higher than normal). HRV looks at the variability in the time between heartbeats: a more variable HRV represents a relaxed state, whereas similar times between beats represent a stressed state. While easier to implement than laboratory assessments, measurement of RHR and HRV is challenging because they must be taken immediately after waking up (Rountree, 2011). This cannot be accomplished by the coach, who is not present when the athletes wake. Therefore, to monitor fatigue, a test must be selected that is easy to implement, can be completed by the coach before training, and allows a whole team to be tested quickly. In addition, coaches should select a test that is valid and reliable.

Measurement Validity and Reliability

To measure components of athletic performance, a coach must select an appropriate test. An appropriate test is one that exhibits strong reliability and validity (Figure 4). When selecting a test, coaches must understand that there is always an amount of error or variance associated with the test. There are two types of error when assessing a test: systematic and random (Brendersen, 2011). A systematic error appears due to poor calibration or lack of attention to detail in administering testing protocols (Brendersen, 2011). These errors are easier to predict and control during testing. Random error is more unpredictable and occurs due to natural variation with every test trial (Atkinson & Nevill, 1997; Brendersen, 2011). Once a coach understands error and the expected variation within a test, they can make an informed decision on the selection of the test to use with their athletes.

Figure 4. Picture representing the interaction of reliability and validity in tests. Adapted from "How to Design and Evaluate Research in Education," by J. R. Fraenkel, N. E. Wallen, and H. H. Hyun, 2015, fourth edition, Boston, MA. Copyright 2014 by McGraw-Hill. Reprinted with permission.

Validity

A test is considered valid when it correctly measures what it intends to measure (Fraenkel et al., 2015; Harman, 2008; Morrow et al., 2011; Reiman & Manske, 2009). Tests are also considered valid when they provide useful and meaningful results for the coach (Fraenkel et al., 2015). To assess the validity of a test, there are three aspects that should be measured: criterion, construct, and ecological validity (Castagna et al., 2018; Currell & Jeukendrup, 2013; Fraenkel et al., 2015; Harman, 2008; Morrow et al., 2011).

Criterion validity. Criterion validity objectively compares the results of a test to an established "gold standard" or criterion measure (Currell & Jeukendrup, 2013). Some lab-based tests are proven as the "gold standard" but are not always practical for use with large groups of athletes (Currell & Jeukendrup, 2013; Fraenkel et al., 2015; Harman, 2008; Morrow et al., 2011). Coaches, therefore, use more convenient field-based tests to assess their teams. To ensure these portable tests are usable and valid, criterion validity assessments are completed comparing field- to lab-based tests. If available, the athlete's measurements are taken by both tests at the same time (Currell & Jeukendrup, 2013; Fraenkel et al., 2015; Harman, 2008; Morrow et al., 2011). Criterion validity can assess different aspects of validity in a test, with Hopkins (2000) claiming the most important assessment is concurrent validity. Concurrent validity compares the results of two or more tests taken at the same time, with one of the tests being the "gold standard" (Currell & Jeukendrup, 2013; Fraenkel et al., 2015; Harman, 2008; Morrow et al., 2011). The similarity of results between the field-based test and the criterion measure identifies the validity of the field-based test.

Construct validity. Construct validity is the degree to which a test is able to correctly assess the hypothetical construct (Currell & Jeukendrup, 2013). Every test looks to measure a hypothetical construct; the better a test is at identifying the construct, the stronger the validity of the test (Currell & Jeukendrup, 2013; Fraenkel et al., 2015; Harman, 2008). Currell and Jeukendrup (2013) explain that the results of a test with strong construct validity can differentiate between the performance capabilities of a professional and an amateur cyclist. If a coach selects a test with poor construct validity and relies on it for player selection, the coach may not be selecting the best of the group. To successfully test current or potential players' skills and abilities, a test requires strong construct validity.

Ecological validity. Ecological validity is the transferability of a test's results to on-field or on-court performance (Bradshaw et al., 2007). Ensuring the ecological validity of a test allows a coach to identify the extent to which the results will transfer to on-field performance. Castagna and colleagues (2018) conducted a study to assess the ecological validity of the Yo-Yo intermittent recovery test level one (YYIRT1) against the intensity of 9 versus 9 small-sided games in young male soccer players (age: 11.1 ± 0.9 years). The authors reported large associations between the YYIRT1 and the distance covered within match play at speeds greater than 18 km.h-1 (r = .65), and between the YYIRT1 and high-intensity distance at a metabolic power greater than or equal to 20 W.kg-1 (r = .68). A very large association was also calculated for high-intensity activity (r = .73) between the YYIRT1 and match play (Castagna et al., 2018). These results show that the YYIRT1 exhibits strong ecological validity in soccer athletes.

Aside from measuring movement patterns and the energy systems of the athlete, the coach must also be aware of the economics of the test. While some tests are high in validity, it may not be possible for the coach to implement the test due to time, finances, and execution requirements. For example, the "gold standard" test of aerobic capacity is measuring maximal oxygen uptake (VO2 max) on a treadmill (Martínez-Lagunas & Hartmann, 2014). Although high in validity, the test requires expensive equipment and requires that the tester possess particular technical expertise. If a coach does not have the necessary equipment and skill to complete the test, it loses its validity. Coaches can use valid field-based tests, such as the YYIRT1, in place of testing on the treadmill. Although not as valid as the VO2 max test, the YYIRT1 is a suitable replacement for assessing soccer-specific endurance ability (Castagna et al., 2006). The YYIRT1 allows coaches to complete team testing much more quickly and is much cheaper to implement than the VO2 max test.


Reliability

Reliability represents the consistency of measurement of the test, or the absence of error due to poor measurement procedures (Atkinson & Nevill, 1997). A test is considered to have strong reliability when it consistently produces similar results over multiple trials (Fraenkel et al., 2015; Harman, 2008; Hopkins, 2000; Hopkins et al., 2001; Morrow et al., 2011; Reiman & Manske, 2009). Currell and Jeukendrup (2013) explain that reliability is important to provide information on the variation involved with the test. By understanding the level of variation that is expected within each test, the coach can identify whether results are due to error in the test or whether a change has occurred within their athletes. When measuring reliability, different factors can affect variability. The main factors coaches must be aware of are the effects of the circadian rhythm (Sedliak et al., 2008), biological variability (Bradshaw et al., 2007), and the population being studied (Doré et al., 2005).

Circadian rhythm. The body's biological processes vary throughout the day, and this variation has been found to impact neuromuscular performance within the same day. Sedliak and colleagues (2008) examined variation in power output between two consecutive days at different times of day (7:00 am, 12:00 pm, 5:00 pm, 8:30 pm). Participants completed three jumps using 60% of their one-repetition maximum back squat weight. The authors reported no significant difference in power output between the two consecutive days at the same time of day. However, power output was significantly lower (p < .05) at 7:00 am of day 1 (637.7 ± 124.2 W) than at the remaining test times of day 1 (means = 667.1 – 673.7 W). Power output on day 2 was significantly lower (p < .05) at 7:00 am (mean = 649.4 ± 128.4 W) than at 12:00 pm (mean = 681.3 ± 126.2 W) and 8:30 pm (mean = 676.9 ± 115.4 W) (Sedliak et al., 2008). The work of Sedliak and colleagues (2008) has been supported by Taylor and colleagues (2010), who found power produced during a loaded jump squat was larger in the afternoon than in the morning (morning mean = 5457 ± 453 W, afternoon mean = 5719 ± 424 W). Sedliak and colleagues (2008) showed that the largest power output occurred during the 12:00 pm testing sessions relative to the other times tested. Castaingts, Martin, Hoecke, and Pérot (2004) hypothesize the improvement is due to an increase in neuronal activity and enhancement of the contractile properties of muscles. Other studies support that power output improves due to an enhanced muscle contraction response, while neuronal firing does not change throughout the day (Guette et al., 2005; Martin et al., 1999; Nicolas et al., 2005). Although scientists are unclear as to the reason for intra-day changes in strength and power, coaches should complete testing at the same time of day every time to maintain high reliability.

Biological variability. Biological variability plays a very impactful role in performance testing. Hopkins, Hawley, and Burke (1999) describe how sprinting requires high neuromuscular firing rates as well as efficient coordination of muscle contraction. Due to the high degree of coordination and the timing of events that must occur simultaneously, it is very unlikely that performance will be the same every time (Hopkins et al., 1999). Bradshaw and colleagues (2007) considered the biological variation seen in different aspects of sprint performance. The biological variation was represented as the biological coefficient of variation (CV), which was calculated by finding the standard error of measurement (SEM) and subtracting it from the total CV. The biological CV seen in participants' best 10 m sprint ranged from .12 to .97% (Bradshaw et al., 2007). While the percentage of variation due to biological variability may seem low, the effect it has on performance may be the difference between a first-place finish and placing off the podium. Since one can expect a range of values instead of the same score every time, the expected range must be established. With the established range, coaches can then correctly identify whether an athlete's test result reflects a change in performance or just the variability of the test. Therefore, coaches must account for biological variability during testing to assess the true capability of the athletes' performance.
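Written out as a formula, the decomposition described by Bradshaw and colleagues (2007) amounts to the following (assuming the SEM is expressed as a percentage of the mean so that the two terms share units):

    \mathrm{CV_{biological}} = \mathrm{CV_{total}} - \mathrm{SEM_{\%}}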

Population variability. When coaches select a test, they must be aware of the demographics of the athletes. The variation of results in one population does not guarantee the same variation in a different population. A study by Ehlert and colleagues (2019) observed the variability between male and female goalkeepers in a goalkeeper-specific YYIRT1 test. Comparing the results of two tests, the researchers presented the variability for both genders in the form of a CV. The CV represents the population's standard deviation relative to the calculated mean, capturing the variation found within the population. The CV is measured as a percentage, with smaller variation represented by a value closer to 0 (Hopkins et al., 2001). Ehlert and colleagues (2019) showed males (CV = 5.82%) had less variability than females (CV = 9.6%) in scores of the YYIRT1 test. If the researchers had only tested female goalkeepers and applied the results to the male goalkeepers, an inappropriate assumption would have been made about the change or lack of change in the athletes' abilities. The results showing differences in variability between genders are backed by Ribeiro and colleagues (2014). The authors assessed the reliability of a one-repetition maximum bench press across two different sessions. It was observed that females (CV = 5.6%) had better reliability than males (CV = 6.5%) in the bench press (Ribeiro et al., 2014). The data from these studies show that the amount of variability during testing changes based on the population being measured. Therefore, for a coach to evaluate change in a group of athletes, the expected variability for that population must be established.

Measuring reliability. To assess the reliability of a test, multiple trials must be completed and the scores compared. Hopkins (2000) describes comparisons of repeated measurements as retest correlation. Coaches can calculate the intraclass correlation coefficient (ICC) to measure retest correlation, which assesses the relative reliability of the test. ICC values range from 0 to 1, with similar results and less variation represented by a value closer to 1 (Hopkins, 2000; W. J. Vincent & Weir, 2012). A limitation of the ICC is that the values produced only allow comparison of reliability results within that study (W. J. Vincent & Weir, 2012; Weir, 2005). For the reliability and variability of a test to be compared across different studies, the SEM must be found.

The SEM of a test estimates the absolute reliability of the test and provides the range of results that should be expected to occur due to error (Fraenkel et al., 2015; Morrow et al., 2011; W. J. Vincent & Weir, 2012). Although the SEM can be calculated as its own variable, it can also be presented as the percent error expressed as a CV (Hopkins et al., 2001). The CV represents the relative standard deviation of the entire sample, with less deviation from the sample mean represented by a percentage closer to 0. Once the expected variation of a test is identified, the coach can accurately assess the performance of their athletes and compare results to previous tests. If the new results fall within the expected variation, the coach cannot differentiate whether the change in score is due to test error or some extrinsic factor. If the scores fall outside the range of expected variation, the coach can safely suggest a change has occurred and can begin to interpret the reasons for the changes seen.
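To make these computations concrete, the following is a minimal sketch in Python (not part of the thesis's analysis) of how the retest correlation, SEM, and CV might be computed from test-retest data. It assumes a two-way random-effects, absolute-agreement ICC(2,1); the thesis does not specify which ICC form was used, and the jump heights below are hypothetical.

    import numpy as np

    def icc_2_1(x):
        # ICC(2,1): two-way random-effects, absolute-agreement intraclass
        # correlation from an (n subjects x k trials) score matrix,
        # computed via ANOVA mean squares.
        x = np.asarray(x, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
        ms_c = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # trials
        resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
        ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))             # error
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    def sem_and_cv(x):
        # SEM = SD * sqrt(1 - ICC); the CV expresses the SEM as a
        # percentage of the grand mean (the "percent error" described above).
        x = np.asarray(x, dtype=float)
        sem = x.std(ddof=1) * np.sqrt(1 - icc_2_1(x))
        return sem, 100 * sem / x.mean()

    # Hypothetical jump heights (cm) for five athletes on two test days
    scores = [[30.1, 31.0], [25.4, 24.8], [33.2, 33.9], [28.0, 27.1], [35.6, 36.2]]
    print(icc_2_1(scores), sem_and_cv(scores))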

When selecting a test, coaches must identify an appropriate test to assess their athletes. Therefore, a test must have strong validity and reliability and must measure the desired sport characteristic being assessed.

Assessing Power

A fundamental characteristic seen across all sports is the ability to generate power; thus, assessing power is critical for the evaluation of athletic capacity and performance (Brown & Weir, 2001; Haff & Nimphius, 2012). Power is the measurement of the rate of change of work. It is calculated as the scalar product of force and velocity (Newton & Kraemer, 1994). High power output requires one to produce a large amount of force in a short amount of time. At slow velocities, even with high force, power is suboptimal (Figure 5). Conversely, at fast velocities, force output is low and power is again suboptimal. Many sports require athletes to produce a large amount of force very quickly to be successful in activities such as sprint acceleration and jumping (Haff & Nimphius, 2012). Athletes who produce the most power have a better chance of outrunning or outjumping their competition to the set point or ball/puck, giving them a greater advantage to win (Alemdaroğlu, 2012; Comfort et al., 2013; Cronin & Hansen, 2005). Power can be assessed using a vertical jump, standing long jump, and any of the Olympic lifts: clean, snatch, and jerk (McGuigan, 2016a).
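In symbols, the definitions above can be written as follows (standard mechanics notation; the thesis presents this relationship only in words):

    P = \frac{dW}{dt} = \vec{F} \cdot \vec{v}

where W is work, t is time, F is force, and v is velocity.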


Figure 5. Force-velocity curve (dotted and dashed line) with power curve (dotted line). Adapted from "Training Principles for Power," by G. G. Haff and S. Nimphius, 2012, Strength and Conditioning Journal, 34(6), p. 7. Copyright 2012 by the National Strength and Conditioning Association. Reprinted with permission.

The CMJ is a test that is quick and easy to complete with an entire team, requiring minimal technical expertise. Due to the coordination and force production of the legs, the CMJ can be used to measure lower body power production (Bui et al., 2015; Klavora, 2000; López-Segovia et al., 2011; Risso et al., 2017; Rodríguez-Rosell et al., 2016; Ziv & Lidor, 2010). The CMJ-WOS is a more valid measure of lower body power because of leg isolation (Heishman et al., 2018), which is accomplished by requiring that jumps be completed with hands placed on the hips. Coaches select the CMJ-WAS, though, due to the sport specificity of the test (Heishman et al., 2018). In sports such as basketball and volleyball, testing CMJ-WAS has stronger ecological validity for assessing athletic performance and readiness for competition (Sattler et al., 2012; Ziv & Lidor, 2010). In two studies that compared JH between CMJ-WOS and CMJ-WAS, a difference of 4 – 7.5 cm in height was measured, with CMJ-WAS producing the higher value.

Athletes' CMJ performance can be measured using a variety of technologies and equipment, including a Vertec, the Abalakov belt test, switch mats, and force plates (Klavora, 2000). The most common devices used are force plates (Klavora, 2000; Markovic et al., 2004; Slinde et al., 2008). When measuring VJ performance, force plates have been recognized as the "gold standard" (Bui et al., 2015; Kenny et al., 2012), providing the opportunity to measure multiple variables of the athlete's jump (Aragón-Vargas, 2000; Cormack, Newton, McGuigan, et al., 2008; Floria et al., 2014; Klavora, 2000; Moir et al., 2005; Nibali et al., 2015). Force plates allow coaches to calculate JH by measuring flight time (FT) or impulse (Cormack et al., 2008; Heishman et al., 2018), using the following equation (Klavora, 2000; Sattler et al., 2012):

    \text{Jump Height} = \frac{FT^{2} \times g}{8}

where g represents acceleration due to gravity (i.e., 9.81 m.s-2). Force plates also allow coaches to measure the PF of the jump, which is the maximum amount of force produced during the concentric phase (Cormack, Newton, McGuigan, & Doyle, 2008; Heishman et al., 2018; Taylor et al., 2010). Along with maximum force, the athlete's impulse (IMP) can be found, which is the product of average force and time (Heishman et al., 2018; Taylor et al., 2010). JH can also be found from impulse using the impulse-momentum theorem, which has been deemed the "gold standard" for measuring JH in previous studies (Heishman et al., 2018). Using force plates also provides the ability to measure the rate of force development (RFD), which is the rate at which force is developed (Laffaye et al., 2014). While extremely effective, lab-based force plates are not used often by coaches due to the cost of the device and the technical expertise required to operate them. Therefore, coaches use switch mats to calculate JH by measuring the FT of the athlete (Klavora, 2000). Due to the reliance on switch mats to measure CMJ performance, coaches need to ensure that these are an appropriate device to use.
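As a concrete illustration, the two routes to JH described above, together with the RSImod ratio defined in Chapter 1, can be coded in a few lines. This is a minimal sketch rather than the thesis's actual analysis pipeline, and the flight time, net impulse, and body mass values are hypothetical.

    G = 9.81  # acceleration due to gravity (m.s-2)

    def jh_from_flight_time(ft):
        # Jump height (m) from flight time (s): JH = (FT^2 x g) / 8
        return (ft ** 2) * G / 8

    def jh_from_impulse(net_impulse, mass):
        # Jump height (m) via the impulse-momentum theorem:
        # take-off velocity v = J / m, then JH = v^2 / (2g)
        v = net_impulse / mass
        return v ** 2 / (2 * G)

    def rsi_mod(ft, mt):
        # Modified reactive strength index as defined in this thesis: FT / MT
        return ft / mt

    print(jh_from_flight_time(0.55))     # ~0.371 m for a 0.55 s flight
    print(jh_from_impulse(210.0, 75.0))  # ~0.400 m for a 75 kg athlete
    print(rsi_mod(0.55, 0.80))           # ~0.69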

To ensure the accuracy of switch mats, studies have measured their criterion validity for measuring JH. Kenny and colleagues (2012) examined the criterion validity of switch mats compared to force plates for measuring JH in the CMJ-WOS. The authors showed a strong correlation (r = .996) between JH records achieved from the switch mat and from force plates (Kenny et al., 2012). Similarly, Buckthorpe, Morris, and Folland (2012) showed strong criterion validity (r = .90) for JH measured under CMJ-WAS conditions using switch mats when compared to force plates. The support from these studies shows that switch mats are an acceptable device to use in place of force plates when measuring CMJ height, deeming them appropriate to measure the variability of each athlete the coach is testing.

Jump Performance to Monitor Fatigue

When the lower body reaches a fatigued state, one skill that is seen to be affected is jump performance. When jumping, the body relies on the activation and coordination of the neuromuscular system (Cormack et al., 2008; Markovic et al., 2004; Sedliak et al., 2008; Taylor et al., 2010). In a fatigued state, the neuromuscular system has been observed to perform poorly, which can be seen in the decreased jump heights of athletes (Andersson et al., 2008). Ronglan and colleagues (2006) assessed changes in CMJ performance of elite players before and after a training session. The study reported a significant decrease (6%, p < .05) in JH following the training session (Ronglan et al., 2006). Andersson and colleagues (2008) and Ronglan and colleagues (2006) both showed that JH decreased significantly when athletes completed high-intensity sessions on consecutive days, suggesting the need for longer recovery times before training again.

While the data provided by Andersson and colleagues (2008) and Ronglan and colleagues (2006) support the use of JH to monitor fatigue, more recent research suggests that JH may not be an appropriate indicator of fatigue (Cormack et al., 2008; Talpey et al., 2019). Cormack and colleagues (2008) showed no change in CMJ height in 22 Australian Rules football players when tested pre- and post-match, while data showed that mean power and mean force decreased substantially post-match (Cormack, Newton, & McGuigan, 2008). Similarly, Talpey and colleagues (2019) studied the training readiness of Yale's men's program and found a significant decrease in relative peak force (PF) (8.6%) from the start of the season to the end of the season. Along with relative PF, the jump variables of relative peak power (PP), average velocity, and total IMP decreased over the course of the season. JH was measured over the same period and was found to have increased by the end of the season. These results suggest that the athletes changed their jump kinematics in order to jump higher at the end of the season, although they were considered to have neuromuscular fatigue. These studies propose the idea that in a fatigued state, an athlete can achieve the same CMJ height by altering the kinematics of their jumps. Therefore, to properly assess fatigue, coaches should measure different variables of the CMJ and not limit themselves to only JH. To do so effectively, coaches need to accurately measure the jump performance of their athletes in order to make appropriate assessments of player readiness.

To measure the different jump variables of an athlete, force plates are required to measure the force-time curve of the CMJ. The force-time curve described by McMahon, Suchomel, Lake, and Comfort (2018) can be divided into six phases (Figure 6). The first phase, weighing, is used to establish the starting force and to ensure the athlete begins from a standstill. Phase two is the unweighting phase, which begins with the initiation of the CMJ and finishes when the plates measure the athlete's return to the starting force. The third phase, braking, occurs as the force rises above the starting force due to the immediate deceleration of the athlete's center of mass from the unweighting phase; this phase makes up the eccentric portion of the stretch-shorten cycle. The fourth phase is the propulsion, or concentric, phase of the CMJ, where the athlete applies force to reaccelerate the body, extending the knees, hips, and ankles to jump off the force plates. The closer the propulsion phase is to the braking phase, the less energy is lost during the amortization of the jump. This is important to identify because monitoring it during future testing allows coaches to understand the state of their athletes. Talpey and colleagues (2019) describe that in a fatigued state, a longer amount of time is taken to complete the jump, allowing more force to be created. This allows the athlete to reach the same JH that they would have reached in a rested state. The fifth phase, flight, is the period when the athlete is no longer on the force plate and the plate measures a force of zero. In the last phase, landing, the plates measure the forces applied to stop the downward velocity of the athlete (McMahon et al., 2018).
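To show how these phase definitions translate into practice, the sketch below segments a sampled force-time trace into the unweighting, braking, and propulsion phases by integrating the net force to obtain center-of-mass velocity. This is an illustrative sketch rather than a published algorithm: the 5-SD onset rule, the 10 N take-off threshold, and the 0.5 s weighing window are assumptions chosen for demonstration.

    import numpy as np

    def segment_cmj_phases(force, mass, fs=1000.0):
        # Rough segmentation of a CMJ force-time trace (newtons) into the
        # middle phases described by McMahon et al. (2018). Assumes the
        # athlete stands still for at least the first 0.5 s (weighing phase).
        g = 9.81
        f = np.asarray(force, dtype=float)
        bw = mass * g
        vel = np.cumsum((f - bw) / mass) / fs        # integrate acceleration
        noise = f[: int(0.5 * fs)].std()             # noise during weighing
        onset = np.argmax(f < bw - 5 * noise)        # unweighting begins
        takeoff = np.argmax(f < 10.0)                # force ~0 N: flight begins
        v_min = onset + np.argmin(vel[onset:takeoff])        # end of unweighting
        v_zero = v_min + np.argmax(vel[v_min:takeoff] >= 0)  # end of braking
        return {"unweighting": (onset, v_min),
                "braking": (v_min, v_zero),
                "propulsion": (v_zero, takeoff)}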


Figure 6. Force-time curve of CMJ measured on force plates, represented by the solid red line.

Reliability of CMJ Testing

Considering the frequent use of the CMJ to assess athletes, multiple studies have examined the reliability of the test. Heishman and colleagues (2018) studied the test-retest reliability of the CMJ-WOS in 22 (male n = 14; female n = 8) NCAA Division I basketball players and found an ICC of .93 when measuring JH between days. The smallest worthwhile change (SWC) was also provided, .018 cm. The SWC is similar to the SEM in that it represents the amount of change required for the coach to safely assume the change in results is not due to test error. Kenny and colleagues (2012) support this, reporting an ICC value of .99 for JH when measuring ten university-aged males with varied participation in sports and training. A number of studies agree that the CMJ-WOS is a reliable test to measure lower body power (Markovic et al., 2004; Moir et al., 2004, 2008; Nuzzo et al., 2011; Rodríguez-Rosell et al., 2016; Slinde et al., 2008). Comparing the two studies, however, Kenny and colleagues (2012) calculated larger variation (CV = 28.1%) than Heishman and colleagues (2018) (CV = 5.1%), suggesting more research on test variability is required to establish the SEM. Studies have also found the CMJ-WAS test to be a reliable tool for measuring jump performance. In the same study by Heishman and colleagues (2018), participants also completed the CMJ-WAS, producing an ICC value of .93 and an SWC of .018 m (Heishman et al., 2018). These findings are reinforced by previous studies that found strong reliability in CMJ-WAS testing when measuring JH (Sattler et al., 2012; Slinde et al., 2008). Sattler and colleagues (2012) measured JH in 93 male volleyball players and observed less variability (CV = 2.6%) than Heishman and colleagues (2018) (CV = 5.1%). The results suggest the variability of the CMJ-WAS also must be established for each population. While both types of CMJ are reliable tests, coaches must be aware of the variability in testing and the factors that affect it to appropriately assess their team.

Coaches across all sports and levels may implement CMJ testing to assess players. Due to its wide use, different populations have been observed to produce different reliability results for the test. As shown by Harrison and Gaffney (2001), age may impact the reliability of CMJ testing. The authors measured take-off velocity in 22 children and 20 adults and used the data to calculate the variation in children (CV = 187.8%) and adults (CV = 88.3%). The data showed that children exhibit greater variability in jump performance than adults. The findings of Harrison and Gaffney (2001) are supported by Floria and colleagues (2014), who studied the amount of force produced by female adults and children in the CMJ-WOS. The researchers found children (CV = 8.8 ± 4.1%) exhibited greater variability than adults (CV = 6.4 ± 3.7%); this is also supported by Raffalt, Alkjær, and Simonsen (2017), who showed children (CV = 10.10 ± 1.4%) had greater (p < .001) variability in JH than adults (CV = 5.35 ± 0.75%). The same study measured the variability in JH when completing a CMJ-WAS, finding children (CV = 11.5 ± 4.9%) had significantly larger (p < .05) variation in results than adults (CV = 6.9 ± 4.4%) (Floria et al., 2014). Research has also shown that an athlete's sport can affect the reliability of the test. Rodríguez-Rosell and colleagues (2016) studied the reliability of the CMJ-WOS in 127 male soccer players and 59 male basketball players across three different age groups: under-15 (u15), under-18 (u18), and adults. Comparing the data between sports at the same age, JH reliability and/or variability was different for u15 (Soccer: ICC = .990, CV = 2.60%; Basketball: ICC = .992, CV = 2.30%), u18 (Soccer: ICC = .989, CV = 2.30%; Basketball: ICC = .992, CV = 2.70%), and adults (Soccer: ICC = .995, CV = 1.61%; Basketball: ICC = .995, CV = 2.15%) (Rodríguez-Rosell et al., 2016).

When comparing genders in the CMJ, researchers have observed conflicting reliability results. Nuzzo and colleagues (2011) examined the reliability of CMJ-WOS height in university-aged males (n = 40) and females (n = 39). The data showed that females had better reliability (ICC = 0.92, SEM = 1.7, CV = 4.4) than males (ICC = 0.82, SEM = 3.6, CV = 3.3) (Nuzzo et al., 2011). Moir and colleagues (2008), however, found the opposite to be true for the reliability of JH when measuring CMJ-WOS in thirty male and thirty-five female football, volleyball, basketball, and other athletes (Male: ICC = .87 – .96, CV = 4.0 – 5.6; Female: ICC = .87 – .94, CV = 4.4 – 6.6). The data observed by Moir and colleagues (2008) are backed by the same study by Harrison and Gaffney (2001). The authors calculated the variation in take-off velocity in ten university-aged males (CV = 50%) and twelve university-aged females (CV = 121.6%). For the CMJ-WAS, research on variability between genders is more limited. Of the research reviewed, Slinde and colleagues (2008) were the only ones to compare reliability in seventeen males (ICC = .88) and thirteen females (ICC = .82) of varying ages.

Reviewing previous reliability data, all studies completed testing in person. When working with collegiate athletes, there are periods throughout the year when athletes are not on campus to work with their coaches. It would therefore be beneficial to be able to assess athletes during breaks from the semester. The phone application My Jump 2 (Version 1.0, Carlos Balsalobre-Fernández) provides the opportunity for coaches to complete remote assessments. My Jump 2 has been shown to have strong intra- and interday reliability; however, none of these studies used the phone application in a remote environment. Previous studies have examined the reliability of the phone application for intraday (Balsalobre-Fernández, Glaister, & Lockey, 2015; Cruvinel-Cabral et al., 2018; Gallardo-Fuentes et al., 2016), interday (Gallardo-Fuentes et al., 2016), and interrater (Balsalobre-Fernández et al., 2015; Coswig et al., 2019) reliability. For all three types of reliability measured, the application was found to be a reliable measure of jump performance. The only study to measure interday reliability was Gallardo-Fuentes and colleagues (2016), who assessed the CMJ-WOS condition in male (n = 14) and female (n = 7) track athletes separately and together. Pearson's product moment correlation coefficient (r) was calculated for the male athletes (r = .93), the female athletes (r = .86), and the whole group (r = .95) (Gallardo-Fuentes et al., 2016). One important acknowledgment from this study is that testing was completed with the rater and the participant together, providing the opportunity for the rater to give instruction and immediate feedback, ensuring proper jump technique every time. Therefore, research is required on the reliability of the phone application when testing is completed remotely with the coach and athlete separated.

Summary

Coaches rely on performance test results to make accurate and informed decisions about both players and programs. The information gathered is especially important if used to monitor fatigue: when monitoring lower body fatigue, power output is negatively affected by fatigue (Cormack, Newton, & McGuigan, 2008). The CMJ has been shown to be an appropriate test to assess lower body fatigue due to its quickness and the minimal technical expertise required (Andersson et al., 2008; Cormack, Newton, & McGuigan, 2008; Ronglan et al., 2006). To accurately assess athletes, the expected range of results in a single test session should be established. Different factors affect the reliability of the test, particularly between populations. Reviewing the literature, studies have shown that age, gender, and sport all appear to impact the reliability of the CMJ test (Floria et al., 2014; Harrison & Gaffney, 2001; Moir et al., 2008; Nuzzo et al., 2011; Raffalt et al., 2017; Rodríguez-Rosell et al., 2016; Slinde et al., 2008). Reviewing the data of Rodríguez-Rosell and colleagues (2016), reliability appears to be impacted by age when activity level and gender are similar across the sample. The authors also show that the activity level of the athletes affects reliability when gender and age remain consistent (Rodríguez-Rosell et al., 2016). In the studies reviewed comparing gender, the sample age or levels of physical activity were not consistent throughout, which could have influenced the results of CMJ reliability (Harrison & Gaffney, 2001; Moir et al., 2008; Nuzzo et al., 2011; Slinde et al., 2008). To support gender as a factor affecting reliability, further research is required in which sample age and physical activity levels remain consistent.


From the data reviewed, CMJ reliability with and without the use of an arm swing does not appear to have been established for male and female Division III athletes. Matching the age and physical activity levels of both groups will help identify the impact gender has on the reliability of CMJ testing and will further CMJ research in collegiate athletes. The study will also identify whether coaches need to establish the reliability of the sample they work with before making assessments. Therefore, the purpose of the current study is to compare the difference in reliability between remotely administered CMJ-WOS and CMJ-WAS in university-aged males and females competing in Division III athletics. It is hypothesized that the CMJ-WOS condition will exhibit less variability than the CMJ-WAS when comparing jump conditions, and that, when comparing intersession reliability for each jump type, both conditions will exhibit strong reliability.

Chapter 3

METHODS

This chapter describes the methods of data collection and analysis for examining

MT, FT, JH, and modified reactive strength index (RSImod) for CMJ-WOS and CMJ-

WAS. The methods section identifies participant characteristics along with the experimental procedures and statistical analyses employed.

Participant Characteristics

Three male (age = 18.33 ± .58 years; height = 1.79 ± .09 m; mass = 78.32 ± 11.71 kg) and five female (age = 19.60 ± 1.52 years; height = 1.70 ± .05 m; mass = 67.49 ± 4.84 kg) members of Ithaca College athletics were recruited to participate. The participants were members of the men's and women's soccer teams, among others, as well as members of strength and conditioning. Participants were excluded if a pre-existing injury had caused them to miss consecutive days of training in the prior three months. Participants provided informed consent (Appendix A) and completed a Health Screening Questionnaire (Appendix B) prior to participation.

A power analysis conducted in R (v3.6.2; R Foundation for Statistical Computing, Austria) indicated that a sample size of 22 per group was required for ICC calculations, using an ICC value of .83 and a power level of .8 with alpha set at .05. The ICC value was chosen as the lower bound of the 95% confidence interval (CI) of the average ICC values observed in previous studies (Heishman et al., 2018; Kenny et al., 2012; Markovic et al., 2004; Moir et al., 2004, 2008; Nuzzo et al., 2011; Rodríguez-Rosell et al., 2016; Slinde et al., 2008). Due to the early termination of in-person classes because of the coronavirus pandemic (COVID-19), participants participated remotely. After reaching out to all sports teams, only eight participants agreed to take part in the study.

Experimental Procedures

Participants completed two testing sessions (Days 1 and 2), with three to seven days separating the sessions. The procedure was the same on both days.

Following a standardized warm-up, participants completed six CMJs in randomized order: three without arm swing (CMJ-WOS) and three with arm swing (CMJ-WAS). For all six jumps, the procedures of Heishman et al. (2018) were followed, with participants beginning each jump standing tall with feet shoulder-width apart. Instructions on how to complete both types of jumps were provided via videos uploaded to YouTube (Google, LLC, USA) for participants to view. Participants were instructed to initiate the jump with a quick countermovement to a self-selected depth, followed by a maximum-effort jump to reach maximal height. Participants were instructed to maintain extension during the flight phase of the jump and to land in an athletic position, staying in place. For the CMJ-WOS, participants were required to perform the CMJ with arms akimbo. For the CMJ-WAS, participants were instructed to use an arm swing to their comfort. Participants were asked to repeat trials if they flexed their knees or hips in the air or did not stick the landing. Participants were also instructed to repeat trials if the hands came off the hips at any point during the CMJ-WOS trials. Participants were given three minutes of recovery between repetitions to limit residual fatigue (Baechle et al., 2008).


Measures

All jumps were recorded on participants' personal phones and shared with the research team through Google Drive (Google, LLC, USA). To record videos, participants placed their phone perpendicular to the ground at a distance of 1.5 m from their toes. Using the phone's rear-facing camera, videos were recorded in the frontal plane with the phone positioned in landscape orientation. MT and FT were measured for both jump conditions using the mobile application My Jump 2 on a Samsung Galaxy S7 (Samsung Electronics, South Korea). The app was used to identify the moment the hips begin to bend to start the countermovement (initiation), the moment both feet leave the ground (take-off), and the moment contact is re-established with the ground (landing) through observation of the participants' jump videos (Balsalobre-Fernández et al., 2015). The app calculates MT as the time between initiation and take-off. FT was calculated as the time between take-off and landing. JH was calculated from the following formula

(Klavora, 2000; Sattler et al., 2012):

Jump Height = (FT² × g) / 8

where g represents the acceleration due to gravity, 9.81 m·s⁻². The app calculated RSImod from MT and FT:

Modified Reactive Strength Index = FT / MT
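As a concrete illustration of these calculations, the short sketch below converts hypothetical video frame indices to MT and FT at an assumed frame rate, then applies the two formulas above; the function names and numbers are illustrative, not part of the My Jump 2 app.

```python
G = 9.81  # acceleration due to gravity (m/s^2)

def phase_times(initiation_frame: int, takeoff_frame: int,
                landing_frame: int, fps: float) -> tuple[float, float]:
    """Convert the three identified frames to movement time and flight time."""
    mt = (takeoff_frame - initiation_frame) / fps  # initiation -> take-off
    ft = (landing_frame - takeoff_frame) / fps     # take-off -> landing
    return mt, ft

def jump_height(ft: float) -> float:
    """JH = (FT^2 * g) / 8 (Klavora, 2000)."""
    return (ft ** 2) * G / 8.0

def rsi_mod(ft: float, mt: float) -> float:
    """Modified reactive strength index = FT / MT."""
    return ft / mt

# Hypothetical 240-fps recording: frame indices chosen for illustration only.
mt, ft = phase_times(0, 232, 338, fps=240.0)
print(f"MT = {mt:.3f} s, FT = {ft:.3f} s")
print(f"JH = {jump_height(ft):.3f} m, RSImod = {rsi_mod(ft, mt):.2f}")
```

With these assumed frames, MT ≈ 0.97 s, FT ≈ 0.44 s, and JH ≈ 0.24 m, which is in the range of the CMJ-WOS values reported in Table 1.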

Statistical Analysis

All quantitative measures were expressed as means and standard deviations and were assessed for normality using z-scores for skewness and kurtosis. A two-by-two repeated measures analysis of variance was conducted in JASP (Version 0.12.2, 2020) to analyze differences between CMJ-WOS and CMJ-WAS and between days for all variables measured. Post-hoc analysis, along with calculation of the two-way random effects, absolute agreement, multiple measurements intraclass correlation coefficient (ICC(2,k)) and its 95% CI, assessed the relative reliability of all variables across Day 1 and Day 2. The CV was calculated to assess absolute reliability (see equation; Atkinson & Nevill, 1998):

Coefficient of Variation = (Standard Deviation / Mean) × 100

The indices of reliability for all variables were considered strong if there was no significant difference between the two days of testing, the ICC value was .80 or higher, the ICCLower was above .60, and the CV was less than or equal to 10% (Joseph et al., 2013). Variables were considered moderate if only one of these standards was not met. If a variable met only two of the standards, the reliability index was considered weak. If one or fewer of the standards were met, the variable was assigned a poor reliability index.
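A minimal sketch of these computations is given below, assuming the two test-day scores are arranged as a subjects × days array; icc_2k, cv_percent, and reliability_index are illustrative names, the ICC follows the standard two-way random effects, absolute-agreement, average-measures form, and the classification simply counts the four standards described above (the 2 × 2 ANOVA itself would be run separately, e.g., in JASP as described).

```python
import numpy as np

def icc_2k(x: np.ndarray) -> float:
    """ICC(2,k): two-way random effects, absolute agreement, average measures.
    x has shape (subjects, days)."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # days
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

def cv_percent(x: np.ndarray) -> float:
    """Coefficient of variation: (SD / mean) x 100 (Atkinson & Nevill, 1998)."""
    return x.std(ddof=1) / x.mean() * 100.0

def reliability_index(no_day_difference: bool, icc: float,
                      icc_lower: float, cv: float) -> str:
    """Count how many of the study's four standards are met (Joseph et al., 2013)."""
    met = sum([no_day_difference, icc >= .80, icc_lower > .60, cv <= 10.0])
    return {4: "Strong", 3: "Moderate", 2: "Weak"}.get(met, "Poor")

# Hypothetical flight times (s) for 8 subjects on 2 days.
ft = np.array([[.41, .43], [.44, .40], [.47, .46], [.39, .44],
               [.50, .48], [.42, .45], [.46, .41], [.43, .47]])
print(icc_2k(ft), cv_percent(ft), reliability_index(True, .83, .17, 12.4))
```

With the illustrative inputs on the last line (no day difference, ICC = .83, ICCLower = .17, CV = 12.4%), two standards are met and the function returns "Weak," matching the FT row of Table 2.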

Chapter 4

RESULTS

This chapter details results comparing jump performance between CMJ-WOS and

CMJ-WAS. Jump performance for each jump condition on Day 1 and Day 2 is also compared. Statistical analyses of MT, FT, JH, and RSImod are presented. All variables are expressed as sample means. Raw data can be found in Appendices C and D.

Table 1 shows the mean (± SD) MT, FT, JH, and RSImod recorded under CMJ-WOS and CMJ-WAS conditions on Day 1 and Day 2. The data showed no significant interaction or main effects for MT, JH, and RSImod between jump conditions and test days. For FT, no significant interaction effect or main effect between test days was found; however, FT for the CMJ-WAS condition was significantly larger than for the CMJ-WOS condition (F(1,7) = 7.683, p < .05, ηp² = .523). Post-hoc analysis showed no significant difference between test days in either jump condition for any variable.

Table 1
Descriptive statistics for both jump types, Day 1 and Day 2 averages

                      CMJ-WOS                      CMJ-WAS
Variable      Day 1          Day 2          Day 1          Day 2
MT (s)      .97 ± .17      .92 ± .16      .94 ± .07     1.05 ± .16
FT (s)      .44 ± .05      .43 ± .06      .48 ± .07      .54 ± .16
JH (m)      .24 ± .06      .23 ± .06      .29 ± .09      .45 ± .41
RSImod      .47 ± .09      .47 ± .09      .55 ± .21      .56 ± .16

Note. Values are mean ± SD. MT = movement time. FT = flight time. JH = jump height. RSImod = modified reactive strength index.

Table 2 shows the results of the reliability analysis for the CMJ-WOS condition. All variables, excluding RSImod, showed an ICC of .80 or higher. The lower bound of the 95% CI for all variables was below .60, and all CV values were above 10%. Taken together, the MT, FT, and JH variables demonstrated weak reliability. Further, RSImod exhibited a poor reliability index (ICC ≤ .8, ICCLower ≤ .00, CV > 10%) between testing time points under CMJ-WOS conditions.

Table 2
Reliability statistics for variables measured under countermovement jump without arm-swing conditions

                        95% CI of ICC
Jump Variable   ICC     Lower     Upper      % Δ     CV (%)   Index
MT (s)         .893     .520      .978      -4.28    17.62    Weak
FT (s)         .827     .166      .965      -2.90    12.38    Weak
JH (m)         .846     .290      .969      -5.42    25.07    Weak
RSImod         .785    -.226      .958        .52    18.55    Poor

Note. MT = movement time. FT = flight time. JH = jump height. RSImod = modified reactive strength index. Δ = mean Day 2 − mean Day 1. ICC = intraclass correlation coefficient. CV = coefficient of variation.

Table 3 shows the results of the reliability analysis for the CMJ-WAS condition. All jump variables measured exhibited ICCs less than .80. Additionally, all four variables had ICCLower values below .60. As with the CMJ-WOS, the CV value for every variable was above 10%. Thus, MT, FT, JH, and RSImod all exhibited poor reliability indices between testing times under CMJ-WAS conditions.

Table 3
Reliability statistics for variables measured under countermovement jump with arm-swing conditions

                        95% CI of ICC
Jump Variable   ICC     Lower     Upper      % Δ     CV (%)   Index
MT (s)         .793     .116      .957      11.50    11.45    Poor
FT (s)         .096   -4.255      .779      13.83    22.21    Poor
JH (m)         .178   -4.762      .763      57.49    67.49    Poor
RSImod         .799    -.128      .961       1.22    32.99    Poor

Note. MT = movement time. FT = flight time. JH = jump height. RSImod = modified reactive strength index. Δ = mean Day 2 − mean Day 1. ICC = intraclass correlation coefficient. CV = coefficient of variation.

Chapter 5

DISCUSSION

The purpose of this study was to examine the reliability of variables measured under remotely administered CMJ-WOS and CMJ-WAS conditions in Division III collegiate athletes. Due to the international coronavirus (COVID-19) pandemic, the study's sample size was smaller than desired. Despite the small sample size, important conclusions can be drawn about the reliability of remote use of the My Jump 2 phone application. The two main findings of the current study are: (1) the reliability of MT, FT, JH, and RSImod was weak-to-poor under CMJ-WOS conditions, and (2) the reliability of all jump variables measured under CMJ-WAS conditions was poor.

Before comparisons can be made between the jump conditions in the current study and previous studies, it is important to acknowledge that the study's results do not support the reliability of assessing jump performance remotely. When comparing the results for CMJ-WAS and CMJ-WOS, there was no significant difference for MT, RSImod, or JH. These findings do not agree with previous studies that observed CMJ-WAS to exhibit greater MT, RSImod, and JH than CMJ-WOS (Heishman et al., 2019; Lees et al., 2004; Vaverka et al., 2016). Given the small sample size and poor reliability scores observed in the current study, caution is warranted when comparing the current study with previous work. The current study measured MT in the arm swing condition to be 0.05 s (4.7%) longer than without an arm swing. A previous study by Lees and colleagues (2004) found CMJ-WAS to take a significantly longer (p < .001) time to complete than CMJ-WOS (0.1 s; 10.4%). In that study, 20 college-aged, athletically active males were assessed using force platforms (Lees et al., 2004). The CMJ-WAS RSImod score in the current study was 0.08 (14.6%) larger than for the CMJ-WOS. Measuring the performance of male and female collegiate basketball players on force plates, a 0.12 (20.5%) increase in RSImod score was observed in the arm swing condition (Heishman et al., 2019). An important note that may explain the difference between the current study and Heishman and colleagues (2019) is that Heishman and colleagues (2019) calculated RSImod using JH and MT, whereas the My Jump 2 app calculates RSImod using MT and FT.

The calculated JH in the current study was 0.134 m (36.4%) higher when completing the CMJ-WAS condition. A smaller relative difference between the two conditions was reported previously by Vaverka and colleagues (2016), in which the arm swing condition was significantly higher than the without-arm-swing condition (p < .001). Vaverka et al. (2016) measured 18 members of an elite men's volleyball championship team. Completing measurements with force plates, the athletes on average jumped 0.143 m (27.4%) higher when completing the CMJ-WAS condition (Vaverka et al., 2016). When comparing average JH measured in both studies, Vaverka and colleagues (2016) reported higher JH values for both jump conditions (CMJ-WOS: 0.379 m vs. 0.235 m; CMJ-WAS: 0.522 m vs. 0.370 m) relative to the current study. The large difference in results is not surprising considering the participants in the current study were NCAA Division III collegiate athletes, while the participants in the previous study were elite professional athletes. Comparing the JH of Division I and Division III hockey players, Peterson, Fitzgerald, Dietz, Ziegler, Ingraham, Baker, and Snyder (2015) observed the Division I players to jump higher. The difference was attributed to a greater proportion of fast-twitch muscle fibers in the Division I players, allowing for larger force production (Peterson et al., 2015). When looking at values for collegiate athletes completing a CMJ-WOS, McGuigan (2016a) reported average JH values of 0.409 m and 0.310 m, the latter in female soccer players. Under the CMJ-WAS condition, average JH values of 0.616 m and 0.528 m were reported for male soccer players and female volleyball players, respectively.

In the current study, FT was found to be significantly longer, by 0.08 s (15.5%), for CMJ-WAS than for CMJ-WOS. Similar results were observed in a previous study, which reported FT to be longer (p < .01) when completing the CMJ-WAS, by 0.04 s (6.6%) (Mosier, Fry, & Lane, 2019). Although the current study saw a greater difference in FT, the previous study reported a larger average FT for both conditions (CMJ-WOS: 0.57 s vs. 0.44 s; CMJ-WAS: 0.61 s vs. 0.51 s). A potential reason for the difference is that the study by Mosier and colleagues (2019) measured only male participants, while the current study included both males and females. Laffaye and colleagues (2014) explain that females have a decreased JH due to differences in muscle dimensions and architecture. Males have been observed to have greater pennation angles and fascicle lengths in their lower bodies, promoting larger force production (Laffaye et al., 2014). The larger force production results in a longer FT and consequently a larger JH. Therefore, the inclusion of both males and females in the current study may explain the lower FT values relative to previous work. Although FT was the only variable to differ significantly, all jump variables were higher in the arm swing condition than without an arm swing. The difference in variables can be attributed to direct and indirect effects of the arm swing.

As the arms swing upward during the concentric phase of the CMJ, an opposite force is applied back down through the body, decreasing the speed at which the lower body joints reach full extension (Harman et al., 1990). The decrease in speed results in a longer MT to complete the jump. This subsequently increases tension development time, which enhances the force output of the muscles involved during the concentric phase and increases both FT and JH (Harman et al., 1990; Lees et al., 2004; Mosier et al., 2019). In addition, the use of the arms increases the build-up of kinetic energy, which is released at the point of take-off with the arms driving the rest of the body upwards (Lees et al., 2004; Mosier et al., 2019). Both the direct and indirect effects of the arm swing explain the increases in MT, FT, and JH in the CMJ-WAS condition. The increase in RSImod is relative to the change in those variables: an increase in this score is due to an increase in FT or a decrease in MT. Although MT and FT both increased in the arm swing condition, FT increased to a greater degree, which caused the increase in RSImod.

For the assessment of reliability, Joseph et al. (2013) suggest interday ICC values greater than .80 as adequate. When completing the CMJ-WOS, the interday ICC scores for MT, FT, and JH were all above .80, but were accompanied by large CIs. A study by Choukou and colleagues (2014) measured college-aged sportsmen with accelerometers and force plates and calculated ICC values for MT similar to the current study. The ICC values with 95% CIs reported by Choukou and colleagues (2014) were .88 – .93, which the authors deemed "good" based on the criterion of an ICC value of .8 or larger (Choukou et al., 2014). Reviewing the literature for FT and JH, Heishman and colleagues (2018) used force plates to measure the jump performance of male and female basketball players similar in age to the current sample. The ICC values reported for FT and JH were .957 and .958, respectively. RSImod was the only variable in the current study with an ICC value below .80. A previous study by McMahon and colleagues (2018) reported ICC scores >.90 when observing college-aged individuals from an exercise science department on force plates. The ICC values calculated by McMahon and colleagues (2018) do not support the reliability of RSImod reported in the current study.

In the present study, all four variables produced ICCLower values below .60. In the study by Choukou and colleagues (2014), ICCLower values were reported for MT (.88) and JH (.80). McMahon and colleagues (2018) calculated the ICCLower for JH and RSImod, reporting values >.8 and >.75, respectively. Although some studies have observed "good" ICCLower values, Aben and colleagues (2020) observed "poor" values for MT and FT. Studying the reliability of interday testing in professional rugby players using force plates, the authors calculated ICCLower values for MT and FT of -.42 and .57, respectively (Aben et al., 2020).

Reviewing the reliability data for the CMJ-WAS condition, the ICCs for all variables were below .8. In the same study described previously, Heishman and colleagues (2018) observed the CMJ-WAS condition to produce good ICC scores for FT and JH (both .93) based on the current study's criteria. Shadmehr and colleagues (2016) calculated jump height using flight time collected on force plates from female participants (n = 25) who lacked training experience. An ICC of .96 was calculated for tests separated by 24 hours (Shadmehr et al., 2016). Slinde and colleagues (2008) calculated the ICC for the JH of 30 college-aged males and females (.93) measured on contact mats, supporting the good reliability of JH in terms of ICC calculations. Across the studies reviewed, analysis of the reliability of MT and RSImod for the CMJ-WAS is lacking. As with the CMJ-WOS, all jump variables produced ICCLower calculations below .60.


Of the studies mentioned above, JH was the only variable for which a 95% CI was reported. Shadmehr and colleagues (2016) supported the current study, reporting an ICCLower of .15. Slinde and colleagues (2008), by contrast, found an ICCLower of .86. It is important to note that the study by Shadmehr and colleagues (2016) used participants without training experience. Because of the coordination required by the CMJ-WAS, someone unfamiliar with the procedure may show more variation between jumps, which would explain the low ICCLower value calculated by Shadmehr and colleagues (2016). In the current study, the participants all reported at least 3 years of training experience.

When using the CV to measure the reliability of a test variable, previous studies have deemed a value of less than 10% acceptable (Atkinson & Nevill, 1998; Joseph et al., 2013; Stokes, 1985). In the current study, all variables for the CMJ-WOS had a CV greater than 10%, exceeding this threshold. The current results for MT are supported by Aben and colleagues (2020), who reported a CV of 10.5%; those authors also calculated CVs for FT and JH of 3.08% and 5.19%, respectively. In the study by McMahon and colleagues (2018), the CV was calculated for all variables, with values of less than 10% reported (J. McMahon et al., 2018). In the current study, the CMJ-WAS showed similar findings to the CMJ-WOS, with all jump variables reporting a CV greater than 10%. Only one study calculated the CV when analyzing the arm swing condition (Heishman et al., 2018). That study calculated CVs for FT (2.6%) and JH (5.1%) and found these variables to possess "good" reliability scores by the current study's standard. As with the ICC, reliability data for MT and RSImod in the CMJ-WAS appear to be lacking.


Reviewing the studies used for comparison, force plates were most commonly used to assess participants. The decision to use the My Jump 2 phone application was initially made because of the need to complete testing remotely during the COVID-19 pandemic. Following the review of the reliability literature, there was an apparent need for a means of testing collegiate athletes during breaks in the school year when they are away from campus. The My Jump 2 application was selected in the hope of providing more information for this gap in the literature. Based on the reliability analyses and criteria set for both jumps, the data suggest that the My Jump 2 phone application is not a reliable measurement tool for remote assessments. The use of the phone application as a reliable device has been supported by previous research (Balsalobre-Fernández et al., 2015; Coswig et al., 2019; Cruvinel-Cabral et al., 2018; Gallardo-Fuentes et al., 2016), but in each case testing was completed with the rater and the individual together, providing the opportunity for the rater to give instruction and immediate feedback, ensuring proper jump technique every time. When completed remotely, it is impossible to be certain that participants followed the procedures exactly as described. Participants were expected to take three minutes of rest between each jump; however, videos of jumps were uploaded separately, so there was no way of telling whether participants took the required rest between jumps or completed all six in a row. Video analysis captured the participants' feet and hips at the start and end of the jump, but once participants left the ground, the feet left the camera view. Therefore, there was no way of identifying whether participants tucked their knees or piked their hips during the jump to increase FT, or whether the hands left the hips at any point during the without-arm-swing condition. Although the phone application has been deemed reliable in person, its remote use is not supported.

When reviewing the different reliability studies, it is important to consider the number of participants involved. When calculating ICC values and 95% CIs, "poor" ICC values occur not only because of inadequate measurement repeatability but also because of a lack of variability among subjects or a small number of subjects (Koo & Li, 2016; Morrow Jr. & Jackson, 1993). Sample size also factors into the calculation of the CV. The CV is calculated by dividing the standard deviation (SD) of the sample by the mean of the sample and multiplying by 100 (Atkinson & Nevill, 1998). As the sample size increases, estimates of the SD, and consequently the CV, become more stable. Of the studies mentioned, Aben and colleagues (2020) had a sample size of 11, Choukou and colleagues (2014) a sample size of 20, Heishman and colleagues (2018) a sample size of 22, McMahon and colleagues (2018) a sample size of 25, Shadmehr and colleagues (2016) a sample size of 25, and Slinde and colleagues (2008) a sample size of 30. The only study to support the current findings of a "poor" ICCLower and CV was Aben and colleagues (2020). With a sample size similar to that of the current study, an argument could be made that the application could in fact be reliable but that this was not shown in the current study due to the limited sample. Therefore, future research with more participants should be conducted to identify the true reliability of remote use of My Jump 2.
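To illustrate this sample-size argument, the following Monte Carlo sketch (with illustrative, assumed parameters; icc_2k repeats the Chapter 3 sketch) shows how much more widely ICC(2,k) estimates spread with n = 8 than with n = 30, even when the underlying measurement process is identical.

```python
import numpy as np

def icc_2k(x: np.ndarray) -> float:
    """Two-way random effects, absolute agreement, average measures ICC."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

rng = np.random.default_rng(1)

def icc_spread(n: int, reps: int = 2000) -> np.ndarray:
    """Simulate `reps` two-day studies of n subjects with fixed, hypothetical
    jump-height-like parameters and return the 2.5th/50th/97.5th percentiles
    of the resulting ICC(2,2) estimates."""
    estimates = []
    for _ in range(reps):
        true_scores = rng.normal(0.25, 0.05, size=(n, 1))          # stable subject means (m)
        observed = true_scores + rng.normal(0, 0.02, size=(n, 2))  # day-to-day noise
        estimates.append(icc_2k(observed))
    return np.percentile(estimates, [2.5, 50, 97.5])

print(icc_spread(8))   # wide spread of ICC estimates at n = 8
print(icc_spread(30))  # noticeably tighter at n = 30
```

Under these assumed parameters the true average-measures ICC is about .93; with only eight subjects, individual simulated studies can still return estimates well below that value, which is consistent with the wide CIs observed in the current study.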

Limitations

A major limitation of the current study was COVID-19. Because the school closed after spring break for the remainder of the semester, testing had to be completed remotely. After researching different options, previous studies supported the phone application My Jump 2 as a valid and reliable tool for measuring participants (Balsalobre-Fernández et al., 2015; Coswig et al., 2019; Cruvinel-Cabral et al., 2018; Gallardo-Fuentes et al., 2016). Therefore, the My Jump 2 application was selected for testing.

It is important to note that this was the first study to use the phone application to measure jump performance remotely. Previous studies utilizing the application had the rater present during jump testing to ensure correct set-up and jump technique (Balsalobre-Fernández et al., 2015; Coswig et al., 2019; Cruvinel-Cabral et al., 2018; Gallardo-Fuentes et al., 2016). In the current study, a video explanation of the camera set-up and jump technique was provided to the participants. Therefore, it was unclear whether participants positioned the camera at the appropriate distance from their feet and completed the jumps with the exact technique described. Participants were also asked to wait 3 minutes between each jump and to allow 3 – 7 days of rest between test sessions. Because each jump video was received individually, it is unclear whether the appropriate amount of time was taken between jumps and sessions.

Another potential limitation is the sample size of the current study. As described previously, reliability is affected not only by the repeatability of the measurements but also by the sample size (Koo & Li, 2016; Morrow Jr. & Jackson, 1993). Given the nature of the situation, it was difficult to recruit a large sample. During the recruitment process, 12 athletes showed interest, but only eight provided videos to be analyzed. Koo and Li (2016) suggest recruiting at least 30 participants when completing a reliability study. Because of the small sample, it cannot be determined whether the reliability of the phone application used remotely is truly poor or whether an insufficient number of subjects was analyzed.

Practical Implications

During breaks throughout the school year (i.e., summer and winter), athletes can be away from their coaches for extended periods. Strength coaches are unable to monitor and assess athletes' progress while they are away, making it impossible to modify or update workouts throughout the break or to accurately predict the physical shape their athletes will be in when returning to campus. Although the current study's results do not support the use of My Jump 2, the phone application offers the ability to assess an athlete's jump performance and power capability. Given the importance of power in sport, power performance must be maximized. If strength coaches have the opportunity to measure power in their athletes during time away, they can modify and change athletes' workouts to better fit their training needs, as they would during the school year. Currently, the application is not supported for use as a standalone measurement tool, but it may assist alongside other measuring devices. Future research is required to identify whether the application is a reliable tool for remote use with athletes.

Summary

In the current study, FT was the only variable that exhibited differences between the CMJ-WAS and CMJ-WOS conditions; all other jump variables showed no difference. With regard to intersession reliability, MT, FT, and JH all showed weak reliability index values for the CMJ-WOS. MT, FT, and JH for the CMJ-WAS, as well as RSImod for both jump conditions, produced poor reliability indices. Based on these results, the My Jump 2 phone application was not a reliable tool for assessing athletes remotely in the current study. This was likely due in part to the small sample size (n = 8), and future studies are needed to establish the reliability of the test when used remotely.

Chapter 6

SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS

Summary

The present study examined the reliability of variables measured under remote administration of the CMJ-WOS and CMJ-WAS in collegiate athletes. The CMJ with and without the use of an arm swing has been shown to be a reliable test across a wide range of ages and training levels when using force plates, contact mats, and, recently, phone applications with the tester present. The current study is the first to complete jump testing with the My Jump 2 application remotely. Given the academic calendar of collegiate athletics and the extended breaks given to student-athletes, it seemed logical to assess the reliability of the My Jump 2 phone application when used remotely.

Eight healthy, resistance-trained college athletes volunteered to participate. Across two testing sessions, participants completed six CMJs: three CMJ-WOS and three CMJ-WAS. MT, FT, JH, and RSImod were measured using the My Jump 2 phone application. A two-by-two repeated measures analysis of variance was used to assess interaction and main effects for the jump conditions (CMJ-WOS and CMJ-WAS) across the two days. ICCs with 95% CIs and CVs were used to examine the reliability of all four variables (MT, FT, JH, RSImod) in both jump conditions. Results showed FT to be the only variable that was significantly higher when participants utilized an arm swing. Comparing intersession reliability, the CMJ-WOS showed a weak reliability index for all variables excluding RSImod, which showed poor reliability. For the CMJ-WAS, all four variables showed a poor reliability index. It appears that the My Jump 2 application may not be a reliable tool for measuring CMJ variables in athletes remotely. Future research should expand the current sample size to more appropriately assess the reliability of the phone application.

Conclusions

The results of this study yielded the following conclusions:

1. A weak reliability index was reported for MT, FT, and JH when completing the CMJ-WOS condition.

2. A poor reliability index was reported for MT, FT, and JH when completing the CMJ-WAS condition, as well as for RSImod in both jump conditions.

Recommendations

The following recommendations for further research were made after this study:

1. Testing should be expanded to include at least 30 participants to assess the reliability of the My Jump 2 application.

2. Individual sports should be examined to identify the applicability of the application to different sports teams, as well as whether the application is better suited to certain sports than others.

REFERENCES

Aben, H. G. J., Hills, S. P., Higgins, D., Cooke, C. B., Davis, D., Jones, B., & Russell, M.

(2020). The reliability of neuromuscular and perceptual measures used to profile

recovery, and the time-course of such responses following academy

match play. Sports, 8(73), 1–19. https://doi.org/10.3390/sports8050073

Alemdaroğlu, U. (2012). The relationship between muscle strength, anaerobic

performance, agility, sprint ability and vertical jump performance in professional

basketball players. Journal of Human Kinetics, 31, 99–106.

https://doi.org/10.2478/v10078-012-0016-6

Andersson, H., Raastad, T., Nilsson, J., Paulsen, G., Garthe, I., & Kadi, F. (2008).

Neuromuscular fatigue and recovery in elite female soccer: Effects of active

recovery. Medicine and Science in Sports and Exercise, 40(2), 372–380.

https://doi.org/10.1249/mss.0b013e31815b8497

Aragón-Vargas, L. F. (2000). Evaluation of four vertical jump tests: Methodology,

reliability, validity, and accuracy. Measurement in Physical Education and Exercise

Science, 4(4), 215–228. https://doi.org/10.1207/s15327841mpee0404_2

Arteaga, R., Dorado, C., Chavarren, J., & Calbet, J. A. L. (2000). Reliability of jumping

performance in active men and women under different stretch loading conditions.

Journal of Sports Medicine and Physical Fitness, 40(1), 26–33.

Atkinson, G., & Nevill, A. M. (1998). Statistical methods for assessing measurement

error (reliability) in variables relevant to sports medicine. Sports Medicine,

26(4), 217–238. https://doi.org/10.2165/00007256-199826040-00002

Baechle, T. R., Earle, R. W., & Wathen, D. (2008). Step 7: Rest periods. In T. R. Baechle


& R. W. Earle (Eds.), Essentials of strength training and conditioning (3rd ed., p.

422). Human Kinetics.

Balsalobre-Fernández, C., Glaister, M., & Lockey, R. A. (2015). The validity and

reliability of an iPhone app for measuring vertical jump performance. Journal of

Sports Sciences, 33(15), 1574–1579. https://doi.org/10.1080/02640414.2014.996184

Bradshaw, E. J., Maulder, P. S., & Keogh, J. W. L. (2007). Biological movement

variability during the sprint start: Performance enhancement or hindrance? Sports

Biomechanics, 6(3), 246–260. https://doi.org/10.1080/14763140701489660

Berendsen, H. J. C. (2011). A student's guide to data and error analysis. Cambridge

University Press. https://ebookcentral.proquest.com/lib/ithaca-ebooks/detail.action?docID=691865

Brown, L. E., & Weir, J. P. (2001). Accurate assessment of muscular strength & power,

ASEP procedures recommendation. Journal of Exercise Physiology, 4(3), 1–21.

https://doi.org/10.1177/1090198110382503

Buckthorpe, M., Morris, J., & Folland, J. P. (2012). Validity of vertical jump

measurement devices. Journal of Sports Sciences, 30(1), 63–69.

https://doi.org/10.1080/02640414.2011.624539

Bui, H. T., Farinas, M. I., Fortin, A. M., Comtois, A. S., & Leone, M. (2015).

Comparison and analysis of three different methods to evaluate vertical jump height.

Clinical Physiology and Functional Imaging, 35(3), 203–209.

https://doi.org/10.1111/cpf.12148

Castagna, C., Impellizzeri, F. M., Chamari, K., Carlomagno, D., & Rampinini, E. (2006).

Aerobic fitness and yo-yo continuous and intermittent tests performances in soccer


players: A correlation study. Journal of Strength and Conditioning Research, 20(2),

320–325. https://doi.org/10.1519/R-18065.1

Castagna, C., Krustrup, P., D’Ottavio, S., Pollastro, C., Bernardini, A., & Araújo Póvoas,

S. C. (2018). Ecological validity and reliability of an age-adapted endurance field

test in young male soccer players. Journal of Strength and Conditioning Research,

1–17. https://doi.org/10.1519/jsc.0000000000002255

Castaingts, V., Martin, A., Hoecke, J. Van, & Pérot, C. (2004). Neuromuscular efficiency

of the triceps surae in induced and voluntary contractions: morning and evening

evaluation. Chronobiology International, 21, 631–643.

Choukou, M. A., Laffaye, G., & Taiar, R. (2014). Reliability and validity of an

accelerometric system for assessing vertical jumping performance. Biology of Sport, 31(1),

55–62. https://doi.org/10.5604/20831862.1086733

Comfort, P., Stewart, A., Bloom, L., & Clarkson, B. (2013). Relationships between

strength, sprint, and jump performance in well-trained youth soccer players. Journal

of Strength and Conditioning Research, 28(1), 173–177.

Cormack, S. J., Newton, R. U., & McGuigan, M. R. (2008). Neuromuscular and

endocrine responses of elite players to an Australian rules football match.

International Journal of Sports Physiology and Performance, 3, 359–374.

https://doi.org/10.1123/ijspp.3.3.359

Cormack, S. J., Newton, R. U., McGuigan, M. R., & Doyle, T. L. A. (2008). Reliability

of measures obtained during single and repeated countermovement jumps.

International Journal of Sports Physiology and Performance, 3(2), 131–144.

https://doi.org/10.1123/ijspp.3.2.131


Coswig, V., Silva, A. D. A. C. E., Barbalho, M., de Faria, F. R., Nogueira, C. D., Borges,

M., Buratti, J. R., Vieira, I. B., Román, F. J. L., & Gorla, J. I. (2019). Assessing the

validity of the MyJUMP2 app for measuring different jumps in professional cerebral

palsy football players: An experimental study. JMIR MHealth and UHealth, 7(1).

https://doi.org/10.2196/11099

Cronin, J. B., & Hansen, K. T. (2005). Strength and Power predictors of sports speed.

Journal of Strength and Conditioning Research, 19(2), 349–357.

Cruvinel-Cabral, R. M., Oliveira-Silva, I., Medeiros, A. R., Claudino, J. G., Jiménez-

Reyes, P., & Boullosa, D. A. (2018). The validity and reliability of the “my Jump

App” for measuring jump height of the elderly. PeerJ, 1–14.

https://doi.org/10.7717/peerj.5804

Currell, K., & Jeukendrup, A. (2008). Validity, reliability and sensitivity of measures of

sporting performance. Sports Medicine, 38(4), 297–316.

https://doi.org/10.2165/00007256-200838040-00003

Doré, E., Martin, R., Ratel, S., Duché, P., Bedu, M., & Van Praagh, E. (2005). Gender

differences in peak muscle performance during growth. International Journal of

Sports Medicine, 26(4), 274–280. https://doi.org/10.1055/s-2004-821001

Ebben, W. P., & Petushek, E. J. (2010). Using the reactive strength index modified to

evaluate plyometric performance. Journal of Strength & Conditioning Research,

24(8), 1983–1987.

Ehlert, A. M., Cone, J. R., Wideman, L., & Goldfarb, A. H. (2019). Evaluation of a

goalkeeper-specific adaptation to the Yo-Yo intermittent recovery test level 1:

Reliability and variability. Journal of Strength and Conditioning Research, 33(3),


819–824. https://doi.org/10.1519/JSC.0000000000002869

Floria, P., Gómez-Landero, L. A., & Harrison, A. J. (2014). Variability in the application

of force during the vertical jump in children and adults. Journal of Applied

Biomechanics, 30(6), 679–684. https://doi.org/10.1123/jab.2014-0043

Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2015). How to design and evaluate

research in education (Fourth). McGraw-Hill.

Gallardo-Fuentes, F., Gallardo-Fuentes, J., Ramírez-Campillo, R., Balsalobre-Fernández,

C., Martínez, C., Caniuqueo, A., Cañas, R., Banzer, W., Loturco, I., Nakamura, F.

Y., & Izquierdo, M. (2016). Intersession and intrasession reliability and validity of

the my jump app for measuring different jump actions in trained male and female

athletes. Journal of Strength and Conditioning Research, 30(7), 2049–2056.

https://doi.org/10.1519/JSC.0000000000001304

Guette, M., Gondin, J., & Martin, A. (2005). Time-of-day effect on the torque and

neuromuscular properties of dominant and non-dominant quadriceps femoris.

Chronobiology International, 22(3), 541–558. https://doi.org/10.1081/CBI-

200062407

Haff, G. G., & Nimphius, S. (2012). Training principles for power. Strength and

Conditioning Journal, 34(6), 2–12. https://doi.org/10.1519/SSC.0b013e31826db467

Harman, E. (2008). Principles of test selection and administration. In T. R. Baechle & R.

W. Earle (Eds.), Essentials of Strength Training and Conditioning (Third, pp. 238–

246). Human Kinetics.

Harman, E., Rosenstein, M., Frykman, P., & Rosenstein, R. (1990). The effects of arms

and countermovement on vertical jumping. Medicine & Science in Sports &


Exercise, 22(6), 825–833.

Harrison, A. J., & Gaffney, S. (2001). Motor development and gender effects on stretch-

shortening cycle performance. Journal of Science and Medicine in Sport, 4(4), 406–

415. https://doi.org/10.1016/S1440-2440(01)80050-5

Heishman, A., Brown, B., Daub, B., Miller, R., Freitas, E., & Bemben, M. (2019). The

influence of countermovement jump protocol on reactive strength index modified

and flight time: Contraction time in collegiate basketball players. Sports, 7(2), 37.

https://doi.org/10.3390/sports7020037

Heishman, A. D., Daub, B. D., Miller, R. M., Freitas, E. D. S., Frantz, B. A., & Bemben,

M. G. (2018). Countermovement Jump Reliability Performed With and Without an

Arm Swing in NCAA Division 1 Intercollegiate Basketball Players. Journal of

Strength and Conditioning Research, 34(2), 546–558.

https://doi.org/10.1519/JSC.0000000000002812

Hopkins, W. G. (2000). Measures of reliability in sports medicine and science. Sports

Med, 30(1), 1–15. https://doi.org/10.2165/00007256-200030010-00001

Hopkins, W. G., Hawley, J. A., & Burke, L. M. (1999). Design and analysis of research

on sport performance enhancement. Medicine and Science in Sports and Exercise,

31(3), 472–485.

Hopkins, W. G., Schabort, E. J., & Hawley, J. A. (2001). Reliability of power in physical

performance tests. Sports Medicine, 31(3), 211–234. https://doi.org/10.2165/00007256-200131030-

00005

Joseph, C. W., Bradshaw, E. J., Kemp, J., & Clark, R. A. (2013). The interday reliability

of ankle, knee, leg, and vertical musculoskeletal stiffness during hopping and


overground running. Journal of Applied Biomechanics, 29, 386–394.

Kenny, I. C., Cairealláin, A. Ó., & Comyns, T. M. (2012). Validation of an electronic

jump mat to assess stretch-shortening cycle function. Journal of Strength &

Conditioning Research, 26(6), 1601–1608.

Kipp, K., Kiely, M. T., & Geiser, C. F. (2016). Reactive strength index modified is a

valid measure of explosiveness in collegiate female volleyball players. Journal of

Strength and Conditioning Research, 30(5), 1341–1347.

https://doi.org/10.1519/JSC.0000000000001226

Klavora, P. (2000). Vertical-jump tests: A critical review. Strength and Conditioning

Journal, 22(5), 70–75.

Knudson, D. V. (2009). Correcting the use of the term “power” in the strength and

conditioning literature. Journal of Strength and Conditioning Research, 23(6),

1902–1908. https://doi.org/10.1519/JSC.0b013e3181b7f5e5

Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass

correlation coefficients for reliability research. Journal of Chiropractic Medicine,

15, 155–163. https://doi.org/10.1016/j.jcm.2016.02.012

Laffaye, G., Wagner, P. P., & Tombleson, T. I. L. (2014). Countermovement jump

height: Gender and sport-specific differences in the force-time variables. Journal of

Strength and Conditioning Research, 28(4), 1096–1105.

https://doi.org/10.1519/JSC.0b013e3182a1db03

Lees, A., Vanrenterghem, J., & Clercq, D. De. (2004). Understanding how an arm swing

enhances performance in the vertical jump. Journal of Biomechanics, 37(12), 1929–

1940. https://doi.org/10.1016/j.jbiomech.2004.02.021


Lockie, R. G., Jeffriess, M. D., Schultz, A. B., & Callaghan, S. J. (2012). Relationship

between absolute and relative power with linear and change-of-direction speed in

junior players from . Journal of Australian Strength &

Conditioning, 20(4), 4–12.

López-Segovia, M., Marques, M. C., Van Den Tillaar, R., & González-Badillo, J. J.

(2011). Relationships between vertical jump and full squat power outputs with sprint

times in U21 soccer players. Journal of Human Kinetics, 30(1), 135–144.

https://doi.org/10.2478/v10078-011-0081-2

Markovic, G., Dizdar, D., Jukic, I., & Cardinale, M. (2004). Reliability and factorial

validity of squat and countermovement jump tests. Journal of Strength and Conditioning Research, 18(3), 551–555.

Martin, A., Carpentier, A., Guissard, N., Van Hoecke, J., & Duchateau, J. (1999). Effect

of time of day on force variation in a human muscle. Muscle and Nerve, 22(10),

1380–1387. https://doi.org/10.1002/(SICI)1097-4598(199910)22:10<1380::AID-

MUS7>3.0.CO;2-U

Martínez-Lagunas, V., & Hartmann, U. (2014). Validity of the Yo-Yo intermittent

recovery test level 1 for direct measurement or indirect estimation of maximal

oxygen uptake in female soccer players. International Journal of Sports Physiology

and Performance, 9(5), 825–831. https://doi.org/10.1123/ijspp.2013-0313

McGuigan, M. (2016a). Administration, scoring, and interpretation of selected tests. In G.

Gregory Haff & N. T. Triplett (Eds.), Essentials of Strength Training and

Conditioning (4th ed., pp. 259–293). Human Kinetics.

McGuigan, M. (2016b). Principles of test selection and administration. In Gregory G.

Haff & T. Triplett (Eds.), Essentials of strength training and conditioning (4th ed.,


pp. 250–252). Human Kinetics.

McLellan, C., Lovell, D. I., & Gass, G. C. (2011). The role of rate of force development

on vertical jump performance. Journal of Strength and Conditioning Research, 25(2), 379–385.

McMahon, J. J., Suchomel, T. J., Lake, J. P., & Comfort, P. (2018). Understanding the

key phases of the countermovement jump force-time curve. Strength and

Conditioning Journal, 40(4), 96–106.

https://doi.org/10.1519/SSC.0000000000000375

McMahon, J., Lake, J., & Comfort, P. (2018). Reliability of and relationship between

flight time to contraction time ratio and reactive strength index modified. Sports,

6(3), 81. https://doi.org/10.3390/sports6030081

Merriam-Webster.com. (2019). Test. https://www.merriam-webster.com/dictionary/test

Moir, G., Button, C., Glaister, M., & Stone, M. H. (2004). Influence of familiarization on

the reliability of vertical jump and acceleration sprinting performance in physically

active men. Journal of Strength and Conditioning Research, 18(2), 276–280.

Moir, G., Sanders, R., Button, C., & Glaister, M. (2005). The influence of familiarization

on the reliability of force variables measured during unloaded and loaded vertical

jumps. Journal of Strength and Conditioning Research, 19(1), 140–145.

https://doi.org/10.1519/14803.1

Moir, G., Shastri, P., & Connaboy, C. (2008). Intersession reliability of vertical jump

height in women and men. Journal of Strength and Conditioning Research, 22(6),

1779–1784.

Morrow, J. R., Jackson, A. W., Disch, J. G., & Mood, D. P. (2011). Measurement and

evaluation in human performance (L. D. Robertson (ed.); Fourth). Human Kinetics.


Morrow Jr., J. A., & Jackson, A. W. (1993). How “significant” is your reliability?

Research Quarterly for Exercise and Sport, 64(3), 352–355.

Mosier, E. M., Fry, A. C., & Lane, M. T. (2019). Kinetic contributions of the upper limbs

during counter-movement vertical jumps with and without arm swing. Journal of

Strength and Conditioning Research, 33(8), 2066–2073.

https://doi.org/10.1519/JSC.0000000000002275

Newton, R. U., & Kraemer, W. J. (1994). Developing explosive muscular power:

Implications for a mixed methods training strategy. Strength and Conditioning

Journal, 16, 20–31.

Nibali, M. L., Tombleson, T., Brady, P. H., & Wagner, P. (2015). Influence of

familiarization and competitive level on the reliability of countermovement vertical

jump kinetic and kinematic variables. Journal of Strength and Conditioning Research, 29(10), 2827–2835.

Nicolas, A., Gauthier, A., Bessot, N., Moussay, S., & Davenne, D. (2005). Time-of-day

effects on myoelectric and mechanical properties of muscle during maximal and

prolonged isokinetic exercise. Chronobiology International, 22(6), 997–1011.

https://doi.org/10.1080/07420520500397892

Nuzzo, J. L., Anning, J. H., & Scharfenberg, J. M. (2011). The reliability of three devices

used for measuring vertical jump height. Journal of Strength and Conditioning Research, 25(9), 2580–2590.

Peng, H.-T., Song, C.-Y., Chen, Z.-R., Wang, I.-L., Gu, C.-Y., & Wang, L.-I. (2019).

Differences Between Bimodal and Unimodal Force-time Curves During

Countermovement Jump. International Journal of Sports Medicine, 40(10), 663–

669. https://doi.org/10.1055/a-0970-9104

Peterson, B. J., Fitzgerald, J. S., Dietz, C. C., Ziegler, K. S., Ingraham, S. J., Baker, S. E.,


& Snyder, E. M. (2015). Division I hockey players generate more power than

Division III players during on- and off-ice performance tests. Journal of Strength

and Conditioning Research, 29(5), 1191–1196.

https://doi.org/10.1519/JSC.0000000000000754

Raffalt, P. C., Alkjær, T., & Simonsen, E. B. (2017). Intra-subject variability in muscle

activity and co-contraction during jumps and landings in children and adults.

Scandinavian Journal of Medicine and Science in Sports, 27(8), 820–831.

https://doi.org/10.1111/sms.12686

Reiman, M. P., & Manske, R. C. (2009). Functional testing in human performance (L. D.

Roberston, M. Schwarzentraub, & N. Gleeson (eds.)). Human Kinetics.

Ribeiro, A. S., Do Nascimento, M. A., Salvador, E. P., Gurjão, A. L. D., Avelar, A., Ritti-

Dias, R. M., Mayhew, J. L., & Cyrino, E. S. (2014). Reliability of one-repetition

maximum test in untrained young adult men and women. Isokinetics and Exercise

Science, 22(3), 175–182. https://doi.org/10.3233/IES-140534

Richardson, S. O., Andersen, M. B., & Morris, T. (2008). Overtraining athletes: Personal

journeys in sport. Human Kinetics.

Risso, F. G., Jalilvand, F., Orjalo, A. J., Moreno, M. R., Davis, D. L., Birmingham-

Babauta, S. A., Stokes, J. J., Stage, A. A., Liu, T. M., Giuliano, D. V., Lazar, A., &

Lockie, R. G. (2017). Physiological Characteristics of Projected Starters and Non-

Starters in the Field Positions from a Division I Women’s Soccer Team.

International Journal of Exercise Science, 10(4), 568–579.

Rodríguez-Rosell, D., Mora-Custodio, R., Franco-Márquez, F., Yáñez-García, J. M., &

González-Badillo, J. J. (2016). Traditional vs. sport-specific vertical jump tests:


Reliability, validity, and relationship with the legs strength and sprint performance

in adult and teen soccer and basketball players. Journal of Strength and

Conditioning Research, 31(1), 196–206.

Ronglan, L. T., Raastad, T., & Børgesen, A. (2006). Neuromuscular fatigue and recovery

in elite female handball players. Scandinavian Journal of Medicine and Science in

Sports, 16(4), 267–273. https://doi.org/10.1111/j.1600-0838.2005.00474.x

Rountree, S. (2011). The athlete’s guide to recovery rest relax and restore for peak

performance. Velopress.

Salaj, S., & Markovic, G. (2011). Specificity of jumping, sprinting, and quick change-of-

direction motor abilities. Journal of Strength & Conditioning Research, 25(5),

1249–1255.

Sattler, T., Sekulic, D., Hadzic, V., Uljevis, O., & Dervisevic, E. (2012). Vertical

jumping tests in volleyball: Reliability, validity, and playing position specifics. The

Journal of Strength and Conditioning Research, 26(6), 1532–1538.

Sedliak, M., Finni, T., Cheng, S., Haikarainen, T., & Häkkinen, K. (2008). Diurnal

variation in maximal and submaximal strength, power and neural activation of leg

extensors in men: Multiple sampling across two consecutive days. International

Journal of Sports Medicine, 29(3), 217–224. https://doi.org/10.1055/s-2007-965125

Shadmehr, A., Hejazi, S. M., Olyaei, G., & Talebian, S. (2016). Effect of

countermovement and arm swing on vertical stiffness and jump performance.

Journal of Contemporary Medical Sciences, 2(5), 25–27.

Shalfawi, S. A. I., Sabbah, A., Kailani, G., Tønnessen, E., & Enoksen, E. (2011). The

relationship between running speed and measures of vertical jump in professional


basketball players: A field-test approach. Journal of Strength and Conditioning

Research, 25(11), 3088–3092. https://doi.org/10.1519/JSC.0b013e318212db0e

Sheppard, J. M., & Gabbett, T. J. (2016). Performance diagnostics. In I. Jeffreys & J.

Moody (Eds.), Strength and Conditioning for Sports Performance (pp. 204–205).

Routledge.

Slinde, F., Suber, C., Suber, L., Edwen, C. E., & Svantesson, U. (2008). Test-retest

reliability of three different countermovement jumping tests. Journal of Strength

and Conditioning Research, 22(2), 640–644.

Smirniotou, A., Katsikas, C., Paradisis, G., Argeitaki, P., Zacharogiannis, E., & Tziortzis,

S. (2008). Strength-power parameters as predictors of sprinting performance.

Journal of Sports Medicine and Physical Fitness, 48(4), 447–454.

https://doi.org/10.1016/S1470-2045(12)70211-5

Stokes, M. (1985). Reliability and repeatability of methods for measuring muscle in

physiotherapy. Physiotherapy Theory and Practice, 1(2), 71–76.

https://doi.org/10.3109/09593988509163853

Strojnik, V., & Komi, P. V. (2000). Fatigue after submaximal intensive stretch-shortening

cycle exercise. Medicine and Science in Sports and Exercise, 32(7), 1314–1319.

https://doi.org/10.1097/00005768-200007000-00020

Talpey, S. W., Axtell, R., Gardner, E., & James, L. (2019). Changes in Lower Body

Muscular Performance Following a Season of NCAA Division I Men’s Lacrosse.

Sports, 7(18), 12. https://doi.org/10.3390/sports7010018

Taylor, K. L., Cronin, J., Gill, N. D., Chapman, D. W., & Sheppard, J. (2010). Sources of

variability in Iso-Inertial jump assessments. International Journal of Sports

68

Physiology and Performance, 5(4), 546–558.

Tse, M. A., McManus, A. M., & Masters, R. S. W. (2005). Development and validation

of a core endurance intervention program: Implications for performance in college-

age rowers. Journal of Strength & Conditioning Research, 19(3), 547–552.

https://doi.org/10.1016/j.jmpt.2010.09.003

Vaverka, F., Jandačka, D., Zahradník, D., Uchytil, J., Farana, R., Supej, M., & Vodičar, J.

(2016). Effect of an arm swing on countermovement vertical jump performance in

elite volleyball players: Final. Journal of Human Kinetics, 53(1), 41–50.

https://doi.org/10.1515/hukin-2016-0009

Vescovi, J. D., & McGuigan, M. R. (2008). Relationships between sprinting, agility, and

jump ability in female athletes. Journal of Sports Sciences, 26(1), 97–107.

https://doi.org/10.1080/02640410701348644

Vincent, L. M., Blissmer, B. J., & Hatfield, D. L. (2018). National scouting combine

scores as performance predictors in the National Football League. The Journal of

Strength and Conditioning Research, 33(1), 104–111.

Vincent, W. J., & Weir, J. P. (2012). Statistics in kinesiology (Fourth). Human Kinetics.

Weir, J. p. (2005). Quantifying test-retest reliability using the intraclass correlation

coefficient and the SEM. Journal of Strength & Conditioning Research, 19(1), 231–

240. https://doi.org/10.1007/978-3-642-27872-3_5

Williams, C. A., & Ratel, S. (2009). Definitions of muscle fatigue. In Human Muscle

Fatigue (pp. 3–16). Taylor & Francis.

Ziv, G., & Lidor, R. (2010). Vertical jump in female and male basketball players: A

review of observational and experimental studies. Journal of Science and Medicine

69 in Sport, 13, 332–339. https://doi.org/10.1111/j.1600-0838.2009.01083.x

APPENDICES

APPENDIX A

INFORMED CONSENT FORM

Title of the Study: Measurement of Reliability in the Countermovement Jump With and Without the Use of an Arm Swing in Male and Female Collegiate Soccer Players

Principal Investigator: Dakota Brovero, CSCS, Graduate Student

Faculty Advisors: Dave Diggin, Ph.D., Assistant Professor of Exercise Science and Athletic Training; Sebastian Harenberg, Ph.D., Assistant Professor of Exercise Science and Athletic Training

Invitation to Participate in a Research Study

You are invited to participate in a research study. In order to participate, you must be a member of Ithaca College’s men’s or women’s soccer team and between 18 and 25 years of age. Taking part in this research study is voluntary; you are not required to participate, and you may stop or withdraw your participation from this study at any time.

Important Information about this Research Study

Purpose of the study: This study aims to examine the consistency of countermovement jump performance when the jump is performed with and without the use of an arm swing. You will be required to complete six repetitions of a vertical jump (3 with an arm swing and 3 without an arm swing) on two different days. Procedures will take approximately 25–30 minutes on each day of testing, for a total time commitment of 50–60 minutes. All procedures will be performed at your home and will require you to video yourself using your cell phone.

Risks and discomforts associated with this research: There is minimal physical risk associated with this study.

Direct benefits to the participants: There are no direct benefits for participation in this study.

Please read this entire form and ask questions before deciding whether you would like to participate in this research study.


Purpose of the Study

The countermovement vertical jump is considered useful for identifying athletes’ power capabilities. Test results are also sensitive to fatigue and can be used to clarify athletes’ readiness to perform. However, the level of variation expected in the performance of this test in soccer players is unknown. This study aims to examine the consistency of countermovement jump performance when the jump is performed with and without an arm swing.

1. Benefits of the Study

• Participant benefits: There are no direct benefits for participating in the current study. As a participant you will receive feedback on your current jump performance and establish your expected jump variation when completing the countermovement jump with and without an arm swing.

• Researcher benefits: This study is intended to serve as a Master’s thesis project for the principal investigator to complete his degree requirements. It will also help the research team build research experience.

• Scientific community benefits: The results of the study are expected to highlight the need for coaches to establish the expected variation for every team they work with. Following completion, data may be presented to scientific and coaching communities in the form of journal articles and/or conference presentations.

2. What You Will Be Asked to Do

In agreeing to participate, you will be asked to video yourself completing six countermovement jumps (3 with an arm swing and 3 without an arm swing) on two separate days spaced 3–7 days apart. During each session you will first complete the GPP dynamic warm-up in the at-home packet provided by Ithaca College Strength and Conditioning. You will then complete the six countermovement jumps in a randomized order; the jump order for both test sessions will be provided to you upon return of the informed consent form. You will be asked to allow three minutes of recovery between jumps.

Test day                  Duration (mins)
Day 1                     25–30
Day 2                     25–30
Total participation time  50–60

3. Exclusion Criteria

Participants will be excluded from participation if they have experienced an injury that required them to miss three consecutive days of training in the past two months (see the health questionnaire following this informed consent).

4. Withdrawal from the Study

Participation in this study is voluntary. You may withdraw from the study at any point without explanation and without prejudice towards you.

5. Risks


There is minimal physical risk during the testing process.

6. How the Data Will Be Maintained in Confidence

All information will be treated with the strictest confidence and will not be disclosed to any party other than the investigator, supervisors, or yourself (if desired). Your videos will remain completely anonymous at all times. You will be asked to upload your videos to a password-protected platform, in a folder made available only to yourself and the research team. Videos will be viewed by the research team solely for jump analysis. Folders will be coded by participant number. Only members of the research team will have access to the names associated with the code, and the code will be kept separate from data files in a secure location (faculty advisor’s office: CHS 321). Data files will be kept for at least 3 years.

7. Compensation for Injury

If you suffer an injury that requires treatment or hospitalization as a direct result of this study, the cost of such care will be charged to you. If you have insurance, you may bill your insurance company; you will be responsible for paying any costs not covered by your insurance. Ithaca College will not pay for any care, lost wages, or other financial compensation.

8. If You Would Like More Information about the Study

If you would like more information about the study, feel free to contact the principal investigator, Dakota Brovero. Email: [email protected]; Phone: 609.970.2933

Ithaca College IRB
Peggy Ryan Williams Center
953 Danby Road
Ithaca, NY 14850
[email protected]
(607) 274-3113

I have read the above and I understand its contents. I agree to participate in the study. I acknowledge that I am 18 years of age or older.

Print or Type Name: ______________________________

Signature: ______________________________ Date: ______________

APPENDIX B

Health Screening Questionnaire

Please fill out this questionnaire to the best of your ability. You may choose not to answer any question, though leaving a question blank may mean that you are not able to participate in the study. All recorded information will remain completely confidential.

Your cooperation is greatly appreciated.

Participant’s Name: …………………………………………. Date of Birth: ………………

Resistance Training Experience (years): …………………… Age: ………………………

Persons to contact in case of emergency:

Name: ………………………………………………………….

Phone Number: ………………..

Have you had to consult your doctor within the last six weeks? Yes □ No □

If ‘yes’ please give details:

………………………………………………………………………………………………

………………………………………………………………………………………………

Do you have, or have you ever had, any of the following:

□ Diabetes □ Asthma □ Bronchitis □ Heart complaints

If so, please give details:


………………………………………………………………………………………………

………………………………………………………………………………………………

Injury History:

Have you experienced an injury within the last two months that has limited or resulted in the termination of your normal exercise activities? Yes □ No □

If ‘yes’ please provide details of:

- Type of injury: ……………………………………………………………………………

- When it occurred: ………………………………………………………………………

Could these injuries prevent / limit your performance in the forthcoming exercise testing?

Yes □ No □

If you have answered NO to all questions, you can be reasonably sure that you can take part in the physical activity requirements of the testing procedures.

I ……………………………………………….. declare that the above information is correct at the time of completing this questionnaire.

Participant’s signature: …………………………………. Date: ……../……../………

Investigator’s signature ……………………………….. Date ……../……../………

Please Note: If your health changes so that you can then answer YES to any of the above questions, please inform the experimenter / laboratory supervisor.

This study has been approved by the Institutional Review Board (IRB) of Ithaca College. If you have any concerns about this study and wish to contact someone independent, you may contact the IRB at:

Tel: (607) 274 3113 Email: [email protected]

APPENDIX C

Raw Data Key

Abbreviation   Definition

MT             Movement time of the jump, in seconds

FT             Flight time of the jump, in seconds

JH             Jump height calculated from the subject’s flight time, in meters

RSI            Reactive strength index calculated from the subject’s flight time and movement time

WOS            Countermovement jump without an arm swing

WAS            Countermovement jump with an arm swing
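
For clarity, the derived variables above follow directly from the two measured times: jump height is computed from flight time using the standard flight-time method (h = g·t²/8), and RSI is the ratio of flight time to movement time. The short Python sketch below illustrates these calculations; the function names are illustrative and are not taken from the My Jump 2 app.

    # Sketch of how JH and RSI are derived from the measured times,
    # assuming the standard flight-time method; helper names are illustrative.

    G = 9.81  # gravitational acceleration, m/s^2

    def jump_height(flight_time):
        """Jump height (m) from flight time (s): h = G * t**2 / 8."""
        return G * flight_time ** 2 / 8

    def rsi(flight_time, movement_time):
        """Reactive strength index: flight time divided by movement time."""
        return flight_time / movement_time

    # Subject 1, WOS jump 1, Day 1: MT = 1.39 s, FT = 0.40 s
    print(round(jump_height(0.40), 2), round(rsi(0.40, 1.39), 2))  # -> 0.2 0.29

For example, Subject 1’s first WOS jump on Day 1 (MT = 1.39 s, FT = 0.40 s) yields JH ≈ 0.20 m and RSI ≈ 0.29, matching the values tabulated in Appendix D.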


APPENDIX D

Raw Data

Day 1

Subject   Jump Type   Jump   MT (s)   FT (s)   JH (m)   RSI

1 WOS 1 1.39 0.40 0.20 0.29

1 WOS 2 1.28 0.43 0.22 0.33

1 WOS 3 1.10 0.40 0.20 0.37

1 WAS 1 1.17 0.40 0.20 0.35

1 WAS 2 1.15 0.45 0.25 0.39

1 WAS 3 1.12 0.40 0.20 0.36

3 WOS 1 1.15 0.43 0.22 0.37

3 WOS 2 0.81 0.45 0.25 0.56

3 WOS 3 0.79 0.47 0.27 0.57

3 WAS 1 0.83 0.47 0.27 0.57

3 WAS 2 0.90 0.59 0.30 0.55

3 WAS 3 0.81 0.49 0.30 0.61

4 WOS 1 0.90 0.54 0.36 0.60

4 WOS 2 0.97 0.47 0.27 0.49

4 WOS 3 0.90 0.54 0.35 0.60

4 WOS 4 0.85 0.49 0.30 0.58

4 WAS 1 0.85 0.54 0.36 0.63

4 WAS 2 0.88 0.54 0.36 0.62


6 WOS 1 0.67 0.47 0.27 0.70

6 WOS 2 0.67 0.43 0.22 0.63

6 WOS 3 0.74 0.36 0.16 0.48

6 WAS 1 1.00 0.43 0.22 0.40

6 WAS 2 1.07 0.43 0.22 0.40

6 WAS 3 1.13 0.41 0.20 0.36

7 WOS 1 0.98 0.36 0.16 0.37

7 WOS 2 0.92 0.38 0.18 0.42

7 WOS 3 1.02 0.41 0.20 0.40

7 WAS 1 1.00 0.43 0.22 0.42

7 WAS 2 1.07 0.43 0.22 0.40

7 WAS 3 1.13 0.41 0.20 0.36

8 WOS 1 1.17 0.43 0.22 0.36

8 WOS 2 0.81 0.40 0.20 0.50

8 WOS 3 0.97 0.40 0.20 0.42

8 WAS 2 1.30 0.52 0.33 0.40

8 WAS 3 1.30 0.49 0.30 0.38

9 WOS 1 0.92 0.45 0.25 0.49

9 WOS 2 0.76 0.38 0.18 0.50

9 WOS 3 0.90 0.40 0.20 0.45

9 WAS 1 1.01 0.43 0.22 0.42

9 WAS 2 0.96 0.43 0.22 0.44

9 WAS 3 0.80 0.33 0.14 0.42


11 WOS 1 N/A 0.51 0.32 N/A

11 WOS 2 N/A 0.54 0.35 N/A

11 WOS 3 1.13 0.52 0.33 0.46

11 WAS 1 0.64 0.67 0.55 1.05

11 WAS 2 0.64 0.58 0.41 0.91

11 WAS 3 N/A 0.56 0.38 N/A

Day 2

Subject   Jump Type   Jump   MT (s)   FT (s)   JH (m)   RSI

1 WOS 1 1.33 0.40 0.20 0.30

1 WOS 2 1.37 0.43 0.22 0.31

1 WOS 3 1.14 0.40 0.20 0.35

1 WAS 1 1.10 0.45 0.25 0.41

1 WAS 2 1.08 0.36 0.16 0.33

1 WAS 3 1.06 0.47 0.27 0.45

3 WOS 1 0.92 0.47 0.27 0.51

3 WOS 2 0.94 0.49 0.30 0.52

3 WOS 3 0.88 0.49 0.30 0.56

3 WAS 1 0.81 0.56 0.39 0.70

3 WAS 2 0.92 0.56 0.38 0.61

3 WAS 3 0.88 0.56 0.39 0.64

4 WOS 1 0.97 0.49 0.30 0.51


4 WOS 2 0.90 0.54 0.36 0.60

4 WOS 3 0.83 0.54 0.36 0.65

4 WAS 1 0.94 0.49 0.30 0.52

4 WAS 2 0.94 0.49 0.30 0.52

4 WAS 3 0.79 0.58 0.42 0.74

6 WOS 1 0.72 0.31 0.12 0.44

6 WOS 2 0.70 0.36 0.16 0.52

6 WOS 3 0.76 0.41 0.20 0.53

6 WAS 1 0.63 0.74 0.67 1.18

6 WAS 2 0.67 0.43 0.22 0.63

6 WAS 3 0.67 0.43 0.22 0.63

7 WOS 1 0.81 0.40 0.20 0.50

7 WOS 2 0.83 0.43 0.22 0.51

7 WOS 3 0.79 0.45 0.25 0.57

7 WAS 1 30.44 1.76 30.82 0.51

7 WAS 2 0.85 0.47 0.27 0.55

7 WAS 3 0.81 0.47 0.27 0.58

8 WOS 1 0.99 0.36 0.16 0.36

8 WOS 2 0.90 0.38 0.18 0.43

8 WOS 3 1.03 0.38 0.18 0.37

8 WAS 1 1.30 0.43 0.22 0.33

8 WAS 2 1.19 0.52 0.33 0.43

8 WAS 3 1.28 0.47 0.27 0.37


9 WOS 1 0.87 0.37 0.17 0.43

9 WOS 2 0.81 0.39 0.19 0.49

9 WOS 3 0.99 0.37 0.17 0.38

9 WAS 1 1.03 0.39 0.19 0.38

9 WAS 2 0.97 0.37 0.17 0.38

9 WAS 3 1.01 0.39 0.19 0.39

11 WOS 1 0.81 0.46 0.26 0.57

11 WOS 2 0.91 0.45 0.25 0.50

11 WOS 3 0.97 0.48 0.28 0.49

11 WAS 1 0.91 0.56 0.38 0.61

11 WAS 2 0.74 0.52 0.33 0.70

11 WAS 3 0.85 0.54 0.35 0.63
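
To illustrate how test–retest reliability indices of the kind described in the statistical analysis can be derived from these raw data, the Python sketch below computes an ICC(3,1) and a mean within-subject coefficient of variation from per-subject day means. This is a minimal numpy illustration under stated assumptions (each subject’s WOS jump heights are averaged per day; the ICC form and CV definition shown are common choices), not the study’s actual analysis script.

    import numpy as np

    def icc_3_1(ratings):
        """ICC(3,1) for an (n subjects x k days) array, from two-way ANOVA
        mean squares: (MS_rows - MS_error) / (MS_rows + (k - 1) * MS_error)."""
        n, k = ratings.shape
        grand = ratings.mean()
        ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
        ss_error = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))
        return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

    def mean_cv_percent(ratings):
        """Mean within-subject CV (%) across the two days."""
        return float((ratings.std(axis=1, ddof=1) / ratings.mean(axis=1)).mean() * 100)

    # Per-subject mean WOS jump heights (m), rounded, computed from the
    # tables above (subjects 1, 3, 4, 6, 7, 8, 9, 11):
    day1 = [0.21, 0.25, 0.32, 0.22, 0.18, 0.21, 0.21, 0.33]
    day2 = [0.21, 0.29, 0.34, 0.16, 0.22, 0.17, 0.18, 0.26]
    jh = np.column_stack([day1, day2])
    print(icc_3_1(jh), mean_cv_percent(jh))

Averaging the three trials per day before computing the ICC reflects one common practice for assessing between-day reliability of session means; computing the indices trial-by-trial is an equally valid alternative.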