EXPLORING LEAST COST PATH ANALYSIS: A CASE STUDY FROM THE GÖKSU VALLEY, TURKEY

A Thesis Submitted to the Committee of Graduate Studies in Partial Fulfillment of the Requirements for the Degree of Master of Arts in the Faculty of Arts and Science

TRENT UNIVERSITY

Peterborough, Ontario, Canada

(c) Copyright by Nayla Abu Izzeddin (2014)

Anthropology M.A. Graduate Program

September 2014

Abstract

Exploring Least Cost Path Analysis: A Case Study from the Göksu Valley, Turkey

Nayla Abu Izzeddin

Least cost path analysis is considered by many scholars to be a good proxy for studying movement and interactions between sites in the landscape. Although it is widely used, there are many limitations and challenges yet to be overcome concerning the reliability of the results. The examples used from the Göksu Valley during the late Roman Imperial rule emphasize the need to clearly understand how the tool works in generating least cost paths and how these can be interpreted and related to human movement. The resolution and accuracy of the elevation data used also play an important role in least cost path analysis, and their effects depend on the topographical area being studied.

New avenues are constantly being sought, and the success of any analysis depends on how the results are compared and tested in concert with data obtained from various sources and through more visually advanced mapping software.

Keywords: Least cost path; Göksu Valley; late Roman Imperial rule; DEM resolution and accuracy


Acknowledgments

It was that day in April 2011: I was at the airport, running to catch my flight to Beirut, my home town, for a short vacation prior to writing my Master's dissertation for the University of Birmingham. I received two emails from Dr. Hugh Elton in the very brief instant in which I had an internet connection on my phone. Those two minutes or so changed my world and made me realize that I was about to embark on a journey full of challenges and gratifications, filled with excitement and hope, a hope to grow. This would not have been possible without the support and trust Dr. Elton put in me, a trust which made this opportunity an unforgettable and unmatchable one. I am thankful and honored to have had the opportunity to spend this valuable time with him, learning and growing with every comment or remark I received. Many sincere thanks to Dr. Elton, who was an incredible advisor to me. Also many thanks to Dr. James Conolly, whom I am honored to have worked with and who always provided his help and support when needed. A special thanks to Dr. Jennifer Moore, whom I admire and who inspired me greatly on different levels.

I would also like to thank Dr. Jocelyn Williams for trusting and believing in my capabilities and for giving me the opportunity to TA with her again. Many thanks to Kristine Williams for her support and encouragement along the way and for all her help, and to Kate Dougherty for the great time spent together and for always being available for assistance. Also many big thanks to my colleagues and friends who made this experience more enjoyable and fun, especially in stressful times. Thanks to Amandah for all the time we got to spend together debating and talking about everything and anything, and for being patient with me, to Kat for the endless provision of wine, to Jessica for always pushing me to play squash with her, and to Christa, Kristin, Dan, and Jack (it was always fun to bump into you at clubs!!).

I would also like to thank all my friends who made my time in Peterborough an unforgettable one and made life so much easier, and especially all who believed in me and my abilities, admired the determination I had towards my endeavors, and always encouraged me no matter how stressed and worried I was.

Finally I would like to thank my family for all their support and especially my dad, for without him none of this would have been possible. I want to thank him for his countless encouragements (such as “You are Nayla the Great”), for believing in me and supporting my plans no matter how crazy they were.

Thank you all for all you have provided me with and big thanks to Trent University for providing such a great environment to work in.


Table of Contents

Abstract ii
Acknowledgments iii
Table of Contents v
List of Tables vii
List of Figures vii
Chapter 1: GIS, Isauria and the late Roman Imperial period 1
1. Introduction 1
1.2. Movement and GIS 3
1.2.1. GIS development 5
1.3. Isauria and the study area 8
1.3.1. General view 8
1.3.2. The late Roman Imperial rule and the valley 11
1.4. Previous Works 16
1.4.1. GAP survey project 17
1.5. Aims and Objectives 21
Chapter 2: Theory of GIS and Challenges 23
2.1. Least Cost Path Analyses 23
2.1.1. Technical aspect 24
2.1.2. Challenges and Limitations 28
2.2. Previous Works 29
2.2.1. Case studies 31
2.3. Discussion and Conclusion 33
Chapter 3: Digital Elevation Models and GIS 37
3.1. Digital Elevation Models and data capture 37
3.1.1. DEM creation: survey points 37
3.2. Remote Sensing 40
3.2.1. Collection of raw data 41
3.2.2. SRTMs 43
3.3. DEMs in ArcGIS 45
3.3.1. “Size does matter”: 47
3.4. Slope and Cost surfaces 48


3.4.1. Creating cumulative cost surfaces 54
3.4.2. Anisotropism and direction of movement 55
3.4.3. Slope in ArcGIS 59
3.5. Conclusion: 60
Chapter 4: Cost Surfaces and Least Cost Paths 62
4.1. Resolution and Accuracy 62
4.2. Least Cost Path Analysis: 64
4.2.1. Distance Analysis 65
4.2.2. Least Cost Path and neighborhoods selection 67
4.3. Conclusion 71
Chapter 5: A case study from the Göksu Valley 73
5.1. Working with the DEM 74
5.1.1. DEM values: 74
5.2. DEM cell size, resolution, resampling 79
5.2.1. Resampled DEMs: interpolation 81
5.2.2. Resampled DEMs: resolutions 86
5.3. Adding Rivers and Bridges 90
5.3.1. The Göksu River cost surface 92
5.4. Analysis 3: Dağpazarı - 97
5.4.1. Google Earth 104
5.5. Conclusion 108
Chapter 6: Interpretation and Conclusion 111
6.1. Summary 112
6.2. Results 114
6.2.1. Late Roman Isauria 115
6.3. Computers vs. Humans 117
6.4. Conclusion 121
Appendix 1: Trip report 123
Appendix 2: Least Cost Path Analysis in ArcGIS 10 133
Bibliography 140


List of Tables

Table 1. Data types definition. 27
Table 2. Raster Data. 28
Table 3. Variation in generating least cost paths 57
Table 4. Least Cost Path Analysis from Mut to Ermenek: Adjusting cell values 76
Table 5. Resampling the DEM: four interpolation options. 84
Table 6. Least cost paths from Mut to Ermenek on different resolutions. 87
Table 7. Least cost path analysis with river and bridges: Mut-Ermenek and Mut-Adrassus 95
Table 8. Least Cost Path Comparison of two different topographical areas. 98
Table 9. Least cost path generated from resampled DEMs 101
Table 10. Topographical differences between both areas. 103

List of Figures

Figure 1. The Göksu Valley Study Area 12
Figure 2. Location of sinks in the DEM 45
Figure 3. Slope calculation in degrees and percent rise 51
Figure 4. Figure extracted from Bell et al. 2002: 175 52
Figure 5. Least Cost Path Analysis from Mut to Ermenek: Adjusting cell values 79
Figure 6. Resampling the DEM: four interpolation options. 85
Figure 7. Least cost paths from Mut to Ermenek on different resolutions 89
Figure 8. Figure representing the slope effort based on Bell et al. (2002) and taken from Newhard et al. (2008) 92
Figure 9. Least cost path analysis with river and bridges: Mut-Ermenek and Mut-Adrassus 96
Figure 10. Least cost path from Dağpazarı to Karaman. 99
Figure 11. Least cost path analysis from Dağpazarı to Karaman 102
Figure 12. Least cost path analysis from Dağpazarı to Karaman 103
Figure 13. Least cost path analysis in the Göksu Valley 106
Figure 15. Google Earth, least cost path in the Göksu Valley 107
Figure 16. Google Earth, least cost path in the Göksu Valley 108
Figure 17. ArcGIS data frame and search tool engine. 133
Figure 18. Working with the DEM. 135
Figure 19. Setting the coordinates framework. 136
Figure 20. Buffer tool. 138
Figure 21. Reclassifying the slope raster. 139


Chapter 1: GIS, Isauria and the late Roman Imperial period

1. Introduction

The main objective of the research presented here is to take a critical look at GIS software and in particular at the least cost path tool provided by ArcGIS 10. Much of the constructive criticism that has been made concerning the limitations of this tool centres on the failure of the software to take into account cognitive and behavioral aspects when quantifying past human movement in a landscape. Less attention in the literature is given to the technical capabilities and limitations of GIS software; in particular, few studies to date explore in detail how the tool works, and most show little awareness that many different results can be generated for the same route being predicted. The challenge remains in choosing the path that best fits the research questions and objectives of any given study.

The case study used in this thesis illustrates technical and computational issues within GIS, using an example from the Göksu Valley to locate potential roads and to explore movement and interactions between the late Roman cities of Mut and Ermenek on the one hand and Karaman and Dağpazarı on the other. These four sites surround the Göksu Valley and exhibit different topographical layers in the landscape.

The thesis is divided into six chapters. Chapter 1 presents the aims and objectives of the thesis and introduces Geographic Information Systems (GIS), their advantages and limitations in archaeological studies. This chapter also introduces in its second part the Göksu River valley and its status during late Roman Imperial rule, from about the third century A.D. to the seventh century A.D. Moreover, the most recent survey project conducted around and through the Göksu Valley, the Göksu Archaeological Project (GAP), is discussed. The initial data used in this thesis were taken from the GAP GIS database.

Chapter 2 presents some theoretical approaches to the study of movement and interactions between sites, as well as practical approaches, in this case using the least cost path tool, to map possible routes linking the main cities together and relate them to human movement. The practical approach does have limitations, whether technical or interpretive, and cannot be separated from the theoretical approach. Any computational outcome should be tested against a relevant theoretical framework depending on the research objectives. This chapter furthermore explores previous studies that have based their research on least cost path analysis. Finally, it raises a basic set of limitations still faced by the discipline and notes a need for awareness of the disadvantages that this tool may present if not fully understood.

Chapter 3 introduces Digital Elevation Models (DEMs) and how they are processed for use in an archaeological study. The DEM used in this research to derive cost surfaces and least cost paths dictates the accuracy of any statistical analysis. In its second part, this chapter explores the slope cost surface raster used in the analysis. It presents the different steps developed over the past two decades to produce accurate slopes that also take the direction of movement into consideration.

Chapter 4 explores how ArcGIS computes least cost paths and presents the first steps prior to generating the paths; furthermore it discusses some challenges found in relating the results to human movement and the human perception of the landscape.

Chapter 5 explores the case study and provides further explanation and discussion of the cost paths generated, how they can be related to movement, and how they can be interpreted. It investigates the models created, mainly from the site of Mut to the site of Ermenek on the one hand and from Karaman to Dağpazarı on the other. These sites are separated by the Göksu River valley and mountains, and thus this area presents an interesting location in which to study movement. Many results were obtained using simple modifications performed on the Digital Elevation Models (DEMs) used for this study. The cost paths generated revealed interesting results that are critical to be aware of when using the least cost path tool provided by ArcGIS.

The final chapter, Chapter 6, then presents some conclusions and recommendations concerning least cost path analysis and its ramifications for the study of archaeology.

1.2. Movement and GIS

The study of movement and travel between major and local sites is most revealing in a region where movement is known to be difficult or challenging. Although some landscapes are flat, such as deserts or grasslands, most others are characterized by vegetation cover, steep slopes, mountains and valleys, each of which presents a different set of conditions that exert different costs on movement. Thus, it is crucial to take note of all features and key variables that would impede or slow movement when performing least cost path analysis. The objective here is to test a method widely used for quantifying the cost required by an individual to travel across a rough and diverse landscape, and to explore its consequences for the varying aspects that shaped past societies. One way to measure this cost is according to the time spent traveling or the distance traveled. The use of spatial statistics and computational methods seems appropriate for this study, for the advent of GIS and their rapid development and progress enable the researcher to model past human landscapes more easily than former methods did.

One of the most common uses of GIS in archaeology was, at the earlier stages, in visibility studies. With GIS becoming widely popular, visibility analysis was considered an easy way to perform spatial statistical analysis: it produces binary maps coded with 1 for areas that are visible and 0 for areas that are not visible from a certain fixed location. This process is not difficult; it is simple and straightforward.

These visibility studies use GIS and statistical methods to measure or calculate which parts of the landscape can be viewed from a specific location, thus emphasizing the placements and locations of sites. Less common are studies that strive to map movement and travel across a landscape, which could also reveal intriguing insights into the role of a site within a wide network of sites; however, they are increasing in number over the years. Meghan Howey (2007), in her study using multi-criteria cost surface analysis, states that

movement through space is an intrinsic aspect of life and society. Even in settings where movement is constrained by available travel modes or by unfavorable travel conditions, people still travel considerable distance. For archaeologists, analysis of movement has the potential, then, to reveal important information on numerous aspects of past life (Howey 2007: 1830).

The outcomes of computational methods, however, need to be tested against known factors that shape human behavior in reality; that is, a researcher cannot depend only on the results of spatial analyses, but has to relate them to published literature, documentation and, most importantly, ground-truthing and surveys. It is therefore in one way a challenge to quantify accurately the factors and features of the landscape and, in another way, an even greater challenge to quantify and measure relative efforts and decisions concerning movement and travel; it thus remains a challenge to find a way to relate the results generated by the computer to real-life settings and to test how meaningful they can be for the study of movement.

This thesis explores and investigates Least Cost Path Analysis (LCPA) and takes a critical look at the paths generated and what they mean in terms of human movement and human perception of the landscape. The late Roman period and the Göksu Valley of Isauria were selected to define the boundaries of the area and the location of start and destination points, which represent the important cities that marked that region and epoch. This research thus sets a conceptual and a computational framework for future studies exploring, for example, travel and communication routes in and around the Göksu Valley.

The next part of this chapter is divided into four main parts. The first introduces GIS and their development. The second part introduces the geographic location of the Göksu Valley and, more specifically, the sites of interest to the analysis. It also gives a brief overview of the impact of the late Roman Imperial rule and the changes it brought to the region of study from the third to the seventh centuries A.D. The third part of this chapter explores some previous work done in the study area, mainly works and surveys done by the Göksu Archaeological Project (GAP), and presents the more recent results that are relevant to the study of movement. Finally, the last part states the aims and objectives of this thesis.

1.2.1. GIS development

It is crucial to mention first that GIS studies are increasing in number and popularity, which is not surprising considering that this discipline is developing at a fast rate: during the past twenty years GIS studies and technology have developed tremendously. GIS first had their roots embedded in the study of geography during the mid-twentieth century. Digitizing cartographic maps for the management of cultural resources has proven to be a much quicker and visually enhanced process. As Conolly and Lake (2006) state, “these early systems relied on point, line and polygon ‘geographic primitives’, which still form the building blocks of modern vector-based GIS” (Conolly and Lake 2006: 24). The process of rebuilding reality or representing real landscapes then produces what are referred to as ‘data models’. They represent very simplified versions of reality; nevertheless, as simple as they are, they “may become the building blocks for more complex models that are designed to quantify relationships between different entities” (Conolly and Lake 2006: 24).

At the beginning of the 1990s, GIS studies began to materialize and take shape, accompanied by the creation of new algorithms and tools used to perform research in many disciplines. It was considered a specialized domain at first, where only GIS specialists, statistics researchers and skilled mathematicians were able to produce the complex functions and formulae needed to quantify elements in the landscape for spatial studies.

Nowadays, the software has been simplified so that any user can conduct spatial analyses. This was made possible by introducing more sophisticated algorithms and functions, which are, however, not always easily understood. What lies behind the creation of visually enhanced maps is more complicated than it seems. Therefore, any user should know at least the basics behind each function that he/she uses when working with GIS for various studies. In 2006, Conolly and Lake provided the most influential manual for the use of GIS in archaeology, still highly regarded today. The purpose of this manual is to explain how to introduce GIS into archaeological studies. It provides a detailed description and explanation of GIS as software and as a methodology that could have an impact on the theoretical approaches relating to landscape archaeological studies and thus could greatly aid the discipline of archaeology. Conolly and Lake explore the different capabilities of GIS, their advantages and weaknesses, as well as the first principles of the application of GIS in archaeology.

They furthermore clarify in detail the types of data and files GIS can support and use, and how to use these data for querying and several other spatial analyses. They delve into more detail when explaining how spatial analyses are performed by presenting several examples of the use of these tools in archaeology, for example for visibility studies, catchment analyses, hydrology and path analyses. As these authors suggest, understanding the abilities of GIS and spatial analysis is only possible through exploring its functions and its basic tasks.

“While each of these tasks are important in themselves, above all GIS should be considered as both an integrated and as an integrating technology that provides a suite of tools that help people interact with and understand spatial information” (Conolly and Lake 2006:11).

GIS are considered software used to perform spatial statistical functions in any region or area of interest, provided, of course, that the relevant digital data required for the different types of analyses are available. Kenneth Kvamme (1991) stated that “one of the great potential advantages of GIS is that they facilitate the exploration and creation of new data types” (Kvamme 1991: 131). It is not surprising then, given the abilities GIS have for digitizing large landscapes at different scales and for storing large, standardized data files in a structured database, that the process of generating, manipulating and creating new data has become less time consuming and easier to handle. The data required depend to a great extent on the purposes or objectives of the analyst or archaeologist and the nature of the questions that he/she intends to answer with the tools and functions provided within any GIS software.

The following part of this chapter introduces the Göksu Valley in Isauria and its main characteristics during the late Roman Imperial period. It is evident that movement was crucial for the inhabitants of this rugged area and perhaps a lot of planning was required beforehand.

1.3. Isauria and the study area

This part is divided into two broad sections; the first covers some general information about the study area and the second delves in more detail into the archaeological remains found in the area. It also considers remains scattered around the vast landscape, dated to the late Roman Imperial period and relevant to the study of movement.

1.3.1. General view

To start, it is necessary to put forward some information about the importance of the Göksu Valley as a region that connects many societies and sites together on a larger regional scale. Anatolia is often considered a major crossroads linking Asia and Europe together. To narrow down the geographic extent, the area of study here is located in one part of south-central Anatolia, a region known during the late Roman period as Isauria. This region stretches from the edge of the Konya plain down to the Mediterranean Sea and covers a wide range of terrain. The usually agreed-upon limit of Isauria to the east is the river Lamus, while the region spreads west as far as the Melas River. In 63 B.C., with the conquest of the area by Pompey, the region of Isauria was part of the Roman province of Cilicia. From this date until A.D. 72, the province was divided into two separate regions: Rough Cilicia, or Isauria, as the western part, and Lowland Cilicia as the eastern part (Elton 2002: 174-175). The Cilician Gates (Gülek Boğazı), which fall geographically on the north-eastern side of Cilicia, were thought by many nineteenth-century scholars to be one of the major access points into Anatolia, “one of the communication routes between the central Anatolian plateau and Syria and Mesopotamia” (Newhard et al. 2008: 88; French 1965; Postgate 1998).

Isauria, or Rough Cilicia, was considered a pass linking the inland Anatolian plateau with the Mediterranean coast. The Taurus Mountains surround the Göksu Valley, the main valley of Isauria, on three of its sides, rather like a rectangular box open on one side only. This landscape is characterized by mountainous terrain with trees and forested areas everywhere, and thus the Göksu Valley would seem an easier way to move through these mountains (Bikoulis 2012: 36). The Sertavul pass (Sertavul Beli), at the northern side of the valley, has been considered by some scholars, such as D. French and J. Mellaart, as the easiest way to access the Göksu Valley and continue down to the Mediterranean coast (Bikoulis 2012: 36). This pass is located in the central Taurus Mountains, 1600 m above sea level, on the main road leading from Karaman down south to Mut; this main road was not constructed on top of the ancient road, but the latter must have passed somewhere nearby.

French notes that the Göksu Valley was thus ‘the easiest means of access from the coast to the plateau, and conversely, the easiest descent from the plateau to the coast’ (French 1965: 177; see also Postgate 1998; Mitford 1980; Bikoulis 2012). He also adds that the environmental factors and settings change drastically and sometimes abruptly between the cities on the plateau and those on the coast: temperature drops by about 0.7 degrees Celsius for every 100 m of ascent into the mountains. This contrast created by the ecological environment helps one to understand the relationship of the valley to the plateau and the coast (French 1965: 177; Elton 2001 & 2002; Postgate 1998; Gough 1974; Newhard 2008; Bikoulis 2012). However, even though the Göksu Valley might offer the easiest means of movement in a rough landscape, it still cannot be affirmed that it was indeed used by different groups of people during late antiquity. Movement in this area, especially with the Göksu River as a major barrier to cross, is therefore of particular interest.

The main roads that lead in and out of the valley from most directions are as follows: from the north, through the Taurus Mountains and the Sertavul pass; from the north-eastern side, through the Dağpazarı plateau; and from the western side, through the road leading to and from Ermenek. To the south of the valley, following the road from the coastal city of Silifke into the valley, “the Göksu forms a gorge” to the south of Mut (Newhard et al. 2008: 88). Moreover, to the north of Mut another gorge, its eastern side bordering the Dağpazarı plateau, forms the Çoğla Canyon, which is impassable (Newhard et al. 2008: 88). Another option to enter the valley from the west is to drive through the site of Adrassus, which is located in the upper part of the Göksu Valley.

These above-mentioned routes that provide access to the valley can be used for comparison with the least cost path results; however, they represent modern roads and only a few were constructed on top of older Roman road remains. The next part gives a brief overview of the late Roman period and its repercussions for the study area.

1.3.2. The late Roman Imperial rule and the valley

The history of the late Roman Empire in Anatolia, and more specifically in south-central Anatolia, has always been considered multifaceted and complex on many levels. With the start of the late Roman Imperial period, c. third century A.D., many divisions, annexations and separations of provinces and regions took place. Within the scope of this research, only a brief overview is given of the geographical and political changes in Isauria that occurred prior to the Imperial rule and throughout the mid third to the seventh centuries A.D.: Isauria became a separate province in the mid third century A.D.


Figure 1. The Göksu Valley Study Area


1.3.2.1 The late Roman Imperial rule and Isauria

There has always been confusion concerning the geographic location of Isauria, its boundaries and the name used to refer to it. The names “Isauria” and “Rough Cilicia”, in the context of the research presented here, are used interchangeably to refer to the region stretching from the Turkish Mediterranean coast to the southern Konya plain, and from the modern town of Silifke in the south east to the city of Side (approximately). As its modern Turkish name indicates, “Rough Cilicia” is an area known for its rocky mountains, “Taşeli” meaning the “Land of Rocks” (Varinlioğlu 2007: 290). With a mountainous and rugged terrain, this area was not as prosperous as the region of eastern Cilicia, recognized for its fertile lands and located more strategically in proximity to Syria and the Mediterranean coast. Prior to A.D. 72, eastern Cilicia, also referred to as “lowland Cilicia” or the “Cilician plain”, was part of the Roman province of Syria with a single governor ruling over it; about a decade later, it was joined with Rough Cilicia during the rule of Vespasian (A.D. 69-79) (Elton 2002: 175).

Rough Cilicia was overshadowed by other neighboring regions that were, in contrast, ruled directly by the Roman Empire. Appointed Roman officials would rule and control the territory and protect it from possible threats caused, for example, by the presence of bandits hiding in remote, hard-to-access areas of the rocky mountains of Isauria. There were, thus, different types of banditry in the area: amateur, professional and ‘barbarian-bandits’ (Elton GAP report 7: 1-2). The province of Pamphylia bordered Rough Cilicia to the west, while Lycaonia and another province bordered it to the north (Elton 2002: 176).


According to H. Elton, the lack of mention of Rough Cilicia in official documents emphasizes the fact that it was not as prosperous as its surrounding provinces and “implies a Roman concept of a region weaker than… Pamphylia” (Elton 2007: 29). The Romans did acknowledge the presence of Cilicia, however, only on a geographical level; that is, “there was no state, kingdom, or political unity that might easily be defined” (Elton 2007: 29) for Cilicia.

In A.D. 72, eastern Cilicia and “Rough Cilicia” were joined together and had a governor usually staying at Tarsus, in the modern province of Mersin, and not in Syria (at Antioch, more precisely). This official ruled over the entire region (Elton 2007: 30). During the third century A.D. Isauria was separated to form its own province (GAP report 1: 2).


1.3.2.2 Selected sites

The three main and largest cities in Isauria were known to have existed after the creation of a single province of Cilicia in A.D. 72 (Elton 2002: 176): Seleucia (modern-day Silifke), Claudiopolis (modern-day Mut), and Germanicopolis (modern-day Ermenek). Mut is located on the road leading from the inland Anatolian plateau down to the Mediterranean coast where the city of Silifke lies. It is situated at the bottom of the Göksu Valley and is considered a focal point for the analysis. Ermenek lies to the west of Mut, in a different topographical region of the Göksu Valley, in a mountainous area, and is considered the second focal point for the study. These two sites, as well as those of Karaman and Dağpazarı, each located on different sides of the valley and in diverse topographical regions, were chosen as the main sites for the study conducted here for a few reasons. One reason is their geographic location on either side of the Göksu Valley: Mut is located in the south-east, at the bottom of the valley, while Ermenek lies to the west, on the mountain top of the valley, overlooking the Göksu River from an elevation of about 1200 meters above sea level. Karaman lies to its north, while the Dağpazarı plateau is located to its west. These were the largest known cities in relation to this period and area. There were other smaller cities and villages found in and around the Göksu Valley; however, they were small and poor, a situation that seems to have been true for most cities in Isauria. The Taurus Mountains chain and the roughness of the terrain made agricultural activities minimal or absent except in some parts of the Upper Göksu Valley.

The overall situation of Isauria can be stated as follows: sites located in this region vary in importance according to their geographic siting. Coastal cities attested from the first century B.C. onwards were more developed than the smaller villages typical of the countryside of Isauria. However, the cities and villages of Isauria remained altogether less impressive than those of the provinces and regions surrounding it. The coarse and rugged terrain and the limited communication routes characterize the region of Isauria. It is therefore valuable to look at how the inhabitants of Isauria traveled and moved in and out of the valley and how the region “was still connected to the larger rhythms of the Mediterranean” (GAP report 2: 3), even though it was considered a “backwater” (GAP 6: 3; see also Varinlioğlu (2007), Newhard (2008), Bikoulis (2009)).

Archaeological remains from the late Roman Imperial period are much better preserved and recorded in the literature than those of prehistoric times; the Göksu Valley had been inhabited since the Neolithic period (Bikoulis 2012: 37). The late Roman period brought about major changes in Anatolia as a whole and especially in the Göksu Valley. The construction and/or restoration of bridges to facilitate movement is well attested, especially given the rivers and waterways found in the area, such as the Göksu River, known in antiquity as the Calycadnus River. Furthermore, the development of road networks and the restoration of roads are well attested in the record, and the presence of rock-cut tombs and sarcophagi along most roadsides is noticeable (GAP report 2: 1-3). Also, with the spread of Christianity in the entire region of Anatolia, the presence of churches and cultic places is well recognized (GAP report 2: 23).

1.3.2.3. The Valley

The valley covers an area of approximately 300 km², where many villages remain inhabited nowadays, sometimes built over the ruins of ancient Roman cities. Mut is one of the ancient Roman cities located in the south-eastern part of the Göksu Valley and is one of the five main late Roman cities considered in this research, together with Ermenek and, later on, Adrassus, Karaman and Dağpazarı. These specific sites were chosen for their location in diverse topographical and geographical contexts, as well as for their position surrounding the valley on all sides, meaning that in order to move between them, the Göksu Valley would most probably have to be crossed.

1.4. Previous Works

This section presents some previous research conducted in the Göksu Valley throughout the last decade. The Göksu Archaeological Project (GAP) represents the latest survey work that took place there, more specifically in the Upper Göksu Valley.

1.4.1. GAP survey project

The Göksu Archaeological Project (GAP) was initiated in 2002 by H. Elton, then Director of the British Institute of Archaeology at Ankara. Prior to 2002, some field research, mainly ground-truthing, travelers’ accounts and surveying, had been published throughout the 1950s and 1960s (Bikoulis 2012: 42). The outcomes of these were of great importance, revealing a good deal of previously unknown information; however, documentation and records remained restricted due to the technological limitations characterizing the 1950s and 1960s. More work in the valley has been carried out in the last few decades.

The GAP had numerous objectives in mind, with settlement patterns and surveying as the main focus. Complementing the work of Michael Gough in the Upper Göksu Valley was another objective. Gough had focused on the late Roman period and the churches at Alahan, Mahras Dağ, Dağpazarı and other sites; however, his published work remains unfinished due to his premature death. Another aim of the GAP was to document and salvage archaeological remains because of the planned construction of a dam at Derinçay, near Mut, which would cover the valley up to 300 m and consequently flood all sites present in the lower reaches of the valley bottom (GAP report 6: 1).

The results of the GAP, in addition to other documented excavations, reports and historical accounts, confirmed that in most cases numerous ancient late Roman sites and cities lie underneath modern towns and villages. Some of these ancient cities were unearthed; however, others remain unknown, and a few have no modern settlement burying them. The modern cities of Mut (ancient Claudiopolis) and Ermenek (ancient Germanicopolis), the ancient cities of Sinobuç and Adrassus (with no modern settlements above them) and the village of Dağpazarı all attest remains of fortification walls and churches (GAP report) dated mostly to the end of the fifth and sixth centuries A.D. Fortification walls were often used during the survey to delimit possible roads and routes around the landscape. The routes are attested by ancient and modern bridges and were meant to facilitate communication and travel between cities.

The next section looks at the Göksu Valley from a more detailed perspective, focusing on archaeological remains belonging to the late Roman period, mainly bridges and roads; however, it is good to note that roads and bridges might have been constructed over older remains or just restored. These are all related to the concept of movement, as bridges, for example, could reveal a great deal of information about the routes taken by past travelers during the late Imperial Roman rule.

1.4.1.1. GAP results

The following part presents some results generated by the GAP team as well as some observations made through ground-truthing in the valley.

1.4.1.1.1. GAP: Roads and Bridges

The presence of different rivers and streams that flow into the Göksu River is attested in many places throughout the study area. Bridges are a fundamental factor affecting movement; ancient societies built these bridges for a reason, and their location must have been carefully planned and thought out. A few kilometers north of Mut is the Yapıntı Bridge, which belongs to the Islamic period but incorporates earlier remains dated to the previous Roman period. The river flowing below it is the Pirinc Suyyu, which dries out in the summer and reaches higher levels during spring from all the snowmelt that pours into it. It is the only river on this side of the valley, the eastern side, which passes through the Dağpazarı (DP) Gorge and runs into the Göksu River. The ancient road where the Yapıntı Bridge is found, if extended, can reach up to Mavga Kalesi and then the Dağpazarı plateau, and continues north to reach the main road leading to Karaman (GAP report 2: 1). “In 1190, Frederick Barbarossa marched this way” (GAP report 2: 1).

It is important to keep in mind here that the Dağpazarı Gorge is not crossable in reality, as it is made of rocky, steep-sided escarpments and has the Pirinc Suyyu River passing through it at its bottom. Another access to the valley from the north, also from the road that leads to Karaman, was the Sertavul pass, which falls on the modern road leading from Karaman to Mut; thus Karaman is also an important, ideally located site. Also, many known, recently explored and excavated sites found throughout the Göksu Valley are of great value to the study, as they could have been suitable stopping stations for travelers: for instance, “Alahan itself was only a fountain on the Mut-Karaman road in the first part of the twentieth century” (GAP report 6: 2). Alternatively, another way to have accessed the Göksu Valley from Karaman and continued to the site of Ermenek is located at the site of Bucakkışla, where a road at Akın led down to the Göksu Valley (GAP report 2: 2).

When continuing from Mut, at the village of Alahan, down to the bottom of the valley where the village of Köprübaşı is located right on the Göksu River, two sites, Geçimli and Karacaağaç-Burun, are situated with remains of ancient wine and olive presses. The road down to the valley is steep, with sharp turns cut into its walls. Some areas exhibit major landslides, and therefore movement there would probably have been avoided due to the steepness of the terrain. The mountain of Mahras Dağ is considered the boundary on the western side of the Göksu Valley. Many archaeological features and remains dating to the Roman period and later are attested, scattered along the way. Roman tombs are detected on several occasions, carved into the rocky sides of the valley and constructed in the landscape near ancient roads. There is also a visible ancient path, parallel to the modern road, that once led to a bridge constructed over the Göksu, which has since collapsed.

One bridge is attested at the village of Köprübaşı, built to make the Göksu River crossable. The older foundation of this bridge probably dates back to the Roman period (GAP Report 2: 2). The later additions are the work of the Seljuks (12th-13th c. A.D.) and Karamanids (14th-15th c. A.D.). Gravel terraces are found along both sides of the river, which runs north-south, passing through the Taurus Mountains and heading towards the Mediterranean Sea. The ancient bridge was built to accommodate carts as well and is very well preserved to date. A reused Roman funerary inscription is found on the bridge.

Another bridge found in the area is located on the road that links the Göksu Valley with the Dağpazarı plateau to the north-east. This bridge, possibly Roman and about two meters wide, is well preserved. It sits at a critical location where the road splits in two: one way leading back to the Göksu Valley and another leading to the Dağpazarı plateau. Furthermore, there is a bridge at the city of Silifke dating back to the Roman period; only the modern bridge, built on top of the older one at a higher level, is visible today and allows people to cross over the Göksu River. Moreover, the Alaköprü Bridge is situated on the road leading from the coast to the inland city of Ermenek and is dated to the 14th century A.D. On the same road, through the village of Yerköprü, a Roman bridge with a Greek inscription lies completely hidden beneath the modern road (Bean and Mitford 1970: 219). The presence of many bridges around the Göksu Valley is central to the study of movement, especially as these could give good insights into ancient roads and movement.

So far, a brief introduction on the advancement of GIS and the topography of the study area during the late Roman period has been given. The next part presents briefly the aims and objectives of this thesis.

1.5. Aims and Objectives

GIS and their analytic and spatial tools have been well tested over the past few decades and more significantly in the past few years. Least cost path analysis, more specifically, has proven to be a valid tool for generating conceptual paths and models of past human movement. Although many limitations exist within the proposed method, the tool, if used carefully and if its algorithms are well understood, can reveal a great deal of information that is otherwise unattainable with other methods. Thus, this research builds upon previous work carried out in the last decade and presents an opportunity to test the usefulness of GIS and least cost path analysis on a regional level.

The aim of this thesis is to test the analytical abilities of the least cost path tool in ArcGIS in a region with diverse topographical landscapes, representing an area where movement is rather difficult. The study of movement has always interested researchers, and new tools and algorithms are constantly being developed to provide more “realistic” visualizations and understanding of travel and interactions. A challenge remains in interpreting least cost paths and relating them to human movement. The concepts of energy and time spent traveling remain the key factors when converting or relating the paths generated to physical movement and interpreting them. As will be discussed later in this study, the results generated vary greatly according to time or to the distance crossed, depending on the topography of the area. Moreover, DEM resolution and accuracy play an important role in the analysis, for they determine the values used by GIS to generate the least cost paths.

The research and methods explored here can be applied to different geographical regions and time periods; for the framework set here, the late Roman period and the Göksu Valley are investigated. Thus, exploring and critically analyzing least cost path analysis and how it relates to actual human movement is the objective of this thesis. Different models are built using ArcGIS, and the importance of the resolution and cell size of the Digital Elevation Model (DEM) used to locate potential roads is explored in subsequent chapters. It should be kept in mind that there is no single optimal path and that the results generated are selected by the software and might not represent reality accurately. Thus one advantage of GIS is their ability to generate many different models and to provide a controllable environment to work with.
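Because DEM resolution and cell size are manipulated in the models discussed in Chapter 5, a brief illustration may help here. The sketch below shows how a DEM could be resampled to several coarser resolutions with ArcGIS's Resample tool in Python (arcpy), which offers four resampling types (nearest, bilinear, cubic and majority). The workspace path, file names and cell sizes are hypothetical placeholders, and this is only a minimal sketch of the kind of preprocessing implied, not the exact procedure followed in this thesis.

```python
# Minimal sketch (assumed file names and cell sizes): resampling a DEM to
# coarser resolutions before deriving slope rasters and least cost paths.
import arcpy

arcpy.env.workspace = r"C:\GoksuLCP\data"   # hypothetical workspace
arcpy.env.overwriteOutput = True

dem = "goksu_dem_90m.tif"                   # hypothetical source DEM

# Resample (Data Management) accepts NEAREST, BILINEAR, CUBIC or MAJORITY;
# for continuous elevation data, BILINEAR or CUBIC are the usual choices.
for cell_size in (180, 270, 500):           # illustrative target resolutions, in metres
    out_name = "goksu_dem_{0}m.tif".format(cell_size)
    arcpy.Resample_management(dem, out_name, "{0} {0}".format(cell_size), "BILINEAR")
    print(out_name + " written")
```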

The next chapter introduces the methodological and theoretical approaches that frame the study of least cost path analyses.


Chapter 2: Theory of GIS and Challenges

The current chapter defines the methodology used as the basis of the thesis research. It is divided into three parts: the first part introduces least cost path analysis, which is the main analysis investigated throughout this research. The second part introduces some theoretical approaches to the use of GIS in archaeology and presents some previous work done in this field, while the third and final part presents a discussion of some issues faced when interpreting the results generated by the software.

2.1. Least Cost Path Analyses

Researchers and archaeologists use GIS functions to explore many different topics and to answer the research questions they set. Therefore, the analyst should ensure the accuracy of his/her data, because results that are not verified and validated could be misleading and inaccurate.

Least cost path analysis finds the route with the lowest associated cost linking an archaeological site or feature in the landscape with another point. Cost can be interpreted in different ways, such as travel time or travel distance, or by including, for example, the metabolic rate of a moving individual to calculate effort. The analysis is based on a cost surface derived from a raster grid, a DEM in most cases. In the thesis presented here, the cost raster used was the slope raster, which measures the difficulty of movement from cell to cell. Slope algorithms vary from software to software, and some packages offer more than one option; ArcGIS, however, offers only one option for calculating the slope of a DEM (Rodriguez et al. 2010: 78): the “neighborhood method”.
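To make the neighborhood method more concrete, the sketch below computes slope in degrees for the centre cell of a 3 x 3 window using the averaged finite-difference scheme commonly described for GIS slope tools. It is an illustration of the general approach rather than a claim about ArcGIS's exact internal code, and the elevation values and cell size are invented for the example.

```python
# Illustrative slope calculation for the centre of a 3x3 neighborhood
# (elevations and cell size are invented; this mirrors the averaged
# finite-difference scheme commonly used by GIS slope tools).
import numpy as np

# 3x3 block of elevations (metres), labelled a..i row by row:
#   a b c
#   d e f
#   g h i
z = np.array([[1250.0, 1246.0, 1241.0],
              [1254.0, 1249.0, 1243.0],
              [1258.0, 1252.0, 1246.0]])
cell = 90.0  # cell size in metres (e.g. a 90 m DEM)

a, b, c = z[0]
d, e, f = z[1]
g, h, i = z[2]

# Rate of change in x and y, weighting the orthogonal neighbours double.
dz_dx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8 * cell)
dz_dy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8 * cell)

slope_rad = np.arctan(np.sqrt(dz_dx ** 2 + dz_dy ** 2))
print("slope: {:.2f} degrees".format(np.degrees(slope_rad)))
```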


There exist different sources of variation in generating least cost paths, and these will produce varying paths depending on which variables are selected to be manipulated and which remain constant. In this thesis, the main variable manipulated is the DEM resolution. However, other sources of variation exist, such as 1) the DEM accuracy, 2) the slope algorithm, 3) the cost surface algorithm, and 4) the least cost path algorithm. These can all be manipulated individually or simultaneously and will generate different routes accordingly. The slope and cost surface algorithms usually dictate the cost unit to be taken into consideration, that is, cost measured in time or in energy units. Examples of these are discussed in Table 3 in Chapter 3.

2.1.1. Technical aspect

Least cost paths are derived from an accumulated cost surface map representing the cost needed to cross each individual cell in a given region over a certain distance. In the first analysis presented in the case study below, the accumulated cost surface is derived from the unmodified slope cost surface, which in turn is derived from the DEM itself.1 The least cost paths are therefore only as accurate as the DEM, or the digital file used to derive them. As Newhard and co-researchers (2008) mention, cost surfaces such as slope are termed ‘primary cost surfaces’ and “are the weighted surfaces that reflect the ease by which movement between cells is possible” (Newhard et al. 2008: 92).

In their study, for creating least cost paths they took into consideration the topographical cost surfaces ‘slope’ and ‘generated relief’, as well as geological and environmental features that could impede or halt movement, primarily the Göksu River. These cost surfaces were then “combined into a weighted composite that became the final cost surface” (Newhard et al. 2008: 92). The analysis explored below in Chapter 5, which includes the Göksu River as a barrier, is similar in methodology, to an extent, to the latter.

1 Another option is to generate a cost surface from a vegetation map, for example.

With the creation of DEMs and digital data, the generation of least cost paths proceeds first through “combining numerical values tied to spatial data (such as elevation and slope angle) in a weighted fashion, [and thus,] the surface of an area of interest can be represented with values indicating the expected effort required to traverse the area” (Newhard et al. 2008: 91). Datasets representing features or factors that impede movement are created and different values are assigned to them, each according to how much of a barrier to movement it is. For example, landform types such as slopes, and water features such as rivers, could each be assigned a value and then combined to form a final cost surface, which is then used to generate least cost paths. The models created can be modified according to the topic of research and are weighted accordingly (Newhard et al. 2008: 91). Least cost path analysis functions and steps are explored in detail in the next paragraphs.
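As an illustration of how such a weighted composite might be assembled, the sketch below combines a slope-based cost raster with a river-barrier raster using map algebra in arcpy. The layer names, barrier values and weights are hypothetical; they are not the weights used by Newhard et al. (2008) or in the analysis of Chapter 5.

```python
# Hypothetical weighted composite cost surface: slope cost plus a river
# barrier, combined with invented weights (a sketch, not the thesis method).
import arcpy
from arcpy.sa import Raster, Slope, Con, IsNull

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GoksuLCP\data"    # hypothetical workspace

dem = Raster("goksu_dem_90m.tif")            # hypothetical DEM
rivers = Raster("goksu_rivers.tif")          # hypothetical rasterized rivers (NoData off-river)

slope_deg = Slope(dem, "DEGREE")

# Cells lying on the river get a very high cost (a near-barrier); all others get 1.
river_cost = Con(IsNull(rivers), 1, 1000)

# Invented weights: slope dominates, the river layer adds a strong penalty.
composite = 0.7 * slope_deg + 0.3 * river_cost
composite.save("cost_composite.tif")
```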

Bevan lists clearly and concisely all basic steps required to generate least cost paths from a cost surface. He states that the study of movement using GIS and more precisely least cost paths “are by now very well established…” (Bevan 2011: 5). He thus presents the methodological stages as follows:

1) To define a set of costs for each cell in a raster map,
2) To create a ‘cost surface’ by accumulating these costs out from a fixed point of departure A, and
3) If required, to trace a route from another point B back to the departure point A and thereby define a ‘least cost path’ between them.

There are different kinds of cost surface (Bevan 2011: 5).


Therefore, at its initial or simplest stage, generating least cost paths requires 1) a DEM and 2) a file containing site locations with their geographic coordinates. Tables 1 and 2 below present the data types used and give a brief definition of each. A problem arises here with the simple “push-button” functions that GIS software makes available to any user wanting to perform analyses for a given area. Many users are not fully aware of the potential and limitations of GIS, and especially of the data used in an analysis; if a user does not fully understand the purpose of each function provided by the GIS software, misinterpretations and doubtful outcomes may result. One of the objectives of this thesis is to highlight the underlying inaccuracies of the “push-button” functions and to demonstrate why these are problematic.
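Bevan's three stages map fairly directly onto the Spatial Analyst tools in ArcGIS 10 on which this thesis relies. The sketch below is a minimal, hypothetical arcpy version of that workflow (derive a cost raster, accumulate cost outward from site A, trace the path back from site B). The workspace, file and shapefile names are placeholders, and the reclassification that the thesis applies to the slope raster (see Appendix 2) is omitted here for brevity; this is a sketch of the general procedure, not the thesis's exact steps.

```python
# Minimal least cost path workflow (hypothetical names; a sketch of the
# general ArcGIS procedure, not the exact steps used in the thesis).
import arcpy
from arcpy.sa import Raster, Slope, CostDistance, CostPath

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GoksuLCP\data"    # hypothetical workspace

dem = Raster("goksu_dem_90m.tif")            # hypothetical DEM
source = "site_mut.shp"                      # hypothetical departure point A
destination = "site_ermenek.shp"             # hypothetical destination point B

# Stage 1: a cost for every cell (here, simply slope in degrees).
cost = Slope(dem, "DEGREE")

# Stage 2: accumulate costs outward from A, keeping a backlink raster
# that records the direction back to A from every cell.
cost_dist = CostDistance(source, cost, "", "backlink.tif")

# Stage 3: trace the least cost path from B back to A along the backlink.
lcp = CostPath(destination, cost_dist, "backlink.tif", "BEST_SINGLE")
lcp.save("lcp_mut_ermenek.tif")
```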

The push-button slope function in GIS has been shown to “not well approximate the actual energetic cost to humans moving over a rugged terrain” (Rademaker et al. 2012: 38). What it does is scale “cost values according to a scale of slope values calculated from the elevation dataset” (Rademaker et al. 2012: 36). Many authors, such as Rademaker, recommend that slope be manipulated and tested to produce “more reliable results by experimenting with the scaling of the slope cost values and comparing the resulting solutions” (Rademaker et al. 2012: 38). Although slope is not a fair representation of cost, the algorithm was not manipulated in this thesis, for two reasons. First, the objective of this thesis is to examine whether manipulating DEM resolutions and cells produces varying least cost paths regardless of the slope algorithm used. It examines the methods and possibilities a GIS platform can offer to locate potential routes, which could bring new insights to the study of movement. For more comprehensive results, a recommendation would be to derive least cost paths from both energy-based and time-based cost surfaces and to test the results against each other as well as against the paths generated from unmodified slope algorithms. Nevertheless, this thesis maintains that, regardless of the algorithms used, paths should be tested against the archaeological record and textual documentation, as well as visualized in other software such as Google Earth. Second, the distance in meters from site to site is considered here rather than the energy or the time spent traveling, although slope does actually affect the length of the paths generated: “Energy-expenditure based LCPs… are much shorter than their slope-based counterparts” (Rademaker et al. 2012: 38). However, these differences in path length could be minimal (more analysis is required to prove this point) and may not affect the results greatly. The focus here remains on the underlying DEM, its values, and the potential they have to affect the generated results.
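To illustrate what "experimenting with the scaling of the slope cost values" might look like in practice, the short sketch below applies three alternative cost schemes to the same slope values: the raw slope, a squared (more steepness-averse) scaling, and a walking-time cost based on Tobler's hiking function. These schemes are chosen only as examples and are not the ones tested in this thesis; the slope values and 90 m cell size are invented.

```python
# Three alternative ways of turning slope into per-cell cost for the same
# cells (illustrative only; not the cost schemes used in the thesis).
import numpy as np

slope_deg = np.array([0.0, 5.0, 10.0, 20.0, 30.0])   # example slope values
slope_tan = np.tan(np.radians(slope_deg))             # rise over run

linear_cost = slope_deg                    # raw slope as cost
squared_cost = slope_deg ** 2              # penalizes steep cells much more

# Tobler's hiking function gives walking speed in km/h; convert it to the
# time (hours) needed to cross a 90 m cell at that slope.
speed_kmh = 6.0 * np.exp(-3.5 * np.abs(slope_tan + 0.05))
time_cost = 0.09 / speed_kmh               # hours per 90 m cell

for s, lc, sc, tc in zip(slope_deg, linear_cost, squared_cost, time_cost):
    print("slope {:4.1f} deg  linear {:6.1f}  squared {:7.1f}  time {:.4f} h"
          .format(s, lc, sc, tc))
```

Because each scheme ranks steep cells differently relative to gentle ones, the accumulated cost surfaces, and hence the least cost paths, derived from them can diverge.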

Table 1. Data types definition.

Data type: Definition

Raster: Constructed as matrices defined by cells, each cell having the exact same size, creating a grid made of rows and columns. Each cell has a value assigned to it depending on where it falls in the landscape, representing elevation values.

Vector: Generated as points, lines, or polygon files. They are defined by pairs of geometric coordinates. For example, archaeological sites can be represented as points, while rivers and roads can be represented by lines, and lakes or forested areas by polygons.

Processing raster datasets is considered less time consuming than working with vector data, as computers nowadays are well equipped to handle and store grid data efficiently. However, raster data files are larger than vector data and require more storage space, and the processing time varies accordingly: a larger raster map means there are more cells for the computer to process and work with (Conolly and Lake 2006: 28).


Table 2. Raster Data.

Raster data: Definition

DEM: A Digital Elevation Model is composed of a grid of cells of equal size, the cell size denoting its resolution (for example, each cell equals 90 m x 90 m); each cell has an assigned value representing its elevation.

Cost surface: The cost surface, such as slope for example, calculates the cost to move from one cell to another in the DEM.

Cumulative cost surface: A combination of different cost surfaces into a final one.

Accumulated cost surface: The cumulative cost surface adds up the cost from cell to cell over a certain distance (cost distance).

The grid cells of different raster maps can be added together or combined, multiplied and so on. Different mathematical calculations and statistics can be performed with raster data.
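A small worked example may clarify how an accumulated cost surface adds up cost from cell to cell. In the raster cost-distance scheme commonly described for GIS tools, including ArcGIS's, the cost of a move between two adjacent cells is taken as the average of their per-cell costs multiplied by the distance between cell centres, with diagonal moves √2 times longer. The per-cell costs and cell size below are invented for illustration.

```python
# Worked example of per-link cost accumulation in a cost-distance scheme
# (invented per-cell costs; the 90 m cell size is illustrative).
import math

cell_size = 90.0          # metres
cost_a = 2.0              # cost per unit distance in cell A
cost_b = 5.0              # cost per unit distance in cell B (e.g. a steeper cell)

# Orthogonal (rook) move: average cost of the two cells times the link length.
orthogonal = 0.5 * (cost_a + cost_b) * cell_size

# Diagonal move: same averaged cost, but the link is sqrt(2) times longer.
diagonal = 0.5 * (cost_a + cost_b) * cell_size * math.sqrt(2)

print("orthogonal link cost:", orthogonal)          # 315.0
print("diagonal link cost:  ", round(diagonal, 1))  # 445.5
```

Summing these link costs along the cheapest chain of moves from the source outward is what produces the accumulated cost surface from which the least cost path is traced.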

2.1.2. Challenges and Limitations

There are in fact many limitations to least cost path analysis and to the use of computational models; these lie at the core of the critique made by post-processual archaeologists, who stress that human agency cannot be accurately or realistically quantified. Nevertheless, as Andrew Bevan noted,

…computational and quantitative techniques can be helpful. In particular, they can provide useful insight on patterns of movement and interaction, by better characterizing existing archaeological evidence, suggesting simple models of mobile decision-making or proposing expected patterns against which the observed record can be compared (Bevan 2011: 1).

It is worth mentioning here that the models created are a mere image of reality and should be used in comparison and in concert with other factual and documented sources. They should also be verified on the ground whenever possible, to produce a more accurate understanding of the topics investigated.


Least cost path analyses have grown more popular in the archaeological domain during the past ten years. Archaeologists and researchers have come to realize the importance that movement and travel have for the development of past cultures and the relations or networks that link them together. It is not surprising that people would have crossed long distances over a diverse and rugged terrain to get to a destination for different purposes. Bevan states that least cost path analysis provides a means to study interactions between different cultures, at different periods, while taking into consideration a near-realistic image of the geographical, environmental and topographical factors, and thus to understand the relationship between people and their surrounding landscapes (Bevan 2011: 1).

The next part of the current chapter considers in brief some theoretical approaches for the use of GIS and least cost path analyses and presents several studies of previous works done for archaeological purposes using this method.

2.2. Previous Works

The development of routes and road systems happened for the most part to facilitate movement through diverse landscapes and between sites. The varied landscapes and terrains crossed by individuals are characterized by many different topographical and geological features which can be costly to traverse because of the physical effort, or metabolic cost, a person expends while moving through them. Carballo and Pluckhahn (2007) used GIS and settlement analyses to assess the political evolution of northern Tlaxcala, Mexico, by emphasizing the existence of ‘transportation corridors’ that facilitate movement. They state that “highland people across the world negotiate the rugged terrain around them through natural transportation corridors that facilitate the movement of individuals, goods, and ideas within highland environments and between highland and lowland” (Carballo and Pluckhahn 2007: 607). They furthermore add that territorial, and thus political, expansion “were frequently shaped by the rapidity by which goods, armies, and information could move across the surrounding landscape” (Carballo and Pluckhahn 2007: 607). While considering this study, it is plausible to relate it to the Roman world and more specifically to the Late Roman Imperial period. With the addition of Constantinople as a second capital for the Roman empire, political expansion and wars, tax collection and trade were all made possible by the development of an efficient network of roads linking inland sites to coastal sites and mountainous regions and connecting the two capitals together. It is therefore essential to understand and study the movement between these sites by examining and exploring the roads and paths linking them.

The question of whether GIS is actually a resourceful and robust method for spatial analysis is still debated. The last ten years have been considered the high point of GIS studies, with many challenges met by critical reviews and suggested solutions.

However, with the improvement of the software functions and abilities, some limitations have been challenged and addressed, while other more advanced problems still arise. GIS studies and technology are not static; they are constantly changing and evolving with the years and with the establishment of new research methodologies.

The debate on how to create a model that incorporates many different variables, and on which of these are meaningful for a specific research question, is still ongoing. Nevertheless, some good previous work and research has been done using GIS and least cost path analysis, bringing new insights into this method of study along with possible solutions for mapping movement.

2.2.1. Case studies

The importance of the study of past or historic landscapes has been widely acknowledged by archaeologists and specialists over the last few decades. Phenomenologists, or followers of phenomenological approaches, agreed that landscapes are embedded with meaning and that they could be reconstructed ‘through changing human experience’ (Chapman 2000: 316). An emphasis on topography, paths, rivers and other physical or natural features of the landscape was at the core of these approaches. Their primary interest was the relationship between perception and movement across the landscape, and thus finding ways of measuring it (Chapman 2000: 316). However, these new theoretical approaches were not matched to specific methods and techniques that could free them from the bias and subjectivity inherent in personal experience of the landscape. Furthermore, they could not produce tangible, scientifically testable results that would enhance the understanding of the relationships between sites, people and the landscape.

Malcolm Wagstaff (2006) explored network analysis methodology and stated that “routes also mediate power, linking its loci with territory, its peoples and their use of land. They are essential to the exercise of control and the extraction of surplus” (Wagstaff 2006: 69). He thus related network analysis to graph theory, which “is concerned with order and contiguity, rather than distance or direction. Route networks are reduced to graphs” (Wagstaff 2006: 70). This enables the researcher to generate different indices which reflect the degree of connectivity and of centrality between regions and through time (Wagstaff 2006: 89).

Indruzewski and Barton's (2008) chapter on “Cost Surface DEM Modeling of Viking Age Seafaring in the Baltic Sea” explores and relates data from experimental archaeological tests with textual documentation. The sea was an important avenue of trade and communication in antiquity and certainly the fastest and easiest, and perhaps also the cheapest, way to move between coastal cities. However, because of the tight framework set for this research study, only inland movement is considered in the analysis. One further case study worth mentioning here is that of Newhard and co-authors (2008), who conducted least cost path analyses and explored interregional interactions in the Göksu Valley. In their article, the authors briefly present previous archaeological applications of least cost path analysis as well as their limitations and challenges. Whitley and Hicks (2003), for example, explored potential routes using least cost path analyses for local or short-distance as well as long-distance travel (Newhard et al. 2008: 92).

Other studies, such as those of Bell, Wilson and Wickham (2002) and of De Silva and Pizziolo (2001), proposed a modification to the slope cost surface map. Both studies added a computational function to the equation or algorithm used: “The resulting models therefore account for the fact that one strategy used to overcome excessive slope is to traverse across the angle of the slope, rather than with or against it” (Newhard et al. 2008: 92). Taking into account the direction that the slope faces, and thus creating anisotropic cost surfaces, could provide “a more nuanced model”. Additionally, both studies noted that the cost required to ascend or descend a slope is not linearly correlated with the angle of the slope (Newhard et al. 2008: 92). Bell and co-authors established a formula calculating the “relative cost”, or effort, required to walk uphill and downhill, the ratio of the tangent of the slope to the tangent of a 1 degree slope (tan α / tan 1°); this is discussed in more detail in Chapter 3. More recent studies, such as Bikoulis' (2012) “Revisiting Prehistoric Sites in the Göksu Valley: A GIS and Social Networks Approach”, combined the two methods of least cost path analysis and social network analysis to examine the relations and interactions between the southern Anatolian plateau and the coastal sites in the prehistoric periods.

With many new insights and research in the fields of GIS and more specifically in least cost path analysis, more modifications and improvements are being initiated. This suggests, then, that there are clearly certain advantages to the use of this computational method to model movement and travel through landscapes. This chapter introduced the methodology behind the research study presented here. It gave a brief overview of least cost path analyses. Moreover it introduced some previous work done in archaeology based on this computational method.

The next and last part of this chapter introduces in greater detail the framework set for this thesis and discusses some limitations concerning the software’s ability to generate least cost paths accurately. It introduces another aspect that reflects the limitations of the software: that of understanding clearly the tool used and how the results are being interpreted.

2.3. Discussion and Conclusion

Most criticism of least cost path analysis centres on the inability of the software used to properly represent the real world as a framework for recreating events that happened in the past. When considering decision making by individuals, it is clear that it cannot rely only on environmental and topographical features, although these are crucial for any GIS analysis. In many instances these digitized data are the only information available for a given study area, and they are increasingly made available on the web through open sources. Livingood notes that movement is mainly shaped subjectively and is “influenced by three aspects: environmental features, travel time and travel effort” (Livingood 2012: 1 ). Others noted that individuals moving “will have some prior knowledge of their environment”, which influences decision-making concerning which routes to take (Ullah et al. 2012: 15 ), and that “People familiar with the environment/landscape experience it differently from first time users” (Anderson 2012: 242).

Other issues concern how least cost path analysis represents movement: people do not actually move like chess pieces from cell to cell, and they “do not behave like water… because unlike water, we have somewhere we want to get to quickly and efficiently” (Kantner 2012: 227). Some limitations of the software are also crucial to be aware of, such as that “the optimal routes generated are not necessarily the ones used” (Branting 2012: 212) and that “… people may choose to deviate from what is viewed as the optimal path” (Branting 2012: 215). Moreover, most of the time a person moving “does not know all possible routes he/she can take”, whereas “the model[s] assumes that the person has many options and then chooses the best one” (Branting 2012: 214).

All the above-mentioned critiques of the software's weaknesses mainly concern the lack of consideration of the cognitive abilities involved in human decision making and the inaccuracy of attempts to quantify them. However, there is another important aspect related to GIS software: using the right tools for the right research study.


Understanding the GIS tool used in an analysis, and being aware of resolution and accuracy issues and the underlying problems that come with them, is important. Researchers and GIS analysts should be aware of how the least cost path tool works and of its ability to compute many different paths for the same origin and destination points, each differing according to the selected options or variables. This is crucial, for modifying one small aspect in the calculation of least cost paths, or simply accepting the default settings provided by the software, produces varied results. Some authors suggest using more than one software package together: for example, “ArcGIS could easily be complemented with other GIS, such as SEXTANTE software which offers calculation variants that are not found in ArcGIS” (Rodríguez et al. 2010: 80).

The purpose of GIS, and of ArcGIS, is not only to create visually pleasing and aesthetic maps; there is more to them than that. Being aware of the numbers, values and algorithms that compose these beautiful maps is crucial. More research should be done on this issue prior to the development of further advanced algorithms that would complement the analysis of movement with social and behavioral variables.

Kantner (2012) in his chapter “Realism, Reality and Routes” presents a thorough, detailed review of the latest studies that have used least cost paths in their analyses. The challenges remain great in 1) choosing the right spreading algorithm to generate least cost paths and 2) selecting currencies for the costs, such as energy or time, but also metabolic rate and Tobler's hiking function, as well as considering anisotropism. These concepts are explored in the next chapters. Kantner states that “the most important component of successful cost-path analysis” is selecting a “cost surface algorithm” and that “the problem is that numerous algorithms exist” (Kantner 2012: 226).


This current chapter introduced the research methodology followed in this study and some theoretical approaches to the study of movement. The next chapter explores in greater detail digital elevation models and the method used to create them and resample them to suit different purposes.


Chapter 3: Digital Elevation Models and GIS

The current chapter focuses on the technical limitations of GIS and of the data that are used to perform least cost path analysis. As mentioned earlier, the results generated will only be as accurate as the primary raster, in other words the DEM, used. Most users are unaware that the DEMs downloaded from open sources have been processed prior to their release and that this processing might have had an impact on their accuracy. This chapter therefore pinpoints and discusses how a DEM is generated and the potential errors or uncertainties that may arise when conducting a least cost path analysis. It is divided into two parts: the first introduces digital elevation models and their limitations, while the second explores the slope calculation derived from the DEM itself.

3.1. Digital Elevation Models and data capture

There are different ways and methods for a researcher to create a DEM of a certain region or landscape. These have improved drastically over the years, moving from simple cartographic, paper-based topographical maps towards newly developed digital ones, including more technologically advanced methods such as airborne laser scanning. For the purpose of this study, a brief overview of two different ways to create a DEM is given.

3.1.1. DEM creation: survey points

The basic method for creating a DEM is through points collected during field surveys. Total stations and the Global Positioning System (GPS) are two conventional technologies (Conolly and Lake 2006: 61) used to collect points for creating topographical maps and for mapping, for example, artefacts in a site. These provide raw data, or data that were “not processed or transformed since the information was first captured” (Conolly and Lake 2006: 61). Total stations are used at a survey site to collect points by shooting beams of light, such as infrared and laser light, at a target point. The total station is “able to record horizontal and vertical angles and distances from itself to a target point” (Conolly and Lake 2006: 62). GPS, on the other hand, is a “satellite navigation system” that collects points by measuring the time, and thus the distance, it takes for a signal to be sent from a satellite and captured by the GPS receiver. The internal clock built into GPS devices enables them “to produce precise locations to receivers on the Earth's surface” (Conolly and Lake 2006: 63). These range from conventional handheld GPS units to more complex receivers such as base stations. Usually three to four satellites are required to generate accurate positioning of a feature in the survey area when using a handheld GPS. An unassisted handheld GPS device is nowadays able to produce survey points with an accuracy of under 20m; a GPS supplied with correction mechanisms such as the Wide Area Augmentation System (WAAS) or the European Geostationary Navigation Overlay Service (EGNOS) provides much higher accuracy (less than 5m) (Conolly and Lake 2006: 63).

These measurements, however, are not free from errors, caused for example by atmospheric interference or distortion, and thus correction mechanisms are needed (Conolly and Lake 2006: 63). These are included in more developed GPS receivers and vary accordingly. Some handheld GPS receivers that include correction sources can provide “locations ±3m of their true location in the USA and Europe” (Conolly and Lake 2006: 62-63). Another form of GPS, termed DGPS or differential GPS, is used in survey areas that require higher “precision and accuracy”; it is “a system that provides differential correction between the estimated and actual location of the receiver…” and provides an accuracy of between “0.5 and 5m” (Conolly and Lake 2006: 63).

Some limitations, however, exist: 1) the price of a total station ranges from about $4,000 to more than $10,000, or even $30,000 for highly advanced machines, although they are becoming less expensive and more reliable; 2) the weight of the total station and its battery life, as well as the on-site handling of computers to process the data collected, can be inconvenient, although with improved technologies handling total stations has become easier; and 3) the spatial extent they can cover depends on their model and on terrain accessibility (Rick 1996). The size of their coverage is a main drawback; it is not possible to use them to survey whole regions. Nevertheless, collecting points with total stations and combining them with GPS points enables the surveyor to create accurate topographical maps. For smaller sites, using a total station can generate accurate, high resolution DEMs.

Topographical maps, and more specifically DEMs, are “approximations of continuous phenomenon” (Hageman 2000: 114). The accuracy and the resolution acquired for these DEMs depend on several factors, as noted by Hageman:

1) How many sample points were collected; 2) Where they were collected; 3) The accuracy of the data collection device; 4) The skill and knowledge of the data collector; and 5) The explicit and implicit assumptions built into the interpolation algorithm (Hageman 2000: 114).

It is enough to know, within this framework, that digitizing and processing topographical data yields a matrix composed of gridded cells, each cell assigned a value. Different statistical operations can then be performed on the DEM and on its derivatives, for example slope and other cost surfaces extracted and generated from the DEM. The values generated from the slope and cost surfaces carry different weights and classes representing real-life geological and environmental features such as elevation and slope. It is recommended to acquire or generate high resolution, high accuracy DEMs for any analysis, since these constitute the most common digital topographical data that can be used in GIS to derive least cost paths; however, this is often not possible, as high-end data remain scarce and expensive to acquire. The next part presents a more time-efficient way to create DEMs, although it too has some limitations.

3.2. Remote Sensing

Newly developed technologies such as airborne laser scanning and remote sensing are becoming widespread, and the high resolution images they produce are also used in the creation of DEMs. They record variations in the heights of the topography of a landscape (Devreux et al. 2005: 651). Usually the laser scanner, which contains a high performance GPS, is mounted on an aircraft and laser pulses are shot at the bare ground, sometimes hitting vegetation cover or building tops, which decreases the reliability of the points collected. The time it takes for the laser beam to be reflected back to the aircraft is measured and then converted into distance through a mathematical calculation, enabling the collection of ‘point clouds’ of height observations. “Processing of the points into a regular grid results in a conventional digital terrain model (DTM) or a digital elevation model (DEM)” (Devreux et al. 2005: 651). The difference between these two models is that the latter only includes elevation values while the former incorporates elevations as well as other explicit characteristics of land features, such as “breaks in slope, drainage divides”, which are coded accordingly (Wood 1996: 3).

Remote sensing involves mounting sensors on satellites and produces multi-spectral imagery, which enables the landscape and the features that form it to be measured rapidly and easily. For these digital images to be used effectively in an analysis they should be geo-referenced to a known geographic coordinate system. Some of the sources of these digital images are the satellites SPOT, IKONOS, Landsat, Radarsat, Quickbird, and more recently ASTER.2 For the purpose of this study the data available consisted of the Shuttle Radar Topography Mission (SRTM) DEM, the final outcome of a project initiated “to collect elevation differences on the Earth's surface” (Conolly and Lake 2006: 71). The latter is managed by the U.S. Geological Survey (USGS), which provides users with downloads free of charge at a resolution of 90 meters for areas outside of North America.

3.2.1. Collection of raw data

The digital elevation model (DEM) used in this analysis was downloaded from the U.S. Geological Survey online open source,3 which provides records of topographical data, more specifically elevation values, and high resolution raster maps covering the total area of the earth. The DEM was generated by the Shuttle Radar Topography Mission, which was set up to provide topographical maps and digital elevation maps on a

2 ASTER DEM, or Advanced Spaceborne Thermal Emission and Reflection Radiometer, is another, more recent remote sensing method with a mission to map and create a Digital Elevation Model of the Earth's surface. This was undertaken in collaboration between NASA and METI of Japan. It met its objectives by mapping around 99% of the Earth's surface, more than the SRTM mapped. Its major advancements were that it “provides improved spatial resolution, increased horizontal and vertical accuracy and superior water body coverage and detection” (Jet Propulsion Laboratory: asterweb.jpl.nasa.gov/gdem.asp). 3 http://srtm.csi.cgiar.org.

world-wide scale. Initially generated for military and geographical purposes, the DEMs found practical use in other disciplines such as the environmental sciences and archaeology. The SRTM flew in 2000 on the Space Shuttle and measured wavelength scatter using “dual radar antennas” with sensors that provide “good signal returns from rough surfaces” (Farr et al. 2007: 1). It mapped 80% of the total surface of the Earth at a 90% confidence level with a vertical accuracy of 16m and a horizontal accuracy of 20 meters (Farr et al. 2007: 3; Reuter et al. 2007: 986). Some areas, however, such as dense forests, might not allow the signal to reach bare ground, and thus the returned results are less accurate. Building tops, snow and water reflection can also affect the accuracy of the points collected, and cloud scatter covering the area to be recorded during the mission may have reduced the accuracy of the results. Regardless of these inaccuracies, “the goal of a radar interferometer is to measure the difference in range between two observations of a given ground point with sufficient accuracy to allow precise topographic reconstruction” and “SRTM was designed to meet a particular map accuracy specification” (Farr et al. 2007: 5). It is important to know where in the DEM these inaccuracies fall, as they will be reflected later when generating the slope raster, discussed in the second part of this chapter.

It is necessary here to define some technical terms that might confuse users or readers. One example is the various terms referring to the resolution of a DEM: a DEM with a resolution of 1 arc second is also referred to as a DEM with approximately a 30 meter resolution. Also, the term ‘pixel’ is used synonymously with ‘cell size’, for when a sensor records a value or a point in a landscape (as with the radar used to create DEMs), the digital imagery produced is composed of pixel values.


When a cell has, for example, a value of 10 in any of the derived cost maps, “it means that getting to that pixel from the target point ‘costs’ as much as traversing 10 pixels with a base value of 1 or 5 pixels with a value of 2…” (Bell et al. 2002: 176). Others note that a pixel represents the depth of a cell or the average of cell heights in a DEM (Dixon, NASA 1995). To limit this confusion, only the terms ‘cell size’ and ‘cell value’ are used here, depending on the context: cell size is defined by the resolution, for example 30 meters or 90 meters, whereas cell value represents the actual elevation, which is recorded by the sensor first and then processed into height in meters above sea level.

3.2.2. SRTMs

The SRTM collected data using radar interferometry, which is “the study of interference patterns caused by radar signals…which enables us to generate three dimensional images of the surface of the Earth” (NASA interferometry explained). The SRTM carried two radar antennas on board the spacecraft, one mounted in the payload bay and the other at the end of a 60 meter mast extending outside the spacecraft. The main onboard antenna sends signals, while both antennae receive back echoes, which are scattered radar waves reflected from the surface of the Earth (NASA interferometry explained). The echoes, or raw data, collected by the two antennas are somewhat different because their distances to the surface being measured are not equal. Thus, “using information about the distance between the two antennas and the difference in the reflected radar wave signals, accurate elevation of the Earth's surface can be calculated” (NASA interferometry explained). To be more specific, a wavelength represents a “single cycle” of a pattern which repeats over the distance travelled by the signal. The “phase of the wave” is thus obtained by dividing the total distance travelled by the wavelength, giving the total number of cycles (Treuhaft, JPL, NASA).

To find the phase, and hence the distance from the transmitter, the total number of cycles and the wavelength (in centimeters) are required. The radar sends bursts, and the returns are collected and processed through “unwrapping”. Errors can result in this process due to “errors in the baseline orientation”, which affect the “absolute elevation” of a given point (Farr et al. 2007: 5). Errors in processing the collected data can also be caused by the atmosphere, which affects the “refraction index through which the radar signal passes and thus corrupt[s] the radar phase observable” (Dixon, JPL 1995). These errors can reach hundreds of meters in elevation; they are therefore tested against ground control points with known coordinates and elevation values, as well as against kinematic GPS data collected by driving around the areas concerned for validation (Farr et al. 2007: 15).

The areas of the SRTM most affected by unwrapping errors are those located, for example, near steep cliffs, where the radar beam does not properly reach the bare ground unless it strikes at the right angle. “Regridding” or “smoothing” to a lower resolution is then performed so that the voids or unrealistic values can be “smoothed out” by averaging the values of neighboring points using the “width weighted convolution interpolation Kernel method” (Farr et al. 2007: 17). Some jumps in elevation values were noticed in the SRTM and “were filled by interpolation of surrounding elevations, [while] larger voids were left in the data” (Farr et al. 2007: 22). The first SRTM version did an accurate job of meeting the specification set for it; still, the “finished” version contained some 836,000 square kilometers of data voids4 (Reuter et al. 2007: 983). The latest version, v 4.1, the one used in this study, has had most of its no-data voids filled (Consortium for Spatial Information: SRTM).

4 According to Reuter et al., “of the 210 countries covered by the SRTM data, two countries have void areas larger than 10% of their country size, nine countries more than 5% and 14 more than 2%. In total, 44 countries have 1% or more of their area covered by voids” (Reuter et al. 2007).


The SRTM downloaded for this study does, however, still contain voids. The figure below shows the Göksu Valley and the sinks found in the DEM.

Figure 2. Location of sinks in the DEM

3.3. DEMs in ArcGIS

The SRTM collected data at 1 arc second, or 30m, resolution for the United States, while providing 3 arc second, or 90m, resolution DEMs for the rest of the world by “averaging” and “thinning”, which takes into account “one sample out of the nine available posts” (Reuter et al. 2007: 983). Thus the SRTM made readily available online was resampled to a lower resolution by the National Geospatial-Intelligence Agency (NGA) and the Jet Propulsion Laboratory (JPL); reducing the number of voids present was a secondary effect (Farr et al. 2007: 21). The DEM downloaded here shows that such flaws do indeed exist. Depending on their locations, these errors can be critical enough to affect the least cost path models.

The lowest cell value in the DEM used in this study was -21m. There are indeed places on the earth's surface that lie below sea level; however, the lowest known point in the Göksu Valley is 250m above sea level. Such values occur because “there is a normally distributed error probability around each cell's value” (James Conolly, personal communication, 2014).

There are different types of errors, such as those caused by the edge effect on the one hand or by the presence of sinks in the DEM on the other; the latter are shown in Figure 2. A certain percentage of error is generally accepted. These errors, although few and located mainly at the edge of the DEM, are carried over into the derivative maps; their location and their impact on the results of the analysis should therefore be assessed.
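As a simple illustration of the kind of void filling described above (and only an illustration: the SRTM processing chain uses more sophisticated interpolation than this), the Python sketch below fills a missing cell in a toy elevation grid with the average of its valid neighbours. The array values are invented.

import numpy as np

# Toy 4 x 4 elevation grid in metres with one void (np.nan); values invented.
dem = np.array([[250.0, 252.0, 255.0, 258.0],
                [251.0, np.nan, 256.0, 259.0],
                [253.0, 254.0, 257.0, 260.0],
                [255.0, 256.0, 258.0, 262.0]])

# Fill each void with the mean of its valid 3 x 3 neighbourhood.
filled = dem.copy()
rows, cols = dem.shape
for r, c in zip(*np.where(np.isnan(dem))):
    window = dem[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    filled[r, c] = np.nanmean(window)
print(filled)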

GIS is able to represent landscapes and structures located on the surface of the earth by georeferencing them in map layers. This process gives features and areas of interest “map coordinates” which have “a specific geographic location and extent” (ArcGIS Desktop Help) and can be related to their exact position on the surface of the earth. A geographic coordinate system assigns features longitude and latitude coordinates and “uses a three-dimensional spherical surface to define locations on the earth” (ArcGIS Resource Center 2012).

The raster map, or DEM, used for the analysis is assigned the same coordinate system as the map layer in ArcGIS so that it can be georeferenced and positioned relative to its real position on the surface of the earth. The DEM is then loaded into ArcGIS and used to derive a slope raster; the slope function and its variables are discussed below. The case study presented in Chapter 5 explores in more detail some methods used to improve the precision of the DEM for the analysis. The following section explores some ideas concerning the DEM used in a study and how it can affect the results of a least cost path analysis.

3.3.1. “Size does matter”:

It is crucial to understand that DEMs derived from satellite imagery, in this case from the SRTM, do contain erroneous values, and these may or may not affect the outcome of spatial statistical analyses. As the DEM is composed of same-sized cells with different values, the statistics performed on it involve calculations using these values. If one or more values are missing, or others are inaccurate, then the results of these calculations may vary depending on the location of the voids in the DEM. The primary raster used in this study is a DEM with a 90 meter resolution and a vertical and horizontal accuracy of 16 and 20 meters respectively (Farr et al. 2007: 3; Reuter et al. 2007: 986).

Pain (2005) noted the importance for GIS users dealing with DEMs of understanding the limitations these present. He states that some have “asked if earth scientists are fully aware of the limitations of DEMs?” and also points out that “errors in a DEM will propagate through to model predictions” (Pain 2005: 1431). Others noted that “numerical values of attributes derived from DEMs differ considerably with DEM resolution”, while Guth (2003) observed “that the average slope values increased as the DEM grid size decreased”. Also, “coarser resolution DEMs give lower slope angles than those obtained from finer resolution” (Pain 2005: 1432). Some compared the slope values generated from a DEM with slope values measured by surveying an area and concluded that “higher resolution DEMs (1m) produced much better results than lower resolution DEM's (12m)” (Pain 2005: 1432). However, this might not be practically achievable for larger study areas, as 1) the data collected would be large in file size and difficult to work with, 2) processing time would also increase, and 3) high resolution DEMs are scarce or expensive to acquire.

Therefore it is important to keep in mind that the resolution and the accuracy of a DEM, that is, its cell size and its cell values, greatly affect the results of a GIS analysis. Within the framework set here, this concept is explored in the case study presented in Chapter 5 below, where resampling and void filling were performed and the results sometimes varied greatly depending on the method used.

The next part of this chapter explores the slope function, a first-order derivative raster generated from the DEM and subsequently used to derive accumulated cost surfaces and least cost paths.

3.4. Slope and Cost surfaces

The advantages of GIS lie in the ability, through simple mathematical calculations performed on the DEM, to create derivatives of elevation such as slope (steepness) or aspect (direction) (Kvamme 1992: 130). Kvamme (1992) states that there are three main categories of terrain variables: “elevation and its products, slope and aspect”. These variables can then “be regarded as independent sources of information that describe different characteristics of landform” and thus are not fully correlated in a rough, diverse landscape (Kvamme 1992: 130). When working with DEMs to generate raster surfaces, Brandt and co-authors state that “in the GIS literature, reclassification, overlaying, weighting and summation of map themes is as old as the technology itself; numerous examples and applications of these approaches exists, where they often are referred to as ‘map algebra’” (Brandt et al. 1992: 276). With more advanced GIS technology, the use of map algebra, or the raster calculator, has become more widespread and developed, enabling researchers to process and explore more complex statistical algorithms. Map algebra, or the raster calculator, is used to adjust the values of DEMs or to add map layers together. This concept is explored in more detail below.

As mentioned previously, the DEM is used to derive a slope raster that represents elevation change between neighboring cells in the raster map.5 The slope function, still at the base of many least cost path analyses, has undergone several improvements over the years; however, it still faces many limitations. Some of these are very basic and have to do, for example, with the units of measurement that GIS software uses while calculating the cost. For example, Herzog (2008) notes that slope based on Tobler's hiking function,6 which quantifies movement in terms of time needed, is expressed in degrees

5 “Slope is calculated differently in various software” (Herzog 2008: 240). 6 “Calculating the walking speed based on slope, and from this the time requirements can be easily determined” (Herzog 2008: 240). However, the speed of movement of a large group varies from that of a single person.

by some researchers and in percentages by others. She concludes that calculating cost based on energy expenditure would be more appropriate (Herzog 2008: 240). However, it should be realized that results differ greatly depending on which slope function one uses, as seen in Figure 3 below.7

It is important to note here that people traveling on foot can climb or descend a steep slope more efficiently than someone moving in a cart or on a horse, for example; similarly, movement on horseback is more feasible on steep slopes than movement in wagons. The important thing to keep in mind is that cost maps or effort surfaces represent “the ease or difficulty by which a person would traverse the landscape” (Newhard et al. 2008: 94-95). “The end result is a combined cost image that more closely reflects the relative difficulties of moving through a natural landscape” (Bell et al. 2002). Fewer studies have involved measuring and analyzing the movement of animals and carts in steep areas; this would require modifying the Tobler hiking function used to relate slope angles to movement (see Bevan 2011).

7 Tobler's hiking function is anisotropic and based on the slope function. It calculates cost in units of time and emphasizes walking speed. It does not necessarily relate to a particular individual and his/her capacity to climb and descend steep areas; thus, “LCP results based on slower walking speeds require less overall energy expenditure than those based on faster walking speed” (Rademaker et al. 2012: 39).


Using the raw slope values generated by ArcGIS is unusual. The software returns values that imply a linear relationship between effort and slope, which is unrealistic: the software assumes that descending a slope of 50 degrees is exactly as challenging as ascending one, when in fact, in real-life settings, going down a steep hill is often more difficult, and sometimes more dangerous, than climbing one. Bell and co-authors addressed this problem by reclassifying slope values so that very steep slopes “were given proportionally high costs reflecting the difficulty in traversing them from any direction”. They thus created an algorithm that includes what they termed ‘corrections’ “for traversing a hillside at angles near-perpendicular to the slope” (Bell et al. 2002: 176).

Figure 3. Slope calculation in degrees and percent rise
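To make the difference between the two slope conventions concrete, the short Python sketch below (a minimal illustration using NumPy; the elevation values and the 90m cell size are invented for demonstration and the simple finite differences do not reproduce ArcGIS's exact 3 x 3 neighbourhood algorithm) derives slope from a toy DEM and reports it both in degrees and as percent rise, where percent rise = tan(slope) x 100:

import numpy as np

cell_size = 90.0  # metres; matches the SRTM resolution used in this study

# Toy 4 x 4 elevation grid in metres (values invented).
dem = np.array([[250.0, 260.0, 275.0, 300.0],
                [255.0, 270.0, 290.0, 320.0],
                [265.0, 285.0, 310.0, 345.0],
                [280.0, 305.0, 335.0, 375.0]])

# Rate of elevation change in the y and x directions.
dz_dy, dz_dx = np.gradient(dem, cell_size)
rise_run = np.sqrt(dz_dx ** 2 + dz_dy ** 2)   # steepest gradient (rise over run)

slope_degrees = np.degrees(np.arctan(rise_run))
slope_percent = rise_run * 100.0              # percent rise grows without bound as the angle nears 90 degrees

print(slope_degrees.round(1))
print(slope_percent.round(1))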

As some authors mention (Herzog 2008; Chaoqing et al. 2003), the critical slope at which switchbacks become necessary is about 10% for vehicular movement. For humans walking, the maximum slope that can be climbed or descended comfortably is around 12 degrees (perhaps more),8 and thus the slope values could also be reclassified to take account of this factor; that is, all values above 12 degrees could be placed in one group and assigned relatively higher values in the reclassification (see the sketch below). It is important to keep in mind that movement in steep areas becomes less meaningful or impossible beyond a certain threshold, a slope that varies according to the means of transport; movement with animals or carts is more difficult than walking on a steep slope.
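As an illustration of such a reclassification (a minimal Python sketch; the class breaks and penalty values are arbitrary choices for demonstration, not values used in this thesis), slope values can be binned so that everything above a chosen walking threshold receives a disproportionately high cost:

import numpy as np

slope_deg = np.array([[2.0, 8.0, 14.0],
                      [5.0, 12.0, 30.0],
                      [9.0, 20.0, 55.0]])   # toy slope raster in degrees

# Class breaks: gentle (<6), moderate (6-12), steep (>12 degrees).
breaks = [6.0, 12.0]
penalties = np.array([1.0, 2.0, 10.0])      # arbitrary relative costs per class

classes = np.digitize(slope_deg, breaks)    # 0, 1 or 2 for each cell
reclassified_cost = penalties[classes]
print(reclassified_cost)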

Surface slope is the basic derivative upon which most analysts base the generation of least cost paths. The slope is a reflection of topography where the direction of movement, whether up or downhill, is taken into account. The slope steepness is an important variable to take account of in an area which is topographically complex.

Bell and co-authors developed a formula calculating the “comparative cost” of climbing or descending a slope. They used the Idrisi GIS platform to calculate the ratio of the tangent of any slope to the tangent of a slope of 1 degree, which produces an isotropic cost surface they termed ‘effective slope’. One degree was chosen deliberately to avoid dividing by zero, which would return invalid results: with a slope of 0 on flat terrain, cost paths “could travel forever over flat surfaces, because as far as the algorithms are concerned, there is no cumulative cost whatsoever to follow those paths” (Kantner 2012: 227).

8 Humans are able to move on steep slopes; however, after a certain threshold walking would turn into climbing.


The formula established calculates the “relative cost”, or effort, required to walk uphill or downhill as the ratio tan(slope angle) / tan(1°). “The change in potential energy after ascending the slope is =mass*gravity*height ascended” (Bell et al. 2002: 175). As gravity is constant and the mass of the person moving is taken to be the same, the ratio indicating the energy expenditure over a distance can be summed up as height ascended 1, or y1, over height ascended 2, or y2 (y1:y2). As shown in Figure 4 below, the important factor here is the slope angle.
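The ‘effective slope’ idea can be sketched in a few lines of Python (a minimal illustration assuming the tangent-ratio reading of Bell and co-authors' formula given above; the slope values are invented):

import numpy as np

slope_deg = np.array([[0.5, 2.0, 10.0],
                      [1.0, 5.0, 25.0],
                      [3.0, 15.0, 45.0]])   # toy slope raster in degrees

# Relative cost of each cell: tan(slope) / tan(1 degree).
# Dividing by tan(1 degree) rather than tan(0) avoids division by zero,
# and the relationship between angle and cost is non-linear.
effective_slope = np.tan(np.radians(slope_deg)) / np.tan(np.radians(1.0))
print(effective_slope.round(1))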

Figure 4 illustrates the above formula clearly. In order to create a slope-based cost surface, then, this formula is used to represent the non-linear relationship between the slope angle and the energy expended, producing an isotropic cost surface.

Figure 4. Figure extracted from Bell et al. 2002: 175.

Following the slope process in ArcGIS, another important cost derivative is required to generate the least cost paths: ‘cost-distance’. This involves quantifying the cost needed to cross an area in relation to distance; that is, the further a person moves from a source point, the more effort he/she expends. It is derived directly from the cost raster and requires a vector file containing the location of a single site. This process is explained in more detail in the next chapters.


3.4.1. Creating cumulative cost surfaces

In GIS, generating a slope raster is required to create accumulated cost surfaces and then least cost paths. From the slope raster, a cumulative cost raster denoting the accumulation of cost “stores the accumulated cost for travelling outward from the origin and then traces the LCP [least cost path] from the target location back to the origin” (Herzog 2012: 2).

To make the results more accurate, or at least more realistic, some researchers have included velocity, the speed of movement, in the slope calculation, since speed is affected by the direction of travel. Thus, Verhagen et al. (1999) “specify the effect of slope on travelling speed by foot” (Van Leusen 2002: 6) using a form of Tobler's hiking function, v = 6e^(-3.5|s + 0.05|), where v is the walking speed in kilometers per hour and s is the slope (vertical change over horizontal distance). The formula calculates the speed of walking from the slope: if a person is walking uphill, his/her speed tends to decline as the slope increases. A force is applied against the person moving uphill, and this should be taken into consideration when calculating the speed of movement. Adding the force magnitude to the above-mentioned formula results in an anisotropic cost surface, as different forces act with the person travelling when walking downslope and against the person when walking upslope.
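A small Python sketch of this hiking function is given below (a minimal illustration; the constants are those of Tobler's commonly cited form, and the example slope values and 90m cell size are invented). It shows how walking speed, and hence time cost, changes with slope and with direction of travel; note that the fastest speed occurs on a slight downhill, which is what makes the function anisotropic.

import math

def tobler_speed_kmh(slope):
    """Walking speed in km/h for a given slope (dh/dx, negative when descending)."""
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

# Crossing one 90 m cell on various gradients:
for slope in (-0.20, -0.05, 0.0, 0.10, 0.20):       # descending, flat, ascending
    speed = tobler_speed_kmh(slope)
    seconds = (90.0 / 1000.0) / speed * 3600.0       # time to cross 90 m
    print(f"slope {slope:+.2f}: {speed:4.1f} km/h, {seconds:5.1f} s per cell")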

There are thus “two types of costs” used in a least cost path analysis: “isotropic and anisotropic”. Isotropic means that the cost of traveling from a cell to its neighboring cells is equal in every direction; when the cost surface is isotropic, the cost of moving out of a cell is the same regardless of direction. When the cost of movement from a cell to its neighbors is not equal in all directions, that is, when it is direction-dependent, the cost surface is anisotropic (Herzog 2012: 5).

The next part presents this concept in more detail.

3.4.2. Anisotropism and direction of movement

The study by Bell and co-authors (2002) reassesses the validity of the slope cost surface factor and stresses that cost variables are “user-defined” and adjustable. They note that features impeding or slowing movement can thus be given values which reflect the effort needed to surmount them in the real world. “The ‘cost’ can be conceived of as time or energy, the actual units of measurement are irrelevant because the cost is relative not absolute” (Bell et al. 2002: 173). The authors further stress that there is a need to generate a “user-created scale of relative costs” when taking slope as the base of a computational model. Thus, as they stated simply and concisely, “climbing a 90 degrees slope is not 90 times more difficult than surmounting a slope of 0 degrees or walking on the level” (Bell et al. 2002: 175). They further emphasized that slope is a “non-linear cost surface in which the cost of (ascending) a 60 degrees slope is nearly 100 times as difficult as traversing a nearly level surface” (Bell et al. 2002: 175). It is generally accepted that walking down a very steep slope requires more effort than climbing it, and that walking on a slight downhill slope is easier than walking on a flat one (Bell et al. 2002: 176). Their algorithm incorporated ‘anisotropic’ cost surfaces, surfaces that depend on the direction of movement. Anisotropy can be defined as incorporating “both magnitude and direction of a force, so that the resultant cost surface is largely dependent on the direction of travel across these various forces from point A to B” (Bell et al. 2002: 176).


A study conducted by Bevan with C. Frederick and A. Krahtopoulou (2003) in the Mediterranean countryside also revealed some crucial information for modelling movement on slopes across a landscape more accurately. They examined trackways, specifically those found on slopes, and state that any observer can notice that “as slopes become steeper, track ways tend to follow more oblique routes, often winding rather than heading directly up hillsides. This has to do with the near-exponentially increasing effort required to climb steeper slope” (Bevan et al. 2003: 227). They then used further statistical functions to confirm that “as slopes become steeper, the average difference between the direction of steepest slope (aspect) and track ways bearing also increases as more routes that cross slopes more obliquely are used” (for full results see Bevan et al. 2003: 228).

As Herzog (2012) states, “slope is anisotropic, and slope is so important that sometimes the term anisotropic costs actually means costs based on slope” (Herzog 2012: 5). Taking the force magnitude into consideration results in an anisotropic cost surface, in which the cost of movement is not the same in all directions; that is, the direction of travel influences the speed of walking and the relative effort expended while traveling.

The following paragraphs summarize the different possible ways of generating least cost paths by manipulating variables and creating the various models shown in the table below.


Table 3. Variation in generating least cost paths

The advantage of GIS software is its ability to provide the user with a controllable environment, one which enables data and variables to be handled so that some remain constant while others are manipulated. The sources of variation that could be considered are numerous; this thesis manipulated only one variable, the DEM resolution, and held the remaining ones constant. The table above lists some of the different types of variation that could be used in ArcGIS and GRASS.

At the first level, the DEM values and cell size can be manipulated according to a number of set conditions. At the second stage, the slope angles are calculated and the raster calculator is used to manipulate the slope values and make them more realistic: 1) using Bell and co-authors' formula (tan α / tan 1°) to calculate the “relative energetic expenditure”, 2) calculating the “absolute energetic expenditure” proposed by Van Leusen by measuring the metabolic rate of a person walking, or 3) using Tobler's hiking function to measure walking speed and time. These, when applied, produce partially anisotropic cost surfaces unless they are based on the “effective slope”, which takes into consideration that “one might not be necessarily travelling in the direction of the maximum rate of change” and that the aspect (azimuth angle) and direction of travel are required to produce fully anisotropic surfaces (Conolly and Lake 2006: 217). To create a more complete cost surface, isotropic costs can also be included, such as river barriers or vegetation cover, so that the cost of crossing them varies according to the level of impedance they present (the same resistance in all directions). These are added to the cost surface through map algebra, and an accumulated cost surface can then be generated from a precise location. This final step generates a cost distance raster, in ArcGIS based on the Dijkstra algorithm, though this varies between software packages. The least cost path is then traced back to the start point following the route of least accumulated cost.
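The sketch below illustrates, in plain Python, the general logic of such an accumulated-cost calculation and backtrace (a minimal Dijkstra-style spread over a toy cost grid with 4-connected moves; real GIS implementations use 8- or 16-cell neighbourhoods, account for diagonal distances and anisotropy, and the cost values here are invented):

import heapq

# Toy cost-of-entry grid (e.g. an effective-slope surface); values invented.
cost = [[1, 1, 5, 5],
        [1, 9, 9, 5],
        [1, 1, 1, 1],
        [5, 5, 9, 1]]
rows, cols = len(cost), len(cost[0])

def accumulated_cost(origin):
    """Dijkstra spread: cheapest accumulated cost from origin to every cell."""
    acc = {origin: 0}
    heap = [(0, origin)]
    while heap:
        c, (r, k) = heapq.heappop(heap)
        if c > acc.get((r, k), float("inf")):
            continue
        for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4 neighbours
            nr, nk = r + dr, k + dk
            if 0 <= nr < rows and 0 <= nk < cols:
                nc = c + cost[nr][nk]
                if nc < acc.get((nr, nk), float("inf")):
                    acc[(nr, nk)] = nc
                    heapq.heappush(heap, (nc, (nr, nk)))
    return acc

acc = accumulated_cost((0, 0))

# Trace the least cost path back from the destination to the origin
# by always stepping to the neighbour with the lowest accumulated cost.
path, cell = [(3, 3)], (3, 3)
while cell != (0, 0):
    r, k = cell
    neighbours = [(r + dr, k + dk) for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if (r + dr, k + dk) in acc]
    cell = min(neighbours, key=acc.get)
    path.append(cell)
print(list(reversed(path)))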

In the GRASS software, r.walk and r.drain are implemented to generate least cost paths. The former is equivalent to the cost distance tool in ArcGIS and needs a cost surface input; the difference is that GRASS has the option of using a Knight's move, so the search neighborhood is increased and more moves are allowed. The cost surface generated by r.walk is anisotropic and based on the Aitken 1977/Langmuir 1984 formula, given as follows:

T= [(a)*(Delta S)] + [(b)*(Delta H uphill)] + [(c)*(Delta H moderate downhill)] + [(d)*(Delta H steep downhill)]

where T is the time of movement in seconds, Delta S is the distance covered in meters, Delta H is the altitude difference in meters, and the parameters a, b, c and d provide different sets of conditions according to the type of movement, for example downhill or uphill movement (GRASS Development Team).
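A minimal Python sketch of this walking-time formula is given below (the coefficient values are the Langmuir-style defaults commonly quoted for r.walk and are stated here as an assumption; the example distance and elevation changes are invented):

def walking_time_seconds(distance_m, dh_m,
                         a=0.72, b=6.0, c=1.9998, d=-1.9998,
                         slope_factor=-0.2125):
    """Aitken/Langmuir-style walking time for one move of length distance_m
    with elevation change dh_m (positive uphill, negative downhill)."""
    t = a * distance_m                       # flat-ground component
    if dh_m > 0:                             # uphill
        t += b * dh_m
    else:
        slope = dh_m / distance_m if distance_m else 0.0
        if slope > slope_factor:             # moderate downhill (slightly faster)
            t += c * dh_m
        else:                                # steep downhill (slower again)
            t += d * dh_m
    return t

# Crossing one 90 m cell on flat ground, with a 10 m climb, and with a 30 m steep descent:
for dh in (0.0, 10.0, -30.0):
    print(dh, round(walking_time_seconds(90.0, dh), 1), "s")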

r.drain, on the other hand, traces the least cost path from a destination point back to the start point and is equivalent to the cost path tool in ArcGIS. It uses an accumulated cost surface generated by either r.walk or r.cost (GRASS Development Team).

3.4.3. Slope in ArcGIS

The slope algorithm generates a cost raster derived from the DEM used. The slope for this analysis was calculated in degrees, where zero represents a flat surface and 78.2 degrees is the highest slope angle found in the study area.9 When slope is generated as percentage rise, the closer the angle gets to 90 degrees, the closer the percent rise approaches infinity (ArcGIS Desktop Help). The slope generated from the DEM that includes the Göksu River and the bridges found on the river gives a higher value for the steepest slope, equal to about 89.8 degrees; this has to do with the way in which the software calculates slope. The combination of the DEM with a raster, or friction map, that contains the Göksu River is discussed in the next chapter. Relating these values to real-life settings and to human movement on steep slopes, the difference hardly matters: both are extremely steep slopes for individuals, and whether at 78 or 89 degrees they fall into the impossible-to-cross range. This is important to note when trying to translate these slope values into what they mean for human movement.

9 When the DEM was cropped to the study area extent, the highest slope value was about 69 degrees.


3.5. Conclusion:

The reliability of slope measurements and how these relate to human movement remain matters of debate. Many scholars, such as Branting and Kantner, have noted the potential limitations of slope for the study and analysis of movement.10 Others argue that slope alone is insufficient to quantitatively measure the cost of movement in terms of distance or time, and that velocity and speed of movement should also be included. White (2012) and Livingood (2012) both used Tobler's hiking formula to generate a cost of movement based on time (Kantner 2012: 227). Some researchers consider that the latter is not enough to portray actual human movement, which relates more to energy spent than to time; an alternative approach is based on Pandolf's formula for calculating the metabolic rate of a walker and thus “emphasize[s] the energetic costs of human movement” (Kantner 2012: 228). This, however, has been shown by many authors to be not very reliable, and it is perhaps best used in conjunction with other algorithms. For example, White (2012) explored Pandolf's algorithm in combination with Tobler's hiking function (Kantner 2012: 228), and Rademaker and co-authors “use[d] both a derivation of the Pandolf formula and a simple slope-based approach for generating their cost surfaces” (Kantner 2012: 228).

New ideas and concepts are constantly being developed, with more complex algorithms generated and tested through least cost path analyses. However, the majority of studies still consider topography and slope, together with other environmental factors such as vegetation and water features, to be a good basis for generating least cost paths.

10 They go into thorough detail in critically reviewing the limitations of least cost paths (see chapter 12 by Branting (2012) and chapter 13 by Kantner (2012) in Least Cost Analysis of Social Landscapes).


Regardless of the variables selected, the least cost paths generated through GIS are models of reality and should be tested against other methods.

The solution does not lie in the complexity of the algorithms generated; rather, it must be found in a combination of factors that offers a middle ground between computational and conceptual frameworks. The GIS environment produces models, virtual environments that are based on factual evidence and cannot be viewed as absolute.

Many different models can be created even with simpler algorithms; however, these should be validated and tested where needed and used in conjunction with other software to enhance visualization aspects and therefore to increase the understanding of the outcomes.

To conclude this chapter, there remain technical and social limitations to GIS analysis; the important thing is to be aware of these limitations and to realize that accuracy may be limited depending on the data used. It is important to critically assess the data gathered prior to processing them, as these vary according to the geographical area in question and to the analysis to be performed. DEMs and slope are explored in more detail in Chapter 4 and are illustrated more clearly throughout the case study in Chapter 5.


Chapter 4: Cost Surfaces and Least Cost Paths

The previous chapter gave a brief overview of the concept of calculating the slope cost surface and of how the algorithms and the formulae have developed rapidly over the past few years to include variables that would give a more realistic representation of movement and costs associated with it.

The first part of this chapter presents a brief overview of the importance of DEM resolution and the scale of analysis used in a study, while the second part explores the steps needed to perform a least cost path analysis in ArcGIS. The third part presents the concept of neighborhood selection and cost distance analysis, which generate a cumulative cost surface. The case study from the Göksu Valley is explored in thorough detail in the next chapter.

In this thesis, the same analysis is performed or run on DEMs with different resolutions to examine how this variable impacts the generation of least cost path routes.

The research objectives and goals of a project usually dictate which resolution or scale of analysis to use, that is, which fits the framework created best. The first analysis represents a regional one, mapping the least cost path between Mut and Ermenek and between Dağpazarı and Karaman, while the second looks more closely at bridges found in the study area.

4.1. Resolution and Accuracy

The extent of a study area generally determines the resolution of the DEM needed. The accuracy of a DEM, that is, the “level of uncertainty in point location or distance measurement” (Allen 2000: 102-103), is generally defined by its resolution


(ArcGIS Desktop Help).

The most convenient cell size for a spatial analysis project differs according to the resolution required. The smaller the cell size, the higher the resolution and the more landscape detail is captured by the satellite imagery; in general, “the cell size should be as small as the smallest unit area in which one is interested” (Allen 2000: 103). For example, an analyst might choose a cell size of 30m or more when performing a regional analysis, while a 10m resolution might be chosen for local analyses, mapping for example artifacts scattered around a site. In an area such as the Göksu Valley, attention should be paid to detail: the area is characterized by cliffs, steep descents and rocky canyons. One of the aims of this thesis, therefore, is to test which DEM resolution fits the framework set here best, and whether there is one that captures and represents important landscape variables and relevant features better.

Data resolution and accuracy are topics well worth examining and taking into account when conducting any spatial analysis, yet they have not been extensively explored in the literature. A certain level of errors and voids is present in the data, a fact that is generally accepted when creating a DEM from satellite images or from contour maps.11 The interpolation method used to create a DEM also determines this level of error, and there are ways to check whether the DEM used contains a level of error equal to or less than 0.1-0.2 per cent of the total interpolated DEM, the generally accepted range (Conolly et al. 2006: 104).

DEMs are widely available from mapping agencies and readily available online from different sources. DEMs from free and open sources, however, are usually at a

11 Refer to Chapter 3 for DEM creation and processing of data.

coarser resolution. It is possible within the GIS framework to resample DEMs to finer resolutions, that is, for example, to resample a 90m resolution DEM to a 15m resolution one. This reveals the flexibility of GIS software and its ability to manipulate data to fit different conditions set by the researcher. However, some authors argue that resampling a

DEM at a higher resolution will produce more flaws or errors in the results because this would increase the number of cells used in the calculation. As Kantner (2012) states, first-order derivatives such as slope can contain artefacts because

a 10 m surface has 9 times as many cells as a 30 m surface, exposing a greater range of variation and providing the spreading algorithm many more near zero cost options (Kantner 2012: 231).

The next part discusses the steps in ArcGIS required to perform a least cost path analysis while the next chapter, Chapter 5, explores a case study that illustrates and discusses the results generated.

4.2. Least Cost Path Analysis

So far, DEMs and slope cost surfaces have been discussed, as these are two important components for least cost path analysis. The software also needs some vector files, termed 'shapefiles' in ArcGIS, which include the locations of sites and features according to their geographic coordinates on the surface of the earth. Creating shapefiles is a relatively easy process, discussed further in Appendix 2. The software needs an origin point in order to generate accumulated cost surfaces from that point; moreover, it needs a destination point in order to trace back the least cost path from that point to the origin.

For the study presented here, two types of shapefiles were created: 1) point-based shapefiles representing the locations of the sites of interest (Mut, Ermenek, Karaman and


Dağpazarı) and 2) a line-based shapefile representing the Göksu River as well as the modern roads that lie in the area.

The next part of this chapter introduces distance analysis: the step after computing the slope raster is to generate cost rasters based on weighted distance.

4.2.1. Distance Analysis

After computing the slope raster, calculating cost distance and cost backlink is the next step.12 The GIS platform can calculate distance in two different ways: the Euclidean distance, the straight line between two points, or the weighted (cost) distance. The drawback of the Euclidean calculation is that it fails to accommodate variables such as topography, terrain, environmental factors and other variables that affect movement over a certain distance in space. In other words, it simply floats in space regardless of the DEMs or maps used; it just computes a linear distance from a source point to a destination point. It lacks accuracy because, even if many destination points occur at the same distance from a source point, reaching them involves different costs of travel depending on the terrain and their location in the landscape. However, the straight line remains the only constant or absolute value against which the paths generated can be compared.
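To make the contrast concrete, the straight-line (Euclidean) distance between two projected site coordinates can be computed directly, independently of any DEM; the cost-weighted distance, by contrast, can only be obtained by spreading costs across the raster as described below. The following is a minimal Python sketch; the coordinates in it are invented for illustration and do not correspond to the actual site locations.

    # Minimal sketch: Euclidean distance between two points in a projected coordinate system
    import math

    mut = (500000.0, 4050000.0)        # hypothetical easting/northing pair, in metres
    ermenek = (440000.0, 4035000.0)    # hypothetical easting/northing pair, in metres

    dx, dy = ermenek[0] - mut[0], ermenek[1] - mut[1]
    straight_line_m = math.hypot(dx, dy)
    print(straight_line_m)             # the 'absolute' baseline against which LCP lengths are compared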

The cultural and real-world key factors or variables are important to take into account when generating least cost paths, as they greatly affect human decisions concerning which routes to take. Cost distance, or weighted distance, computes cost values for each cell and thus generates an accumulated cost surface, derived from the slope cost surface, marking the relative cost to move from one cell to another and thus creating an

12 Cost Distance and Cost Backlink are tools available in ArcGIS only. Other software such as GRASS uses cost surfaces and drainage.

accumulative cost within the raster map.

The cost of movement to cross an area is the most important variable to consider when creating cost surfaces. Cost can be understood in different ways, such as the effort needed by an individual to cross a certain area; however, it can also be related solely to topography, which is the case presented in this thesis. An individual's effort of movement is not easily quantifiable because it cannot be measured accurately; it is relative to each individual or group of people separately. Human agency and human interactions with the landscape, the decisions that humans take when traveling, the possibility of changing route in response to unplanned events, and metabolic rate and speed of movement are all variables that are crucial for understanding movement and should be quantified where possible. In the case study presented later in Chapter 5, cost is not related to the effort that an individual puts in, but rather represents general or absolute values calculated according to the topography of the area. These cost surfaces, whether representing slope and direction of movement, vegetation and land cover that could slow down movement, or rivers that could halt travel depending on the season, are created separately and then added or combined together to generate a cumulative cost raster displaying all cost surfaces in one final raster map. This process is discussed in further detail in Chapter 5.

Depending on the topic being researched, the archaeologist can choose which cost surface better suits his/her purpose. These can be modified and adjusted accordingly. Howey (2007) made an important observation about using multi-criteria cost surfaces for computing the cost of travel. She mentions some advantages of using a multi-criteria technique, most importantly:


1) "They avoid prioritizing one factor alone as the determinant of travel"; in real life there is more than one aspect influencing the choice of routes taken. 2) "Models can be tailored to the specificities and complexities of the cost of movement in any given setting"; values assigned to slope, to densities of forested areas or to the flow of rivers reflect their ability to impede movement and can be tailored to suit the needs of the researcher (Howey 2007: 1831).
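To illustrate the multi-criteria idea in practice, the following is a minimal sketch of how separate cost rasters might be weighted and combined into a single cumulative cost surface using the Spatial Analyst map algebra available in ArcGIS 10 (arcpy). The file names, layers and weights are hypothetical and are not those used in this thesis; they simply stand in for whatever factors a researcher decides to include.

    # Minimal sketch: weighted multi-criteria cost surface (hypothetical layers and weights)
    import arcpy
    from arcpy.sa import Raster

    arcpy.CheckOutExtension("Spatial")          # Spatial Analyst licence is required
    arcpy.env.workspace = r"C:\data\goksu"      # hypothetical workspace

    slope_cost = Raster("slope_cost")           # cost derived from slope
    river_cost = Raster("river_cost")           # cost assigned to waterways
    veget_cost = Raster("vegetation_cost")      # cost assigned to land cover

    # Each factor is weighted and the results summed into one cumulative cost raster;
    # the weights (0.6, 0.3, 0.1) are arbitrary illustrations, not values from the case study.
    combined = 0.6 * slope_cost + 0.3 * river_cost + 0.1 * veget_cost
    combined.save("combined_cost")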

To generate a cost distance raster map, a shapefile locating the source point, or site of interest, is needed, as well as the slope raster. Specifying the location point is crucial, as the GIS software computes weighted distance, or accumulated travel cost, from the site of interest to all other cells in the raster map, spreading through each cell's direct neighborhood. Creating a cost distance raster is not enough when the purpose of the analysis is to generate least cost paths; it needs a complementary raster map, the cost backlink, which denotes the direction to follow from any cell back to the source cell along the most efficient, least-cost path (ArcGIS Desktop Help). The following section explores the neighborhood concept in creating cost distance.
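As a concrete illustration of the steps just described, the following is a minimal sketch of the slope, cost distance, cost backlink and cost path sequence as it could be scripted with the arcpy Spatial Analyst module in ArcGIS 10. It is offered only as an outline under assumed names (the DEM, the site shapefiles and the output names are hypothetical); the analysis in this thesis was carried out through the ArcToolbox interface rather than a script.

    # Minimal sketch: least cost path workflow (slope -> cost distance/backlink -> cost path)
    import arcpy
    from arcpy.sa import Slope, CostDistance, CostPath

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\data\goksu"      # hypothetical workspace

    slope = Slope("dem_90m", "DEGREE")          # slope cost surface derived from the DEM
    slope.save("slope_90m")

    # Accumulated cost surface and backlink raster from the origin site (e.g. Mut)
    cost_dist = CostDistance("mut.shp", slope, "", "backlink_mut")
    cost_dist.save("costdist_mut")

    # Trace the least cost path back from the destination site (e.g. Ermenek)
    lcp = CostPath("ermenek.shp", cost_dist, "backlink_mut", "BEST_SINGLE")
    lcp.save("lcp_mut_ermenek")

    # Convert the path raster to a polyline in order to measure its length in metres
    arcpy.RasterToPolyline_conversion("lcp_mut_ermenek", "lcp_mut_ermenek.shp")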

4.2.2. Least Cost Path and neighborhood selection

ArcGIS, like other software, uses graph theory in computing a cost distance raster map. It is important to state here that least cost paths rely on Dijkstra's algorithm, which treats the centers of the cells as nodes; the lines that connect each cell center with its neighboring ones are termed 'links' and can be assigned a weight (Saha et al. 2005: 1150; Chaoking 2003: 362). This creates the framework for a conceptual network linking adjacent cell centers together, and in some cases non-adjacent cells; the latter depends on the algorithm that is used or available in the software. The link weights are extracted from the cost assigned to the different cells or, in the case presented here, from the slope raster.


To give a brief example of how these virtual networks are created, one should envision a chess board. This analogy is very popular in the GIS world and provides a clear explanation of how cells and their neighborhoods contribute to the generation of least cost paths. A DEM or a raster grid can thus be visualized as a chess board, its square tiles representing the cells in a map. When constructing a path from a source cell, that is, moving from cell to cell, a certain algorithm is applied. This differs from software to software and depends on the neighborhood size it can take into consideration. In a chess game, this means that a certain pattern is followed to figure out the next move of a chess piece. Some pieces' movements are limited to diagonal moves, others to horizontal ones, and each is limited to a certain neighborhood that represents the boundary or maximum extent to which a move is allowed. Different software thus include different extents or neighborhood limits: for example, some consider a neighborhood of only four cells; that is, the movement is limited either to the diagonals, like the Bishop's move in chess, or to horizontals and verticals, the squares reachable by the Rook. This small neighborhood limits the choices of where the next move might be to only four possibilities.

The neighborhood extent used by ArcGIS 10 includes the eight adjacent cells; that is, movement is allowed between the source cell and all cells surrounding it. In a chess game this corresponds to the Queen's pattern, which allows movement vertically, horizontally and diagonally and thus provides more choices to select from for the next move.

These examples all relate to the concept of a "direct neighborhood" of the source cell. To relate this concept to the software used, the eight neighborhood movement possibilities are defined as a '3 by 3 movable window' or 'matrix' (Saha et al. 2005: 1151). Thus the "direct neighborhoods connect directly the source pixel window" (Saha et al. 2005: 1162). An important point to make here is that the angle at which a certain path is drawn, that is, the path from the center of one cell to another, is limited to increments of 45 degrees (Saha et al. 2005: 1162). "The turn angle interval ([that is the] angle between incoming and outgoing paths at a pixel) for the route is restricted to a minimum of a 45 degrees angle" in a 3 by 3 pixel window (Saha et al. 2005: 1162). In other words, the angle made by two connected links is limited to multiples of 45 degrees. This explains why the paths generated are coarse and marked by sharp edges.
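The way such a cost distance spread works can be illustrated outside ArcGIS with a short, self-contained sketch. The following Python code runs Dijkstra's algorithm over a small cost grid using the eight-cell (Queen's move) neighborhood described above, with diagonal steps weighted by the longer centre-to-centre distance. The grid values are invented purely for illustration, and the code is not the implementation used by ArcGIS.

    # Minimal sketch: accumulated cost over a grid with an 8-cell (Queen's move) neighborhood
    import heapq, math

    def accumulated_cost(cost, source):
        """Dijkstra over a 2D cost grid; returns the accumulated cost surface from `source`."""
        rows, cols = len(cost), len(cost[0])
        acc = [[math.inf] * cols for _ in range(rows)]
        acc[source[0]][source[1]] = 0.0
        heap = [(0.0, source)]
        # The eight neighbors of a cell: cardinal moves cross 1 unit of distance,
        # diagonal moves cross sqrt(2) units (the longer centre-to-centre distance).
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if d > acc[r][c]:
                continue
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dist = math.sqrt(2) if dr and dc else 1.0
                    # Link weight: average of the two cell costs times the distance crossed
                    nd = d + dist * (cost[r][c] + cost[nr][nc]) / 2.0
                    if nd < acc[nr][nc]:
                        acc[nr][nc] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
        return acc

    grid = [[1, 1, 5],
            [1, 9, 5],
            [1, 1, 1]]          # invented cost values
    surface = accumulated_cost(grid, (0, 0))
    print(surface[2][2])        # accumulated cost of reaching the lower-right cell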

Several researchers have developed algorithms that increase the neighborhood to 24 or 48 neighbors, enlarging the pixel window to 5 x 5 and 7 x 7 respectively, such as Saha and co-authors (2005) and Herzog (2012), who focused on using the Knight's move pattern and modifying it to allow more indirect neighboring cells to be reached. This is interesting, as extending the neighborhood means that the turn angle can equal 22.5 degrees, which gives smoother and more accurate results. Chaoking states that "a Knight's pattern can improve the accuracy of the least cost paths, especially in regions with steep slopes" (Chaoking 2003: 3 2).

Increasing the neighborhood size, however, also has some minor disadvantages, mainly storage capacity and processing time. In ArcGIS, the processing time for each path varied only slightly, ranging from a couple of minutes to five minutes at most.13

Moreover, processing time varies according to the storage capacity, memory, and processor speed of the computer used. ArcGIS provides the user with the option of using Model Builder, an application that enables the analyst to set up at once all the analyses that he/she wants to perform. In other words, a new window is opened, and all the tools and files needed are inserted and linked together through the tools available in the toolboxes. After validating the model and making sure that all data are available and placed in the right order according to the analysis chosen, the user clicks on the play button and the software begins to process the data. Different tools and analyses can be processed in the same window. This presents an advantage: the user can leave the computer running while working on other things. One disadvantage is that if the computer freezes or stops working while the user is away, time is lost and the whole process needs to be restarted.

An interesting final point here relates to the presence of steep slopes in a region: the steeper the slope, the more the walking zigzags, and this happens "at the expense of increasing path length" (Chaoking 2003: 365). In the framework set for this thesis, only the 3 by 3 pixel matrix is considered, as ArcGIS 10 can only consider the eight direct neighbors in finding the least cost path and thus uses the Queen's move. However, it is important to mention that the results of least cost paths vary greatly depending on the algorithm used and the number of choices available; an increase in neighborhood means an increase in choices for movement (Chaoking 2003: 372).

13 It should be noted here that the DEM used was cropped to include fewer cells, only covering the study area, and thus speeding up the process. The processing time would increase greatly if the full DEM of Turkey was used.


4.3. Conclusion

There are thus various ways to perform a least cost path analysis using GIS software, and these depend on the software, the data, and the algorithms available. The user then has the option of creating varied models by modifying some variables and holding others constant. In the case study presented here, the models created modified only the resolution and the values of the DEM and proceeded with the steps provided by ArcGIS without further variation. Sources of variation can include, as mentioned previously and presented in table 3, modifications applied to the slope cost surface, or to the algorithms used to produce weighted surfaces, and those that allow an increase in the neighborhood limit. These modifications can also include sub-groups of variables: for example, adding friction maps containing vegetation and water layers to the initial raster grid (DEM or other) or to the slope surface. Other sources of variation include adding a time function (Tobler's hiking function), an energy-expenditure formula (adding the metabolic rate), or both.
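As an aside on the time function just mentioned, Tobler's hiking function estimates walking speed from slope and is often used to turn a slope surface into a time cost surface. The following Python sketch shows the standard form of the function; it is included only as an illustration of the kind of variable that could be added, and it was not used in the case study presented here.

    # Minimal sketch: Tobler's hiking function (walking speed as a function of slope)
    import math

    def tobler_speed_kmh(slope):
        """Walking speed in km/h for a slope expressed as rise over run (dh/dx)."""
        return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

    # Example: time (in hours) to cross one 90m cell on a 10 per cent uphill slope
    cell_size_km = 0.09
    speed = tobler_speed_kmh(0.10)
    print(cell_size_km / speed)      # hours needed to traverse the cell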

Data resolution is an important aspect to take into consideration. As Herzog noted, comparing the slope values of DEMs that have different resolutions provides a good method to measure errors that are created by resampling DEMs (Herzog 2008: 238).

These comparisons were then used to explain why different least cost paths occur when the analysis is performed after resampling the DEM. She notes that "the LCPs of the low resolution grid are shorter than the high resolution paths" (Herzog 2008: 239). This appears to be the general trend in the case study presented below. The fact that slope is the main and most common basis for generating least cost paths is important to consider, as it is one of the main reasons why paths diverge for the same area. But another important aspect, which also affects results greatly, is the method a researcher uses to generate the slope raster.

After performing the abovementioned steps, least cost paths can be generated. The next chapter thus presents a case study from the Göksu Valley. Several least cost paths are generated from DEMs that have been resampled to different resolutions and with different interpolation options. A modification to the DEM that adds the Göksu River as a barrier to movement, as well as bridges, is also discussed. The point here is to show that any slight modification, or adding or removing a step in the analysis, produces different results, and the challenge remains of relating these to actual human movement.


Chapter 5: A case study from the Göksu Valley

This section presents the basic steps for performing a least cost path analysis, taking the Göksu Valley as a case study and originally encompassing the sites of Mut and Ermenek on the one hand, and Dağpazarı and Karaman on the other, all sites surrounding the Göksu Valley. Later on, adding the site of Adrassus to the analysis was found to be useful, and interesting results were obtained. The analysis here is a regional one, mapping and calculating routes that link these ancient sites together. The first part presents the steps and results of a first analysis, which compares least cost paths generated under different conditions for handling inadequate values found in the DEM. The second analysis explores the different results generated from DEMs resampled to lower and higher resolutions, and also resampled using different interpolation techniques. The third part then explores the addition of the Göksu River as a barrier or impeding feature to movement. It also investigates the addition of the bridges that cross the main river of the Göksu River valley. Furthermore, the analysis was repeated for Dağpazarı and Karaman, which represent a completely different topographical region, making the comparison between these two areas more thorough and comprehensive. A final part compares these results and tests them against each other in order to understand how the

GIS software uses the data and generates models that map the real landscape virtually.

Relating the results to human movement remains a challenge; however, tools available nowadays, such as Google Earth, could aid in understanding and interpreting the results.

Data used

The data collected and used in this case study are taken from the Göksu


Archaeological Project (GAP) database, which was created and updated throughout the past decade. This database contains collections of features, archaeological sites, GIS files and waypoints that have been collected intensively by the GAP team.

Computing an accumulated cost raster showing the weighted cost of movement from a site, or source cell, to a destination point is a crucial step in generating a least cost path; it considers both travel distance, which can be converted to travel time by estimating the time needed through ground-truthing, and the travel cost of movement based on the effective slope, which takes anisotropy, or the slope magnitude, into account. Barriers to movement and more complex features, whether social or topographical, that affect movement are discussed in the next, concluding chapter.

5.1. Working with the DEM

As mentioned previously in Chapter 3, the accuracy of an elevation model depends greatly on the method used to generate the DEM. Some voids and errors do exist, sometimes not affecting the results when they fall outside the study area, and other times causing paths to divert in different ways. It is therefore important to know where these voids are and whether they affect the outcomes of an analysis. Later it will be shown that they do in fact cause paths to diverge. These voids are less likely to be found in flat areas.

5.1.1. DEM values

There are ways available in GIS software to remove inadequate values in a DEM, whether these are negative values or 'NoData' values. "The fact that an input location can have NoData instead of a numerical value has ramifications for how the tools handle them - NoData means that not enough information is known about a cell location to assign it a value" (ArcGIS Desktop Help). In the analysis here, three methods were used; the first converts all negative values to zero following the formula below, which is entered in the ArcGIS raster calculator.

A note should be made here that "NoData and 0 are not the same—0 is a valid numerical value" (ArcGIS Desktop Help).

1. In this algorithm, a condition is set to be met: all negative values in the DEM raster must first be located and identified as true, that is, meeting the condition, and then converted to zero.

con([dem] < 0, 0, [dem])

2. Second, the negative values were converted to NoData values, representing the "absence of data", which enables the GIS software to disregard or ignore those values in any analysis process (ArcGIS Desktop Help). These are given a default value of -32,768 by ArcGIS. The following formula was used, also through the raster calculator:

SetNull([dem] < 0, [dem])

When comparing the adjusted DEMs mentioned above, with zero values or NoData values, the raster maps generated are visually identical; the only difference is that the number of cells containing a value of zero increases while the rest remain the same (Farr et al. 2007: 1). In the analysis presented here, least cost paths were generated and compared between Mut and Ermenek using both modified DEMs as well as the original 90m DEM containing the negative values; the results are shown in figure 5 below.

Another method, used in hydrological models, enables the filling of sinks found in the DEM. The fill is applied to a raster map in which sinks have been located. In the models presented here, the fill tool was applied directly to the 90m DEM and a least cost path generated following the basic steps. The fill tool returned a DEM whose lowest value was -17 instead of -21. The resulting least cost path was compared with the other two; the results are shown in Table 4 and figure 5 below and discussed in the next few paragraphs.
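For completeness, the three adjustments described above (converting negatives to zero, converting negatives to NoData, and filling sinks) can also be scripted rather than entered in the raster calculator. The following is a minimal arcpy sketch assuming the Spatial Analyst extension of ArcGIS 10; the input and output names are hypothetical.

    # Minimal sketch: three ways of adjusting inadequate DEM cell values
    import arcpy
    from arcpy.sa import Raster, Con, SetNull, Fill

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\data\goksu"     # hypothetical workspace
    dem = Raster("dem_90m")

    Con(dem < 0, 0, dem).save("dem_cond0")     # 1. negative values become 0
    SetNull(dem < 0, dem).save("dem_setnull")  # 2. negative values become NoData
    Fill(dem).save("dem_fill")                 # 3. sinks filled from neighboring cells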

Table 4. Least Cost Path Analysis from Mut to Ermenek: Adjusting cell values

DEM Type   Interpolation      Cell size   Acc. CP value (per unit distance)   LCP length (m)
DEM 90m    Nearest neighbor   90m         256,639.31                          73,125
Cond_0     Nearest neighbor   90m         256,639.31                          75,040
Fill       Nearest neighbor   90m         149,973.45                          81,544
SetNull    Nearest neighbor   90m         256,639.31                          73,125

Table 4 demonstrates that applying any change to the cell values in a DEM can result in different cost paths. The least cost path value for the DEM with the original negative values (DEM 90m) is equal to 256,639.31; that is, the least cost path passes through cells with different cell values and these values are added together to create the accumulated least cost path total. The total sum of all cell values that a certain path crosses is said to represent the total cost value of the least cost path generated. Movement is costly from cell to cell. When isotropy is considered, the distance cost is minimized; that is, the preferred route favors the short distance over the effort required (meaning more cost in terms of effort). Anisotropy favors the effort or cost rather than the distance crossed; that is, the preferred path passes through cells with low cost values even if this represents a longer route.

The second modification brought to the DEM converts all cells with negative values to zero. The value zero is a valid numerical value for the software, and cells containing zero are considered in the analysis. The least cost path value returned is equal to the one generated previously. Likewise, when negative values are set to NoData (the software gives these cells a set value of -32,768), the least cost path value generated also equals 256,639.31, representing the accumulated cost of the cells that form the least cost path. This is because the cell size is the same. The results become interesting when the least cost path raster map is converted into a vector, polyline shapefile, a step needed to convert the cost values into actual distance in meters. Here most results vary: as seen in the table above, the lengths in meters of the least cost path generated from the original DEM with negative values and of the one where negative values were set to NoData (SetNull) are equal. However, the conditional algorithm used to identify all cells with negative values and convert them to zero generated a different length. This difference is equal to 1,915 meters. Keeping in mind that the DEM used has a cell size of 90m (the level of uncertainty), a difference of about two kilometers could be considered acceptable. The final least cost path, generated from the original DEM that was 'filled', presents strange results. The fill tool was used to determine where sinks in the DEM are and to fill them with values calculated from neighboring cells. The least cost path generated from this DEM was equal to 149,973.45. There is a difference of 106,665.86 in accumulated cost between the least cost path value generated from the filled DEM and the ones generated from the other DEMs. This is also reflected in the length in meters calculated for this path: the path was 81,544 meters long, a difference of about 6,500 to 8,400 meters from the other paths. This result seems to be less accurate than the others; when compared with observations made through ground-truthing, the distance covered from Mut to Ermenek was about 85 km.

Figure 5 below presents the results discussed above in a visual way. It is clear that in figure 5a) the path generated produces faulty results in the area marked by the red square: the path in this section follows a straight line, which is clearly unrealistic. This is another point confirming that filling the DEM is less accurate than setting conditions that convert the values to a specific set value. To be noted here, however, is the difference at the start of the path from Mut. The path generated in figure 5a) seems to follow a logical pattern, passing through a corridor of lower elevation values. In the maps in figure 5, the black areas represent low elevation values and the white parts high ones. The paths generated in figures 5b), 5c) and 5d) therefore differ greatly for the first section of the route.


Figure 5. Least Cost Path Analysis from Mut to Ermenek: Adjusting cell values14

In Figure 5, the Göksu River is represented by the blue polyline passing through the Göksu Valley. The dotted yellow line represents the modern road that links Mut and Ermenek together. The optimal path (the straight line path) is shown in red and the least cost paths in blue, green, red and pink respectively from top left.

5.2. DEM cell size, resolution, resampling

It is important, then, for least cost path analysis that the DEM used contains the fewest possible errors or voids. The resolution of the DEM at which an analysis should take place remains dependent on the researcher's goal. The most widely available open sources for downloading a DEM for any part of the world outside North America provide data at a 90m resolution. The DEM selected for the next part of the

14 All maps generated in this thesis are better viewed in color and on the computer’s screen.

analysis is the original one downloaded, for the following reasons: the results from adjusting the DEM by setting negative values to zero or NoData did not affect the routes chosen as the least cost paths. The adjusted DEMs were therefore set aside, and the original 90m DEM downloaded, with negative values, was used for the rest of the study.

The need to resample a DEM to a higher resolution depends on the goal of the study. For a regional analysis such as the one presented here, a 90m DEM is usually acceptable. However, it was considered interesting to experiment with the DEM downloaded and resample it to different resolutions, both higher and lower, to verify at which resolution the results are more likely to be accurate and sound, and to understand why the results generated differ. It is noted here that the lower the resolution, the lower the number of cells processed by the GIS and, thus, the faster the computation.

Some scholars, such as Kantner (2012), do consider resampling to produce flaws or errors in first-order derivatives such as slope, because the number of cells is multiplied and each of the additional cells is interpolated and assigned a value depending on the interpolation algorithm used. However, Keeratikasikorn and Trisirisatayawong (2008) showed in the study that they conducted that resampling a 90m DEM into a higher resolution one, a 30m one, can result in the "same quality" as an original 30m DEM if interpolated using the bicubic polynomial algorithm.15 This algorithm is not available among the built-in options that ArcGIS provides.

15 See Keeratikasikorn et al., (2008) Reconstruction of a 30m DEM from 90m SRTM DEM with Bicubic polynomial interpolation method.


5.2.1. Resampled DEMs: interpolation

In ArcGIS, resampling a DEM to a higher resolution changes the cell size only and does not change the accuracy level or the underlying DEM cell values; that is, when resampling a DEM from 90m to 30m, the output raster map will have a cell size of 30m but still a 90m accuracy (ArcGIS Desktop Help). The most common interpolation algorithm for resampling a DEM is the 'nearest neighbor' technique, which is the default parameter in ArcGIS. For each cell in the output raster, this algorithm locates the nearest cell center in the input DEM and assigns its value to the output cell; cell values are therefore not altered through this process. Another resampling technique in ArcGIS is cubic convolution, which produces a smoother surface for the DEM; it takes into consideration the 16 nearest cells, performs a weighted average of these, and creates a new value for each cell (ArcGIS Desktop Help).

In this research study, the 90m DEM was resampled using four of the built-in interpolation algorithms in ArcGIS: nearest neighbor, bilinear, cubic convolution and majority. These techniques use different numbers of neighboring cells in the process. One of the advantages of GIS software is its ability to modify and alter cell size according to the research objectives in mind. One of the main objectives of using different resampling methods in this thesis is to compare the different least cost path outcomes with each other, to note how the results differ according to the technique used, and to relate them in a meaningful way to the actual human perception of movement. The results indeed vary greatly, not only visually but also in the length of the path; that is, some paths represent coarser results while others represent a smoother surface. These differences are clearly visible, and any researcher must be aware of these minor differences in technique that affect the results greatly.

1- Resampling the 90m DEM, using the nearest neighbor interpolation technique, into a 30m DEM.

The nearest neighbor technique is the most common resampling method, since it requires less processing time than the other options. As with all interpolation and resampling techniques, an input DEM is processed and an algorithm applied to it to produce an output raster with new values for each cell and a new cell size. The purpose of any method used is to calculate or identify new cell values in the resampled output raster. All methods work from cell centers: the center of each cell in the output DEM is compared with the positions of the cell centers in the input raster. The nearest neighbor method identifies the closest input cell center and gives the output cell the same value allocated to that cell in the original DEM (refer to figure 6 and table 5).

2- Resampling the 90m DEM, using the bilinear interpolation technique, into a 30m DEM.

When resampling using the bilinear interpolation technique, the same principle as for nearest neighbor applies. In this instance, however, the four nearest cell centers are taken into account by ArcGIS; that is, the four input cell centers closest to the output cell center are identified and a weighted average based on the distance between these centers is calculated. The final representation reveals a smoother surface than that of the nearest neighbor technique (refer to figure 6 and table 5).

3- Resampling the 90m DEM, using the cubic convolution option, into a 30m DEM.

Also using the same principle of identifying cell centers and proceeding from there, cubic convolution determines the nearest 16 cell centers to the original one in the input raster. The value assigned to the new cell in the output raster is calculated using a weighted average, as in the bilinear interpolation technique, but this time using 16 neighbors. A greater number of cells is used here, revealing a rougher ("sharper") surface, since four times the number of cells is used (refer to figure 6 and table 5).

4- Resampling the 90m DEM, using the majority option, into a 30m DEM.

The last resampling method or option provided by ArcGIS is termed majority resampling. This technique is slightly different from the other three mentioned above. The algorithm relies on identifying cells that share the same value and that represent the majority of the cells being processed; that is, when a cell and its four or eight neighboring or connected cells all have the same value in the input DEM, the output cell is given this value. The cells used for determining the new raster cell value must represent a majority for this method to work; if not, the cell value in the input DEM is not changed and remains the same in the output DEM (refer to figure 6 and table 5).

The four resampling techniques mentioned above assign the same map coordinates to the lower-left corner cell of the DEM, and from it the new map coordinates for the output raster are generated.
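The four options described above correspond to the resampling types offered by the Resample tool in ArcGIS 10 and can be scripted as follows. This is only a minimal sketch with hypothetical input and output names, not the exact commands used for the case study.

    # Minimal sketch: resampling a 90m DEM to 30m with the four ArcGIS resampling types
    import arcpy

    arcpy.env.workspace = r"C:\data\goksu"     # hypothetical workspace

    for method in ("NEAREST", "BILINEAR", "CUBIC", "MAJORITY"):
        out_name = "dem_30m_" + method.lower()
        # Resample_management(in_raster, out_raster, cell_size, resampling_type)
        arcpy.Resample_management("dem_90m", out_name, "30", method)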


Table 5. Resampling the DEM: four interpolation options.

DEM Type        Interpolation       Cell size   Acc. CP value (per unit distance)   LCP length (m)
Resampled 90m   Nearest neighbor    30m         217,739.14                          87,560
Resampled 90m   Bilinear            30m         245,885.05                          68,488
Resampled 90m   Cubic convolution   30m         255,866.66                          68,852
Resampled 90m   Majority            30m         152,139.25                          101,817

Table 5 presents the results from generating least cost paths derived from DEMs that were resampled to a higher resolution using the four options provided by the ArcGIS software. It can be noted that results vary greatly between the least cost paths generated. In the previous example, in Table 4, the average length was about 74 km; here the results range from about 68 kilometers to more than 100 km. It can also be noted that the smaller the accumulated cost of a path is in cost units, the longer the path is when converted to distance in kilometers. The results here are open to discussion, as a 30m resampled DEM should, in theory, produce more accurate results. The least cost path generated from the DEM resampled with the majority option seems unrealistic when compared with the other three. The path generated from the DEM resampled with the nearest neighbor option also yields a length that differs by about 13 kilometers from the one generated from the 90m resolution DEM.


The results presented in Table 5 above are displayed in figure 6. The results appear coarser when the nearest neighbor and majority options are used for resampling: the visual difference, and accordingly the calculated least cost path length, tend to be coarser when resampling a 90m DEM into a 30m one using the majority or the nearest neighbor technique, whereas the results from the bilinear and cubic convolution interpolations appear visually finer.

Figure 6. Resampling the DEM: four interpolation options.

The results provided here vary greatly. This visual representation simply confirms the values shown in table 5. It is crucial for the analyst or researcher to understand the tools used in any analysis and how the outcomes are computed. Any slight modification to the cell sizes or the cell values of a DEM produces different results; likewise, the formula or algorithm used to predict cell values when resampling a DEM through interpolation techniques affects the results greatly. There would be a difference, for example, if the new cell value were calculated based on 4 neighboring cells or on 16.

When resampling to a higher resolution, the number of cells in a DEM increases, and it is therefore not surprising to see differences in the least cost paths generated, that is, a difference in the number of cells constituting the least cost path. At a lower resolution, fewer cells are available for mapping or displaying larger parcels of land. Therefore, comparing the least cost path values with the number of cells present in the resampled DEM could provide new insights into the least cost path analysis method.

After resampling the 90m DEM into a 30m resolution DEM, the least cost paths generated take different routes. This is clearly shown in figure 6 above, where the paths diverge at several locations marked by the arrows on the map. This further stresses the importance of the cell sizes and cell numbers used in the analysis, and that the least cost paths generated are greatly dependent on the accuracy and reliability of the DEM used.

5.2.2. Resampled DEMs: resolutions

The 90m DEM used in this analysis was also resampled at five different resolutions, 15m, 30m, 100m, 500m and 1000m using the nearest neighbor technique as the default algorithm (figure 7).
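The batch of resolutions listed above lends itself to a short loop. The sketch below, again using the assumed arcpy environment and hypothetical names, resamples the original DEM with the nearest neighbor method at each of the five cell sizes; it is an illustration of the procedure, not the exact commands used.

    # Minimal sketch: resampling the 90m DEM to the five cell sizes used in this analysis
    import arcpy

    arcpy.env.workspace = r"C:\data\goksu"     # hypothetical workspace

    for cell in (15, 30, 100, 500, 1000):
        arcpy.Resample_management("dem_90m", "dem_%sm" % cell, str(cell), "NEAREST")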


Table 6. Least cost paths from Mut to Ermenek on different resolutions.

DEM Type        Interpolation       Cell size   Acc. CP value (per unit distance)   LCP length (m)
DEM 90m         Nearest neighbor    90m         256,639.31                          73,125
Resampled 90m   Nearest neighbor    15m         187,729.42                          92,685
Resampled 90m   Nearest neighbor    30m         217,739.14                          87,560
Resampled 90m   Nearest neighbor    100m        256,947.45                          70,495
Resampled 90m   Nearest neighbor    500m        199,007.63                          60,655
Resampled 90m   Nearest neighbor    1000m       172,674.47                          57,484

Table 6 presents the results generated from resampling the original 90m DEM to higher and lower resolutions; that is, the cell size of the DEM was modified to 15, 30, 100, 500 and 1000 meters. The results again vary greatly; the least cost paths generated from the 1000m and 500m resampled DEMs are interesting to look at. There exists a general trend: the larger the cell size, the shorter the distance in meters, and thus the smaller the cell size, the longer the distance traversed by the path. The least cost path generated from the 15m resampled DEM provides the longest route: 92,685 meters. This observation, however, cannot be applied to the accumulated cost path values expressed in cost units: the accumulated cell costs tend to be smaller with larger cell sizes. For the 1000m DEM the least cost path equals 172,674.47 and for the 500m DEM it equals 199,007.63; these, when converted to meters, correspond to the shortest routes obtained, 57,484 and 60,655m respectively. The peculiar result in this table is the path generated from the 15m resampled DEM: its least cost path value, 187,729.42, is larger than that of the 1000m DEM, yet it provides the longest route. There is a difference of about 35 kilometers between the shortest path and the longest one generated.

The paths, as reflected in the table above, vary; figure 7 shows all the least cost paths generated from resampling the DEM to other resolutions, that is, to different cell sizes. The larger the cell size, the coarser the results and the greater the software's tendency to generate or follow straight lines. In the 1000m resampled DEM, the cells are clearly visible, each encompassing an area of 1 by 1 kilometer, and the least cost path tends to follow straight lines and to have sharp edges, which represent less realistic results. The least cost path generated from the 500m DEM also displays a route that tends to follow a straight line, but to a lesser extent than the previous one, and it also has fewer sharp edges. The results derived from the 90m and 100m DEMs are the same for most of the path, except for the final section before reaching Ermenek.

It is interesting to note that a difference of only 10m in cell size can produce different results; the level of accuracy and resolution should thus be taken into account depending on the research planned. Herzog (2012) states that "Sometimes very small deviations in the model parameters or varying the start locations result in widely differing LCPs" (Herzog 2012: 15). The 100m path tends to follow a straighter line for the last part of the route than the 90m one and also appears finer, with fewer sharp edges. This is interesting, as the generally accepted trend is that smaller cell sizes increase the resolution and thus produce visually finer results.


Figure 7. Least cost paths from Mut to Ermenek on different resolutions

All the observations made so far recall the importance of understanding how the tools used function, and they emphasize the importance of cell size and resolution in any study. As seen in the example presented above, the slightest change brought to the DEM, whether changing the cell size or resampling the DEM using different interpolation techniques, produces varying results, sometimes with a large gap between them. The reason for this can be related to the way the GIS software converts the rasters to grids before calculating the least cost paths; their accuracy depends on "the constraints imposed by the links restricted to the[ir] immediate neighbors" (Herzog 2012: 8). Herzog (2012), in her study, showed that:

1) "The LCP calculated by the Dijkstra's algorithm [based on graphs and used in ArcGIS] is not the shortest path but deviates from the straight line [considered here as the optimal path]"; 2) "by changing the orientation of the grid, the length of the LCP may increase by 8%"; and 3) "even if a 48 cell neighborhood is considered, in the worst case scenario in uniform landscape, the maximum distance between the calculated cost path and the optimal path is 8% of the optimal path length" (Herzog 2012: 8).
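The 8% figure quoted above can be checked with a short calculation. If moves are restricted to the eight cardinal and diagonal directions, the worst-case direction of travel lies 22.5 degrees away from the nearest permitted direction, and the path must then be longer than the straight line by a factor of roughly 1/cos(22.5°); the snippet below simply evaluates that factor. It is offered as a plausibility check of the quoted value, not as Herzog's own derivation.

    # Worst-case elongation of a grid-constrained path in a uniform landscape
    import math

    worst_angle = math.radians(22.5)          # halfway between two permitted directions
    elongation = 1.0 / math.cos(worst_angle)  # path length relative to the straight line
    print(round((elongation - 1.0) * 100, 1)) # prints 8.2 (per cent), close to the quoted 8%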

The next part explores some modifications applied to the 90m DEM and discusses how the results vary greatly according to the positioning of actual sites.

5.3. Adding Rivers and Bridges

Least cost path analysis, as seen above, is a delicate tool; if not understood properly, the results it generates can be misleading and misinterpreted. The following section presents a least cost path analysis generated from a modified DEM: the Göksu River was added to the DEM, as well as known ancient bridges that cross it, and cost paths were subsequently generated.

The purpose of adding the Göksu River to the DEM is to represent a barrier that impedes movement, as this river is wide and flows heavily throughout the year; in springtime the water level is higher because of the snowmelt that follows a rough winter. The Göksu River impedes movement because its levels are high and it is difficult to cross in areas where no bridges are found. The locations where bridges are present, some still standing and others completely collapsed and vanished, provided corridors or passages that allowed people to cross the river and continue their journey to a destination point. The example used here maps the least cost path from Mut to Ermenek. The steps needed to generate the least cost path are the same as in the first analysis: generating slope, cost distance and cost backlink. The only difference here is modifying the DEM prior to deriving the slope raster. There are different ways to proceed in adding friction surfaces to the DEM; the next part presents some previous work exploring this idea.

Newhard and co-authors (2008) used the concept of weighted overlays in their analysis and assigned the Göksu River, the principal feature in their main study area, a value that treated it as a barrier equivalent to a 15 degree slope. They based their analysis on deriving effort maps or surfaces, while stressing that "the cost of climbing a given slope was not directly proportional to the degree of slope". Bell and co-authors (2002) used topography and land use as determinant factors for generating effort maps. Newhard et al., on the other hand, concluded that land use "was found to be impractical" given that their study area was large, complex and diverse, and showed a marked variation between past and present use of the land. They thus added the slope cost surface to the cost surface representing the Göksu River as an obstacle to movement and produced a final cost map, the slope-effort cost map (figure 8 below represents this concept) (Newhard et al. 2008: 93-94).


Figure 8. The slope-effort concept, based on Bell et al. (2002) and taken from Newhard et al. (2008).

Howey (2007) also considered waterways as barriers to movement in her study; she divided them into categories and assigned each a value ranging from 1 to 100 depending on its size or flow, with 1 denoting that movement through a cell is easy and requires no effort and, in contrast, 100 representing a barrier to movement (Howey 2007: 1835).

The next part of this chapter introduces another way to include the Göksu River as a barrier to movement.

5.3.1. The Göksu River cost surface

The first step here was to create a shapefile representing the Göksu River. The buffer tool was used to envelop the river shapefile, extending its width 100 meters on either side. The reason for applying a buffer is to make sure that the least cost path does not follow the river path or, in other words, the cells containing the lowest elevation values; some paths will tend to follow river lines, as rivers normally have the lowest elevation values in an area.

The outcome is therefore a polygon shapefile enclosing the Göksu River. An elevation value of 50,000 was assigned to it, a value much higher than the highest elevation in the DEM, for the purpose of representing high impedance: assigning a very high elevation value should stop the least cost path from crossing the Göksu River.

The second step was the addition of bridges to the Göksu River shapefile. A separate shapefile containing all locations of bridges was inserted and some editing performed. Because the Göksu River polygon has an elevation value of 50,000m, the bridges represent openings or passageways through the river. The bridges were thus drawn on the river shapefile with a 200m buffer around them and then erased, so that the cells where the bridges are located retained lower elevation values.16 The polygon shapefile was then converted to a raster map in which all cells composing the river have a value of 50,000 and all other cells, including the bridges, were assigned a NoData value by the software, in this case equal to 65,535. Map algebra was then performed through the raster calculator to convert all NoData values to zero. The result, shown in figure 9 below, is a raster map framing the Göksu River and containing two values, 0 and 50,000; the river is cut at the locations of the bridges, which are given the value zero as well.

16 There exist different ways to create and include bridges in an analysis. The above-mentioned way was thought to be clear and straightforward.
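The barrier-building sequence just described could be scripted along the following lines with arcpy (ArcGIS 10). The geoprocessing tools named here (Buffer, Erase, Polygon to Raster, and map algebra with Con/IsNull) exist in ArcGIS, but the file names, the field name and the exact parameter choices are hypothetical and only approximate the manual steps carried out for this thesis.

    # Minimal sketch: building the river-and-bridges barrier raster
    import arcpy
    from arcpy.sa import Raster, Con, IsNull

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\data\goksu"     # hypothetical workspace

    # 1. Buffer the river polyline by 100m on either side and the bridge locations by 200m
    arcpy.Buffer_analysis("goksu_river.shp", "river_buf.shp", "100 Meters")
    arcpy.Buffer_analysis("bridges.shp", "bridge_buf.shp", "200 Meters")

    # 2. Cut the bridge openings out of the buffered river polygon
    arcpy.Erase_analysis("river_buf.shp", "bridge_buf.shp", "river_gaps.shp")

    # 3. Rasterize the barrier; 'COST' is a hypothetical field holding the value 50,000
    arcpy.PolygonToRaster_conversion("river_gaps.shp", "COST", "river_ras", "CELL_CENTER", "NONE", 90)

    # 4. Everything outside the barrier (including the bridge openings) becomes 0
    barrier = Con(IsNull(Raster("river_ras")), 0, Raster("river_ras"))
    barrier.save("river_barrier")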


The final step was to add the original DEM and the river-and-bridges raster map together using the combine tool. This produced a DEM in which the cells forming the Göksu River have values of 50,000 except where the bridges fall; there, the initial DEM value was retained. The modified DEM was then used to derive slope, cost distance, cost backlink and, subsequently, least cost paths. The results were somewhat disappointing: the least cost path from Mut to Ermenek still crossed the Göksu River diagonally instead of choosing a longer route and passing through the Kadıköy Bridge found nearby. A reason for this, as Van Leusen (2002) states, is "that high cost barriers in the cost surface are skipped by diagonal moves if the barrier consists of corner-connected cells" (Herzog 2012: 10). A suggestion to overcome this limitation is 'magnification', which increases "the breadth of the barrier to two cells [and thus] no move could cross the barrier without paying the appropriate costs" (Herzog 2012: 10).

According to the GIS, the path would have a higher accumulated cost if it traveled via the nearest bridge. It was then decided to create a least cost path from Mut to another site that lay closer to the bridge and closer to Mut: the best fit in this case was the ancient site of Adrassus. The same analysis was performed and the results are shown in figure 9 and table 7 below. The interesting part here is that the path from Mut to Adrassus did pass over the nearby bridge. This shows that the least cost path tool does work properly, but with some limitations that have more to do with the way the software is used by a researcher and the way it calculates the least cost paths.


Table 7. Least cost path analysis with river and bridges: Mut-Ermenek and Mut-Adrassus

DEM Type             Interpolation      Cell size   Acc. CP value (per unit distance)   LCP length (m)
Cost path Adrassus   Nearest neighbor   90m         172,387.33                          40,447
Cost path Ermenek    Nearest neighbor   90m         277,855.38                          73,001

Table 7 presents the results obtained when modifying the original DEM, and thus the cost raster, to include a cost or friction map representing the Göksu River as a barrier to movement. The least cost path from Mut to Ermenek is equal to 73 kilometers, which is the closest to the observation made through ground-truthing, about 85 km. In this example, however, the insertion of bridges did not affect the results: the least cost path still crossed the Göksu River at a point where no bridges are found. This result was disappointing, as the Kadıköy Bridge is only 6-7 km from Mut (if a straight line is drawn between the two points) and it would make more sense for the path to cross the river via this bridge. A reason for this is that "if the chosen spreading algorithm allows movement along the diagonals as well as the cardinal directions, then in some circumstances the barrier can be breached" (Conolly and Lake 2006: 216).



Figure 9. Least cost path analysis with river and bridges: Mut-Ermenek and Mut-Adrassus

Therefore, another site that falls on the same horizontal plane as the Kadıköy Bridge was selected for the analysis: the site of Adrassus. The straight-line distance from Adrassus to the Kadıköy Bridge is about 25 kilometers. The least cost path passed via the nearby Kadıköy Bridge on its way to Adrassus, which is also closer to Mut than Ermenek.

Figure 9 represents the results contained in Table 7 above. The green circles located on the Göksu River represent the bridges found there. The path from Mut to Ermenek fails to take this into consideration, although it would make more sense visually for the path to pass through the nearby bridge instead of taking a detour away from it. The path from Mut to Adrassus, on the other hand, passes over the Kadıköy Bridge.

The following part of this chapter presents a least cost path analysis for the Dağpazarı and Karaman area, where the topography is characterized by a different set of conditions in comparison with Mut and Ermenek. The latter area is characterized by a valley, gorges and mountains, while the former is flatter, characterized by a plateau and plains. The final part of this chapter then discusses the different models created to generate the least cost paths and how to relate them to human movement.

5.4. Analysis 3: Dağpazarı-Karaman

In an area with steep cliffs, gorges and remote zones, there tend to be more flaws in the collected data values than in a flat area. Whether the DEM was created through surveying with total stations or generated from remotely sensed data, some areas are difficult to measure and error values therefore occur. It is left to the researcher to determine whether these DEM value errors are acceptable and, if they are not, what other options might be used. The DEMs accessible from open sources for certain areas of the world are generally available only at low resolutions; frequently they are the only source available for a given area and sometimes there is no choice but to use them. However, thoroughness and care are required in interpreting the results generated, as well as awareness of the consequences of using open source data.


Table 8. Least Cost Path Comparison of two different topographical areas.

DEM                                             Mut-Ermenek (Acc. CP value / LCP length)   Dağpazarı-Karaman (Acc. CP value / LCP length)
DEM_90m                                         256,639.31 / 73,125m                       88,853.125 / 52,931m
DEM_90m (condition: all cells <0 = 0)           256,639.31 / 75,040m                       88,853.125 / 52,931m
DEM_90m (condition: all cells <0 = SetNull)     256,639.31 / 73,125m                       88,853.125 / 52,931m
DEM_30m (nearest neighbor)                      217,739.14 / 87,560m                       77,935.68 / 64,243m
DEM_30m (bilinear)                              245,885.05 / 68,488m                       86,526.352 / 58,225m
DEM_30m (cubic convolution)                     255,866.66 / 68,852m                       86,526.352 / 57,978m
DEM_30m (majority)                              152,139.25 / 101,817m                      55,570.86 / 72,281m
DEM_15m (nearest neighbor)                      187,729.42 / 92,685m                       70,166.668 / 72,284m
DEM_100m (nearest neighbor)                     256,947.45 / 70,495m                       86,499.883 / 52,321m
DEM_500m (nearest neighbor)                     199,007.63 / 60,655m                       74,053.719 / 51,943m
DEM_1000m (nearest neighbor)                    172,674.47 / 57,484m                       60,654.793 / 49,197m
Straight line (absolute value)                  vector file (NA) / 48,174m                 vector file (NA) / 45,245m

As seen in table 8 above, the results generated by the least cost path analysis for the Dağpazarı-Karaman area are similar in character to those generated earlier: they all require some level of questioning and a critical review when analyzed and interpreted.

It is interesting to note here that results vary greatly according to the topography

of the area being investigated. That is, in the Mut-Ermenek example, the least cost path

generated from the original 90m DEM is about 25 km longer than the length of the


straight line drawn between these two sites. On the other hand, the least cost path

generated from the 90m DEM between Dağpazarı and Karaman only differs by about 7

km from it, although the straight-line distance is similar (48 to 45 km). The straight line,

or Euclidean distance, is the only absolute value, that is, the only constant value that is

available for comparison. It does not take topography into consideration. Thus, the small

difference in length in this area denotes that the topography in this region is closer to flat

with not much difference in elevation values. The accumulated cost path values also were

much smaller. In contrast, the least cost path generated from Mut to Ermenek proves the

fact that the topography in this area is indeed diverse as it differs greatly from the straight

line distance.

Figure 10. Least cost path from Dağpazarı to Karaman: adjusting cell values.

In the second example presented, where DEM values are altered according to specific conditions, the results also point to the importance of cell values in an analysis.

The least cost path generated under the condition that all negative values be converted to zero showed minor differences in path length compared with the path generated from the unmodified 90m DEM, but only once converted to a vector file; the accumulated cost per unit distance generated by the raster file shows the same result. This has to do with the way GIS handles vector and raster data, and it presents an advantage, enabling the researcher to convert between file formats to work with. It should be noted, however, that converting a file from one format to another can introduce some level of error.

In the examples presented here, the values in meters, that is, the vector files, are the ones taken into consideration when discussing the results. Moreover, when negative values present in the DEM are converted to NoData values, the paths generated for both areas are equal in length to those from the original, unmodified 90m DEM.
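As a concrete illustration of this step, a minimal sketch using the arcpy Spatial Analyst module is given below; it assumes an ArcGIS 10.x Python environment, and the DEM and output file names are placeholders rather than the files used in this thesis.

```python
# Minimal sketch: convert negative DEM cells to 0, or to NoData, before deriving slope.
# Assumes ArcGIS 10.x with the Spatial Analyst extension; file names are illustrative.
import arcpy
from arcpy.sa import Con, SetNull

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\lcp_project"          # hypothetical workspace

dem = arcpy.Raster("dem_90m.tif")                # hypothetical input DEM

# Condition 1: all cells < 0 become 0 (the "<0, = 0" rows in Table 8)
dem_zero = Con(dem < 0, 0, dem)
dem_zero.save("dem_90m_neg_to_zero.tif")

# Condition 2: all cells < 0 become NoData (the "<0, = Set Null" rows)
dem_null = SetNull(dem < 0, dem)
dem_null.save("dem_90m_neg_to_null.tif")
```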

The least cost paths generated from DEMs resampled using different interpolation methods produced interesting outcomes, especially in the Mut–Ermenek area. The results varied greatly between the four methods (nearest neighbor, bilinear, cubic convolution and majority) used to resample the DEMs to 30m: there was about a 30 km difference between the nearest neighbor and majority resampling methods, while the bilinear and cubic convolution techniques generated paths almost equal in length. In the Dağpazarı–Karaman area the paths generated differed by about 15 km, with the bilinear and cubic convolution techniques again producing close results.

Lastly, the paths generated from DEMs resampled at different resolutions (15m, 30m, 100m, 500m and 1000m) in both areas revealed notable results. In both examples, only the 15m resolution paths seem to differ greatly from the others, generating the longest paths connecting Mut with Ermenek and Dağpazarı with Karaman. The general trend seems to be that higher resolution DEMs tend to generate longer paths, as seen in the table below.

Table 9. Least cost paths generated from resampled DEMs (accumulated cost / path length).

DEM | Mut–Ermenek | Dağpazarı–Karaman
DEM_15m (nearest neighbor) | 187,729.42 / 92,685 m | 70,166.668 / 72,284 m
DEM_30m (nearest neighbor) | 217,739.14 / 87,560 m | 77,935.68 / 64,243 m
DEM_90m (nearest neighbor) | 256,639.31 / 73,125 m | 88,853.125 / 52,931 m
DEM_100m (nearest neighbor) | 256,947.45 / 70,495 m | 86,499.883 / 52,321 m
DEM_500m (nearest neighbor) | 199,007.63 / 60,655 m | 74,053.719 / 51,943 m
DEM_1000m (nearest neighbor) | 172,674.47 / 57,484 m | 60,654.793 / 49,197 m
Straight line (absolute value) | NA (vector file) / 48,174 m | NA (vector file) / 45,245 m
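As an illustration of how such resampled DEMs could be produced, a hedged arcpy sketch is given below; the source file name, cell sizes and output names are assumed for the example and are not taken from the thesis data.

```python
# Sketch: resample a 90 m DEM to the cell sizes and interpolation options compared above.
# Assumes ArcGIS 10.x; input and output names are illustrative.
import arcpy

arcpy.env.workspace = r"C:\lcp_project"          # hypothetical workspace
src = "dem_90m.tif"                              # hypothetical source DEM

# Resolution comparison using nearest neighbor, as in Table 9
for cell_size in (15, 30, 100, 500, 1000):
    arcpy.Resample_management(src, "dem_{0}m_nn.tif".format(cell_size),
                              cell_size, "NEAREST")

# Interpolation comparison at 30 m, as in Table 8
for method in ("NEAREST", "BILINEAR", "CUBIC", "MAJORITY"):
    arcpy.Resample_management(src, "dem_30m_{0}.tif".format(method.lower()),
                              30, method)
```

Note that resampling a 90m DEM to 15m or 30m does not add real topographical detail; it only changes the cell structure on which the cost calculations operate, which is precisely why the resulting paths differ.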


Figure 11. Least cost path analysis from Dağpazarı to Karaman: four interpolation resampling options.

It can thus be noted that at least two key aspects are crucial for any analysis: the DEM values derived from satellite imagery and the resolution of the DEMs used. Interpolation techniques and the capabilities of the GIS software also affect the results and are important to take account of; these, in turn, depend on both the cell values of the DEMs and their resolution.


Table 10. Topographical differences between both areas.

Mut–Ermenek LCP | Dağpazarı–Karaman LCP
Movement from the bottom of the valley to the top of the mountains | Movement from a plateau to a plain, mainly movement on a flatter surface
River crossing and steep escarpments | No river present in the area
DEM values include more voids in this area | DEM values seem to be more accurate in this area
DEM values and sampling methods seem to be more important than resolution | Resolution, resampling and interpolation seem to be important
Difference in path length = 35 km | Difference in path length = 12 km

Figure 12. Least cost path analysis from Dağpazarı to Karaman: resampling using nearest neighbor interpolation.

In the Mut–Ermenek area, the DEM cell values seem to be more important than the resolution chosen for the analysis. In contrast, the results generated from the Dağpazarı–Karaman area show less error in the DEM values and seem to be more sensitive to resolution, or cell size. The difference in length between the paths generated between Dağpazarı and Karaman is considerable, since it can reach more than 23 km between the shortest and the longest paths generated. Looking closer at the paths between Mut and Ermenek, the differences in path length are considerable as well.

The next part of this chapter presents Google Earth as an alternative way to view and interpret the outcomes of least cost path analyses. Understanding the results through three-dimensional visualizations provided by Google Earth could offer a new way to observe, analyze and understand the various results. The advantages of this software17 lie in its ability to provide the user with acceptable, high resolution imagery, giving a different viewing perspective of the area. The fly-in mode allows the user to interact with the respective map or imagery and to explore it in more detail, zooming in and out, thus viewing the landscape from a three-dimensional perspective at any resolution.

5.4.1. Google Earth

Marking important features such as roads, rivers and buildings by manually drawing lines and polygons on the DEM is an easy process to accomplish, but it requires a lot of time. It is often necessary, however, in order to enhance the visualization and the understanding of relationships between features, as well as to help in the interpretation of the results.

17 Google Earth can be downloaded free of charge; a more advanced professional version of the software is available for a fee.

Nowadays, with the continuous development of GIS and visualization techniques, as well as high resolution data, programs such as Google Earth can be useful in aiding the interpretation of the results.

It is a good visualization tool that derives its imagery from aerial and satellite images taken in different years.

The paths generated from DEMs with low resolution (1000m and 500m) are straighter and accommodate the topography of the area less. When these results are translated into Google Earth, a clear observation can be made: on the three-dimensional globe surface, these paths do not follow the topography of the region displayed; they jump over gorges and take no account of the slopes and steep escarpments that characterize the Göksu Valley, instead forming straight lines drawn across them. They appear as floating lines lost in space, which makes one wonder about their validity and how to treat these results.

Thus, as mentioned earlier, DEM values and resolution are crucial factors to take account of when performing any GIS analysis, and in particular least cost path analysis. As the results in the tables above show, these key factors give different results when they are modified or adjusted. There is an important question to pose here: which of the paths generated is the accurate one? And accurate in comparison to which relative or absolute value? Especially when results show clear discrepancies, which should be dismissed and which holds true? All these questions are interesting in their way; however, they all focus on one goal, which is finding the ultimate route.


Figure 13. Least cost path analysis in the Göksu Valley: two different landscapes.

The truth, however, is that more than one possible outcome exists for any predicted route, depending on factors such as DEM values, interpolation and resampling techniques. The fallacy that an ultimate route exists should be avoided. The many paths generated represent models of the real world and cannot be equated with the actual movement of individuals, their perception of the landscape, or the political and social aspects that affected their journeys. These models, regardless of their accuracy, provide a new way to view the results of least cost path analysis and to understand them.


Beyond providing a controllable environment in which to carry out the analysis, GIS software offers a different visual way to process and analyze information. With its ability to read and convert many types of data, ArcGIS can convert the paths generated into a format suitable for viewing in other software. The shapefiles of all paths generated in this thesis were thus converted to Keyhole Markup Language (KML) files and inserted into Google Earth for a better visualization of the topography. These are shown in the figures below.

Figure 14. Google Earth, least cost path in the Göksu Valley

The colored lines in figure 15 represent the least cost paths generated in ArcGIS at different resolutions.


Figure 15. Google Earth, least cost path in the Göksu Valley
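As a rough illustration of the conversion step described above, the arcpy sketch below exports one path shapefile to a KMZ file that Google Earth can open; the layer and file names are assumed for the example and the scale argument is a nominal value.

```python
# Sketch: export a least cost path shapefile to KMZ for viewing in Google Earth.
# Assumes ArcGIS 10.x; file and layer names are illustrative.
import arcpy

arcpy.env.workspace = r"C:\lcp_project"                    # hypothetical workspace

# Create an in-memory layer from the shapefile, then export it to KMZ.
arcpy.MakeFeatureLayer_management("lcp_mut_ermenek_30m.shp", "lcp_layer")
arcpy.LayerToKML_conversion("lcp_layer", "lcp_mut_ermenek_30m.kmz", 1)
```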

5.5. Conclusion

As seen in this chapter, the least cost path tool remains a delicate and fragile one; if it is not understood properly by the user, the results it generates will leave plenty of room for doubt.

There are thus two main problems when generating least cost paths, as seen throughout the chapters presented here: the resolution of the DEM used, and the number of neighboring cells the GIS software is able to include when computing least cost path analyses.

All results generated in this case study vary greatly according to cell size, resampling technique, and the modifications and additions made to the DEM used to derive the least cost paths. It is crucial for any user to be aware of and understand how each tool functions and the steps by which the GIS generates new results. The statistical side of ArcGIS should not be neglected or overshadowed by its visual side. ArcGIS is not a tool used solely to produce beautiful maps; there is more to it, and understanding the statistics behind the visual representations remains critical.

The following chapter explores other limitations of least cost path analysis, mainly the social and behavioral variables that affect movement and that researchers deem crucial to consider for more accurate and realistic results. Most movement and interaction studies that have used least cost path analysis have so far focused on the energy and time an individual spends when traversing a landscape. These vary greatly from region to region and also from individual to individual. Agreeing on a general approach, that is, a single best way to generate least cost paths, proves difficult. Researchers are constantly establishing new formulae intended to measure movement accurately, sometimes giving energy more weight, at other times taking time as the primary cost factor. In addition, topography plays a large role, as it greatly affects both energy and time values. Whether in real life a person would consider time more important than energy for his or her journey remains relative to each individual: a person could choose to take a longer route to avoid rugged terrain, or a shorter route that means less time spent traveling but more difficulty (more physical energy required).

When relating paths generated by the software to actual human movement, time and energy seem to be logical variables to consider, because these are what people usually think of when they are about to start a journey. The question to pose here is how to quantify these variables accurately, and whether this is feasible to a satisfying degree. This process thus requires working with base maps such as DEMs and rasters that provide data values, as these are the values the software uses in generating least cost paths.

Variables added to the analysis (that is, not generated by the software) that could be altered include, for example, the distance traveled, the metabolic rate of different individuals, and their speed of movement depending on group size and on whether they were moving on animals or with carts. It is therefore important to take account, first of all, of the accuracy of the DEMs used and how they were generated, which forms the base for any GIS analysis. The next and final chapter discusses in greater detail the social, cultural and human behavioral 'cost' variables that affect movement.


Chapter 6: Interpretation and Conclusion

Least cost path analysis is becoming a widely used method in the study of movement and interactions between sites and cultures. There are countless ways to generate least cost paths; some reflect the time or the distance needed to travel from one place in the landscape to another, while others emphasize the energy exerted by the walker on his or her journey. Some consider the topographical layers of the landscape a good proxy on which to base the analysis, while others consider this insufficient and argue for more energy-based algorithms that would produce more plausible results.18

Fewer researchers focus on the importance of resolution and accuracy and on the possibility that these could, and do, greatly affect the outcomes. All these attempts to measure movement accurately, whether according to topography, walking speed or metabolic rate, provide valuable insights for the study of movement. However, it should be kept in mind that there need not be one optimal path to predict and that there is no single standard way of producing least cost paths.

Each GIS software package produces least cost paths in a different way, based on different algorithms, and each provides a different set of rules and many options to select from when generating these paths. The advantage of GIS environments, in comparison to paper-based maps and textual historic documentation, thus lies in their ability to provide a controllable framework in which to apply the data and to generate many different models, based on different variables, that attempt to replicate the real world. A challenge remains in relating the computational outcomes to real-life events and in interpreting them in relation to actual human movement. However, if it is clear to the researcher that there is no single answer and no one perfect path to find, then there will be more room for expanding and developing new ideas in interpreting the results. Comparing the models created and testing them against survey and field data as well as other textual documentation, and exploring new ways to visualize and understand the data, are all crucial for validating the results. This is where the difficulty lies with least cost paths, not in creating more complex algorithms to convert or translate paths into human movement.

18 It can be debated whether energy-based approaches modeled on first explorers are actually a good measure for the establishment of the first-route phase; topography and other variables should be included.

6.1. Summary

This thesis used least cost path analysis in a Geographic Information Systems environment, a tool used to spatially analyze relationships between locations in a landscape. The Göksu Valley, located in south-central Turkey, was the area of focus, an area known for its rugged and diverse terrain, as the regional name "Rough Cilicia" indicates. It is surrounded by the massive Taurus Mountains, which made movement through the area rather difficult. However, some paths and natural corridors through the mountains existed and helped connect the central Anatolian plateau with the larger rhythms of the Mediterranean coast, Cyprus and the Levant. This area is arguably a key location in the region and reveals numerous archaeological remains dating to the late Roman Imperial period. This period had a great impact on the surrounding landscapes as road networks were built and improved, making communication and movement somewhat easier.

The objective of the thesis was therefore to investigate sites around the Göksu Valley dating to the late Roman Imperial period and to explore the relations and interactions between them. The least cost path tool in ESRI ArcGIS 10 was used to locate potential routes linking three major sites surrounding the Göksu Valley.

Least cost path analysis at a first glance seems like a fairly straightforward method, made easy to use by the push button function available in most GIS software nowadays. This increasing “user-friendliness”, however, masks the limitations of the software and undervalues the complex and detailed algorithms designed to make the tool attractive (Herzog 2012; Kantner 2012). The aim of the thesis thus was to take a detailed look at how ArcGIS calculates the paths by manipulating and modifying the data available for the study area.

There are three aspects that influence movement: "environmental features, travel time and travel effort" (Livingood 2012: 1 ). Environmental features are the most reliable to reconstruct virtually, as the landscape has not changed much over the past few thousand years. The topography of a region has a great influence on movement and usually dictates it. Social, cultural and political factors also affect movement, as do newer concepts such as "switchbacks" and uncertainty factors; these, however, are difficult to quantify, although they are gaining more attention in recent studies. All of these factors influence the decisions made by individuals when traveling across a landscape, in terms of time spent and/or energy exerted. It should be kept in mind that

No matter which GIS software one uses, the most crucial point during the process of creating a cumulative cost surface is the selection, combination, and weighting of these environmental factors. It is an individual decision of the single researcher and therefore a completely subjective process (Gietl et al. 2008:2).


6.2. Results

For the thesis presented here only one factor was manipulated: the DEM resolution. As the results of the analysis demonstrate, any slight modification to the DEM results in alternative paths, with higher resolution DEMs generating longer paths and low resolution ones producing lower slope angles. Adding isotropic costs to the cost surface, such as rivers or vegetation cover, also affects the route least cost paths take. Weighting the different topographical influences then becomes important when more than one factor is in play.

In the study conducted here, 14 different least cost paths were generated for the route from Mut to Ermenek and one for the route from Mut to Adrassus. The 30m DEM resampled using the nearest neighbor option produced a least cost path of about 87 km, the closest in length to the actual driving distance on the modern road from Mut to Ermenek, which is about 85 km. However, this is not enough to consider it the most reliable result when relating it to human terms of movement. The same analysis was conducted for Karaman and Dağpazarı, and 11 paths were generated.

The difference in the topographical setting of this area is clearly visible and also reveals interesting insights. Selecting which of the paths generated for the two areas should be considered the most accurate is not simple; it depends on many factors, such as the research objective, the DEM used and its resolution and accuracy, and the limitations of the GIS software in mapping movement realistically.

New ideas and concepts are constantly developing, with more complex algorithms generated and tested, while the majority of studies still consider topography a good basis for generating cost surfaces and least cost paths. Regardless of the variables selected, the least cost paths generated through GIS are models of reality and should be tested against other contextual and textual documentation. As many authors agree, the results should not be considered end results; rather, they should be compared with attested archaeological features and used at the beginning stage of research. Field surveying and field walking documentation are also important to consider and to compare the results against. Furthermore, Google Earth, a virtual globe, seems to be a useful program for locating potential routes and roads, as the maps it uses are derived from satellite images, and it provides a good visualization environment for comparing least cost paths with modern routes and features of the landscape.

6.2.1. Late Roman Isauria

The main late Roman sites of Isauria used in this study were the cities of Mut, Ermenek, Karaman, Dağpazarı and Adrassus. These are located around the Göksu Valley in different topographical settings: Mut lies at the bottom of the valley, Adrassus on its higher western slopes, Dağpazarı on a plateau east of the valley, and Ermenek at the top of the mountain flanking the Göksu Valley on its western side. Karaman, which became a more prosperous city later, during the 13th century A.D., lies north of the valley bordering the Konya province; it is characterized by plains and also lies on a plateau. The complexity of the terrain that characterizes the inland cities of Isauria provides a good setting for studying movement and interactions between them. It also provides a means to test the reliability of least cost path analysis in different topographical regions and the effect of accuracy and resolution on the paths generated.

The Göksu Valley was an important area during the late Roman Imperial period. It was thought to be an easy passage through the many high mountains that shape the landscape around it. Slopes are known to be steep in Mediterranean mountains such as those that surround the Göksu Valley; "Most Mediterranean mountains are rugged, with slopes of less than 15 degrees rather rare" (McNeill 1992: 16). The Taurus Mountains that surround the Göksu Valley made movement rather difficult; there were, however, passageways through them that linked the Anatolian plains with the Mediterranean Sea:

At two points along the length of the Taurus, rivers cut through from the Anatolian Plain to the Mediterranean: at Mut, in the central Taurus, where the Göksu flows by, and at the Cilician Gates in the eastern Taurus, carved out by the Çakıt River (McNeill 1992: 20).

During the late Roman period these would have experienced a lot of traffic,

“For millennia armies and caravans have passed through the eastern Taurus at the Cilician Gates. The route through the central Taurus at Mut, connecting the Konya Plain to the sea at Silifke, has seen almost as much traffic” (McNeill 1992: 22).

The "large scale import and export of goods" along the Mediterranean (Elton 2002: 1 3), as well as "collecting taxes and maintaining law and order" (Elton 2002: 1 6), were key factors that maintained the empire. Road networks for communication made these movements more efficient in general. The roads around the Göksu Valley were minor routes, the Cilician Gates being the only major road through the Taurus Mountains (Elton 2002: 181).

The least cost paths generated through the analysis presented above proved to be interesting: the path from Mut to Adrassus appears to follow the route that leads to the Kadıkoy bridge on the Göksu River. This suggests some thoughts: Adrassus falls approximately midway between Mut and Ermenek on the western side of the Göksu River and was perhaps a key city for travelers leaving Mut to take a break and gather provisions for the rest of their journey to Ermenek.

The path that links Mut and Ermenek directly appears to jump over the Göksu River at a location where there is no bridge and takes a different road to its destination. The distance from Mut to Adrassus is about 40 km and that from Mut to Ermenek about 70 km. The distance between Adrassus and Ermenek is less, roughly 30 km, thus making the two alternative journeys about equal in cost. It is approximately a two-day journey on foot.

A potential further analysis would be to explore the Mut–Karaman route and test the assertion that it is a major route, as stated by French and McNeill, whose opinion conflicts with Elton (2004). Further research on sites and small villages within the Göksu Valley is also needed to explore the concept of planning and dividing journeys into stages, and to consider lay-over towns for rest and provisioning. Furthermore, more research could be carried out on bridges, evaluating their importance during different periods, which would further clarify the state of movement in the Göksu Valley. Regardless of the topic investigated, the results generated should always be questioned and tested against other textual documentation and surveying data.

To conclude this part, GIS represents a virtual world where data and files can be manipulated and controlled to suit different conditions or environments. It creates models that enable the generation of new data types and files, each with its own set of specifications. The purpose of GIS analysis is to offer a different practical perspective from which to view and understand past environmental and social events; however, the outcomes should not be considered end results. Rather, they should be used in concert with textual and contextual documentation, for these should in principle complement each other and provide a comprehensive understanding of the topic studied. The next part of this chapter attempts to bring these two approaches together.

6.3. Computers vs. Humans


Some archaeologists consider "that a low resolution DEM comes closer to the human perception of a landscape" (Herzog 2008: 238). Some thoughts could be that humans actually create some kind of mental maps based on their experience of the landscape. The more they experience it, the better/more accurate their mental map is (if a path is short, then it is experienced more often). On the other hand, the smaller the cell size in the DEM is, the higher the resolution is and more detail in the landscape can be captured. However, this concept that a greater resolution equals a greater accuracy (Elton personal communication, 2013) becomes questionable when taking into account how well the actual landscape experienced is known to the person traveling.

Understanding how functions, tools and algorithms in a GIS environment relate to human perception of the landscape is important, for this helps the researcher know which GIS tools need to be used for a study. When moving in a known landscape, landmarks and known locations are sketched into the mental map of the people moving, and choices of movement thus become dependent on variables other than the environmental and topographical ones. In some instances longer paths could be preferred over shorter paths requiring more energy: for example, if known villages are located near the path, the traveler might decide to divert from it to reach a village where he or she can replenish provisions and rest. Cell size becomes less important in this case, because the distance traveled, which is derived from cell size and converted to travel time, varies accordingly. Therefore, other variables that depend greatly on documented sources and interpretation should be taken into account when generating least cost paths.


Human behavior and human experience are two concepts that are difficult to quantify and cannot simply be reduced to numbers. There is also a factor of 'mental expenditure' that should be taken into consideration. In other words, people would choose to move differently according to whether built roads or tracks are found along their way, or whether they have to make their own way through areas where no paths exist; the first option seems the more likely (personal communication, Jennifer Moore, 2014).

Fewer studies have involved measuring and analyzing the movement of animals and carts in steep areas. This would require modifying Tobler's hiking function, which is used to relate slope to movement (see Bevan 2011). Herzog (2012) briefly discusses this issue, stating that the same cost function used for human pedestrian movement can be applied to walkers followed by pack animals; the walkers "choose the direction of movement, with walkers adjusting their paths to the needs of pack animals only if the animals are less resourceful than humans" (Herzog 2012: 5 ). Furthermore, she notes that quantifying movement for wheeled vehicles needs a different cost function, "as these cannot climb steep slopes as easily as walkers" (Herzog 2012: 6). She suggests using a function presented by Llobera and Sluckin (2007), cost(s) = 1 + (s / s_crit)^2, where s is the slope and s_crit is the critical slope, which "typically is in the range of 8 to 16" (Herzog 2012: 6).
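To make these cost functions concrete, the short Python sketch below implements Tobler's hiking function in its standard published form together with the quadratic critical-slope cost for wheeled vehicles as reconstructed above; it is an illustration only, not the thesis's own implementation, and the example slopes are arbitrary.

```python
# Sketch: two slope-dependent cost functions discussed above.
# Tobler's hiking function is the standard published form; the wheeled-vehicle
# cost uses the quadratic critical-slope function as reconstructed in the text,
# so treat both as illustrations rather than this thesis's exact implementation.
import math

def tobler_speed_kmh(slope):
    """Walking speed (km/h) for a given slope = rise/run (e.g. 0.10 for 10 %)."""
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

def tobler_cost(slope):
    """Time cost per unit distance: the reciprocal of walking speed."""
    return 1.0 / tobler_speed_kmh(slope)

def wheeled_cost(slope_percent, critical_slope=10.0):
    """Relative cost for wheeled vehicles; critical_slope typically 8-16 %."""
    return 1.0 + (slope_percent / critical_slope) ** 2

if __name__ == "__main__":
    for s in (0.0, 0.05, 0.10, 0.20):
        print("slope {:.0%}: walker {:.2f} km/h, wheeled cost {:.2f}".format(
            s, tobler_speed_kmh(s), wheeled_cost(s * 100)))
```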

Moreover, a new concept that is finding its way into movement studies is the 'risk' factor. People are likely to weigh 'risk' more heavily than energy or time when traveling (personal communication, James Conolly, 2014); that is, they will tend to take safer routes regardless of distance or effort. Bevan (2011) included a variable of 'uncertainty' in his model, which he added to the time and effort variables. He considered it plausible to take into account the time spent traveling from one point to another as well as the effort spent (the metabolic energy), and to add to them the concept of uncertainty, which includes aspects such as personal danger or losing a cargo, "in terms of uncertain travel conditions". He stresses that these three variables cannot be assumed to be "related in a simple linear way to one another"; thus relationships between culturally influenced costs and empirical ones need to be established (Bevan 2011: 2). Bevan also noted the importance, for movement studies, of modes of travel involving different numbers of travelers, because "different group sizes (individual donkey or ship transport versus large caravans or fleets) obviously have different consequence of each of these different modes" (Bevan 2011: 2). This is interesting because 'uncertainty' can operate on different levels. Other issues relevant to address include, as Bevan states:

1) different kinds of assisted travel and transport technology (on foot, horseback, in a boat), 2) different seasonal constraints, 3) different traveling agendas, 4) intermodal change (switching from boat to cart or foot), 5) different group sizes (individual donkey or ship transport versus large caravans or fleets) (Bevan 2011: 3).

As mentioned above, the fact that the least cost path tool detects the easiest route to take does not necessarily mean that people actually used it in their travels. Many other factors, such as weather, the time of year or cultural areas, must also have influenced movement and decision-making. Thus, for Newhard and co-authors (2008) the end results should be "used in concert with other data or at the beginning of the hypothesis-forming stage of research, thus providing the researcher with questions based on basic assumptions of topography" (Newhard et al. 2008: 92). Howey likewise states that there is no optimal path, but rather preferred or assigned ones. Howey not only took into consideration the geological and environmental factors in her study, but also noted and incorporated features of 'cultural resistance', which also impede movement. She thus states that "the optimal path is the one that passes between points with the minimum accumulation of these resistances or costs" (Howey 2007: 1832).

It should be kept in mind that humans do not move through a landscape following a fixed pattern or in straight lines. It is interesting to note that the only pattern that could be considered relevant to movement is that of vision, or visibility. People tend to move according to what they see along the way rather than moving in straight lines and jumping around like chess pieces. Visibility, which shapes the perception of the landscape, is a key concept for movement and should be explored in greater depth in future studies.

6.4. Conclusion

Least cost path analyses are becoming widely available and popular in many disciplines nowadays, especially in archaeology. GIS software provides a different perspective for studying movement and mapping least cost routes; along with theoretical and field documentation, it can convey a better understanding of how events happened in the past. The constant, rapid development of GIS software enables further progress on the least cost path tool towards more realistic and reliable representations of human movement in the landscape. However, this tool remains a delicate one, and if it is not understood properly at the initial stage, misinterpretations and doubtful results can occur. New thinking about how to relate or translate the computational results into human behavioral terms is recommended. Using Google Earth, for example, provides new insights for visualizing and interpreting least cost paths. More recent technologies that are available nowadays, such as portable computer tablets and online access to data, as well as other high resolution fieldwork techniques, would also aid in validating least cost paths. This, however, is not a simple or trivial process; it requires a great amount of time and energy and depends greatly on the size of the study area and the scale of the analysis. Being able to survey an area and examine the accuracy on site, using Google Earth for example, could be an interesting approach to think about for the future.


Appendix 1: Trip report

The purpose of this report is to recount and give a general overview of the trip taken to Turkey in May 2012. The objectives for the two student participants were to visit sites considered important for the development of two MA theses, one on Roman agriculture and the other on travel and communication routes. The ground-truthing consisted of traveling in a Toyota across the varied and diverse landscapes that south-central Anatolia offers, moving from village to village and from inland cities to coastal ones on the Mediterranean.

The report given here considers and complements only the thesis concerning travel and routes around the Göksu Valley, which is the area of study. Movement has always been an important factor for ancient and modern societies, as the relations and connections between various cities and villages reveal a finer picture for many study topics related to settlement patterns, to interactions between the landscape and the people, and more. One of the tasks, therefore, was to gain a more or less full understanding of the ease or difficulty of movement, taking into account routes and relationships between sites around the Göksu Valley, referred to as local centers, and coastal sites, as well as sites found on plateaus or at the tops of mountains. This was mainly done by driving from 1) the inland plain of Konya, passing through Karaman located on a plateau, 2) reaching down south to the villages of Mut and Silifke, 3) then driving along the coast from Silifke to Anamur, and finally back inland to Karaman and then to Ankara, which had been our starting point. This involved spending on average one to two days in each village on the way, visiting archaeological sites of importance and archaeological features lying in the landscape and detected along the way. Handheld Global Positioning System (GPS) devices were used to locate features that had not been surveyed previously and to keep track of the roads taken and the altitude reached, as the latter changes drastically when moving from one area to another.

The report is divided according to the days spent in the field, in ascending order, starting from day one, May 15th 2012, until the last day, May 23rd 2012, a total of nine days in the field. The main sites of interest in the study area and the sites visited are noted, as well as important lessons learned about movement, the detection of ancient paths, and other elements related to the study. The trip's members were Professor Hugh Elton and two MA students (Amandah van Merlin, Nayla Abu Izzeddin), with the collaboration of the British Institute of Archaeology at Ankara (BIAA) and Trent University. Both theses are considered complementary to the Göksu Archaeological Project (GAP), which was initiated by Hugh Elton in 2002 and consisted of intensive and extensive surveys around the Göksu Valley.

Day 1: May 15th 2012 Leaving: Ankara towards Karaman

I arrived in Ankara on Sunday the 13th of May to meet up with Professor Elton and Amandah, who had arrived two days before. We stayed at the BIAA hostel, where we shared the space with three students undertaking their doctoral studies at various universities and working at the BIAA. The first two days in Ankara were mainly used to go through readings and to take advantage of the BIAA library, which contains manuscripts, documents and publications rarely found in other libraries.

We left Ankara on the 15th of May at 7 a.m. to avoid traffic jams and to make the most of the daylight hours, since driving at night is not recommended. The road we followed is considered simple: there is only one major road connecting Ankara with the south, highway D715 (Figure 1). This modern road is probably built on top of the older Ottoman and even the ancient Roman road.

We thus took highway D715 from Ankara towards Karaman, stopping by Çatal Hüyük, considered one of the most influential archaeological sites ever and excavated by Ian Hodder. Parallel to the highway is Gölbaşi Lake, which runs along the main road for most of the way. It is worth mentioning here that Turkey, ancient Anatolia, is an immense country covering 783,600 km² of land, and that moving from one province or region to another reveals drastic changes in the landscape and in the environmental conditions experienced. Driving from Ankara south to Konya revealed a landscape characterized by flat plains, where shepherds were herding sheep and the highway cut through small villages, dividing them in two. If one asks who would actually have passed through this area when traveling from inland sites to coastal ones, one answer could be armies. They could move easily in this region, which offers space for large numbers of people to move freely thanks to the flatness and width of the terrain, which extends for about 262 km to Konya. Another aspect to keep in mind is the presence of water. Individuals or groups would tend to move towards areas where water is found, and roads, in general, would run parallel to or follow the course of water channels. Travelers crossing a large and diverse terrain would follow the easiest possible way or route to reach their destination, along roads that provided way stations or fountains. These were widely attested during our travels across Turkey and into the countryside. Therefore, for the purpose of studying travel and movement around the Göksu Valley, the unit of measurement considered most appropriate is 'movement per day'; for example, traveling from Constantinople to Antioch would take about two weeks, or 14 days, on foot, depending of course on the means and conditions of travel.

Also while driving on the highway, we discussed agriculture and the formation of hüyüks and their placement along the main road at a pace of about one hüyük every 20 km, probably a day's journey, which suggests the regularity of settlement patterns through the years. It was also noted that the villages along the way all share great similarities that point to the continuity of habitation there: they all possess at least one mosque, depending on the size of the village or town, water towers, and houses that are at most two to three stories high.

To sum up briefly, a few important concepts are noted here. There are two layers covering the landscape: the first represents an agricultural level, while the second denotes long distance travel. The economic gain produced by movement is worth considering here: people of the Bronze Age moving through such landscapes and over long distances would not have drawn any economic gain for themselves or their communities. Things were the opposite during the late Roman Imperial period, when business and economic markets existed and the whole empire could benefit from them. It is also worth thinking about the number of people moving and the different means of transportation they used, such as carts, animals or travel on foot. If people traveled from Constantinople to Antioch, which roads, routes or paths did they take? Did they continue to the coastal cities and then move along the coast? Or did they travel inland, through rugged terrain, using a shorter route? Which element is more important to take into account here: distance, the effort exerted in moving, or both?

At 12:45 pm we were still driving south towards Karaman, trying to learn and repeat a few basic Turkish words and some numbers. The good thing about the field trip was that no fixed schedule had been prepared, so we were able to change our plans freely according to our progress and other factors along the way. At around 2 pm we arrived at Karaman, where we had lunch and took time to visit the archaeological museum and to walk around the town.

Day 2: May 16th 2012

The first stop that day was the well-known Sertavul Pass, which is considered by many to be the easiest way to access the Göksu Valley. The pass is located on the main road leading from Karaman south to Mut. It is known that this main road was not constructed on top of the ancient road; the latter must have passed somewhere nearby.

The first thing to notice here is the drastic change of topography encountered while driving. We were now at the foothills of the Taurus Mountains, which surround the Göksu Valley on three of its sides, like a rectangular box open on one side only. This landscape is mountainous, with trees and forested areas everywhere. About twenty minutes earlier we had been driving through completely flat terrain, and now we were climbing up towards the plateau of Dağpazarı (DP) and heading towards the village of Lale, passing through the Sertavul Pass. The pass is located in the central Taurus Mountains, 1600 m above sea level. There we took an off-road track leading to the DP plateau, where we drove around following potential paths. The drive on the plateau was going well until the Toyota truck got stuck in the mud; after many failed attempts to pull it out we sought help from shepherds met on the way, and after further failed attempts a tractor was needed, which eventually towed it out. We then continued through the pass heading south to the village of Mut in the Göksu Valley. On the way a stop was made at Alahan, where a late Roman church complex stands built into the mountains. We took some time to explore the site while restorations were taking place, and a few hours later headed back onto the main road leading south to Mut, where we spent the next two days.

Day 3: May 17th: Mut and the Göksu Valley

The plan for this day was to drive from Mut down to the bottom of the Göksu Valley. The valley can be divided into three vertical parts or zones; villages lie all around the valley's sides and also at its bottom. Each village then corresponds to a zone, which sometimes differs greatly from the other areas.

The first stop was the Yapıntı Bridge, which belongs to the Islamic period but probably has earlier remains dating to the preceding Roman period. The river flowing below it is the Pirinc Suyyu, which dries out in the summer and reaches a higher level in spring from the snow melt that pours into it. It is the only river on this, the eastern, side of the valley; it passes through the Dağpazarı (DP) gorge and runs into the Göksu River. It is important to keep in mind that the DP gorge is not realistically crossable, as it is formed of steep rocky walls and has the Pirinc Suyyu running through its bottom.

The next stops on our way to the bottom of the valley were Geçimli and Karacaağaç-Burun, where some ancient wine presses were attested. Still heading towards the bottom of the valley, driving down steep descents, we also examined the olive oil press at Hacı Hasan Taşı and then reached the bottom of the valley, where the village of Köprübaşı is located. The road down to the valley was steep, with sharp turns cut into the walls of the valley. Some mountainsides visible from where we were driving showed major landslides, and movement there would probably have been avoided because of the steepness of the terrain. The mountain to our left was Mahras Dağ, which is considered the boundary on the western side of the Göksu Valley. Many archaeological features and remains dating to the Roman period and later were attested, scattered along the way. The Aloda church excavated by Michael Gough was visible, and Roman tombs were detected on several occasions, carved into the rocky sides of the valley and built in the landscape near ancient roads. Indeed there was a visible ancient path parallel to the modern road that once led to a bridge constructed over the Göksu, which has since collapsed.

After reaching the bottom of the valley, we examined the bridge at Köprübaşı, built to make the Göksu River crossable. The older foundation of this bridge probably dates back to the Roman period; the later additions are the work of the Seljuks, who were powerful in this region during the 14th century A.D. Gravel terraces were found along both sides of the river, which runs north-south, passing through the Taurus Mountains and heading towards the Mediterranean Sea. The ancient bridge was built to accommodate carts as well and is very well preserved to this day. A Roman funerary inscription was found on the bridge. A point to mention here is that the Göksu River's flow varies from season to season; the river is higher during spring.

The rest of our journey entailed driving through the landscape parallel to the Göksu River and, later, having lunch with the jendarma, who kindly invited us to their offices after questioning us about the purpose of our visit.

Day 4: May 18th: Back to the Göksu:

We took advantage of this day to drive some more around the valley, which covers an area of about 300 square kilometers, a vast landscape. We left Mut from the eastern side of the valley and climbed up the hill in the direction of Mavga Kalesi. We were now driving to DP to visit the domed ambulatory church, taking a route north towards DP that was to some extent parallel to the route we had previously taken to reach Mut. On the way we stopped at a still-standing Roman bridge about 2 meters wide. It sits at a critical location where the road splits in two: one way leading back to the Göksu Valley and another leading to the DP plateau. Bridges are a fundamental factor affecting movement; ancient societies built them for a reason, and their locations must have been carefully planned. Also along the way, several Islamic cemeteries were attested, as well as some fountains. We then arrived at DP, which is considered to have been one of the biggest cities, with a minimum of four churches found there, and visited the church, which dates back to the early 6th century and is marked by the presence of window dividers, which seem to have been common during that period. After taking a look around the church, we drove back down to the main road and headed back to Köprübaşı, where we met the jendarma again for lunch at the chief's house. The descent was steep and abrupt, and the steep hills were densely forested. After lunch we resumed our trip and continued to the town of Kizkalezi, right on the Mediterranean coast of Turkey, where we enjoyed a peaceful, relaxing weekend. The next two days, Saturday the 19th and Sunday the 20th, were used to work on personal research and to reflect on the journey so far. We visited the Roman site of Elaiussa, where two large churches are attested, as well as a Roman cemetery and a visible Roman road linking this site with the neighboring site (modern Kizkalezi). Tombs are found on either side of the route, which still retains some of its original stone pavement at a few locations along the path. We then headed back to the Yaka Hotel, where we were welcomed by Mr. Jacob, a perfect host, and taken care of by a great, welcoming staff.

Day 7: May 21st: Coastal drive

We left the beautiful Yaka hotel in Kizkalezi early in the morning and headed inland towards two villages: Karakabaklı and Işıkkale. Both villages lie at about 450 meters above sea level, and their settlement pattern differs from that of urban sites, especially in size. Ancient roads are attested in both villages and are well preserved, linking important features of each site together as well as with other villages. Both villages were abandoned after the Roman period and remain abandoned today. One important still-standing and well-preserved feature at Karakabaklı is the tetrapylon, which was converted to a church during the late Roman period. Around the church some houses are attested, on the basis of which the population of this village has been estimated at a few hundred people. At Işıkkale, tombs and sarcophagi are found on either side of the road running north-south and the road running east-west, which divide the village into several quarters. After exploring these two sites, we headed back to the main road in the direction of Silifke, where we stopped to examine a bridge dating back to the Roman period; only the modern bridge, built on top of the older one at a higher level, is visible today and allows people to cross over the Göksu River.

Our next stop was Aghia Thecla, one of the most important cultic sites of the late Roman period. It is located just outside the city of Silifke, whose ancient remains lie fully buried beneath the foundations of the modern city. Aghia Thecla is known to be the largest church complex in the region and also possesses a cave church, where expensive building materials were attested. It is therefore logical to assume that pilgrims and individuals on cultic errands would have headed there; this site can thus be considered a destination point in the landscape for travelers. After taking a good hour to explore the site, we headed to the museum of Silifke. But it was a Monday, when museums are usually closed, and the curator was not present, so we went back to the Toyota and headed towards the coastal route leading to Anamur.

The ancient city of Anamur lies directly on the Mediterranean coast, about halfway between the cities of Mersin and Antalya. The modern city was not rebuilt at the location of the old urban center; it was rebuilt further inland. The old city is very well preserved and impressive: a necropolis is attested just outside the entrance of the city, with tombs two stories high, some of them plastered and painted on the inside. These tombs, as well as a preserved church, date from about the 1st to 4th centuries A.D. Two aqueducts run along the ancient road, whose pavement is also still well preserved. Almost all features are still preserved on the site, including an odeon, public baths and a massive section of fortification wall with one base tower remaining.

The focus of that day's excursion, therefore, was mainly to note the differences and some similarities between urban and rural sites. Building techniques, as well as the materials used, change from region to region, depending on the location and the available resources.

Day 8: May 22nd: Drive to Ermenek

The goal of this day was to drive from the coastal city of Anamur straight inland to the city of Ermenek, about three hours, to get a feel for how movement in this direction, or the opposite one, would have been. The first stop on the way was the Alaköprü Bridge, where the ancient bridge is dated to the 14th century A.D. A few moments later we were still climbing towards the plateau, and again a change in topography and vegetation was highly noticeable.


Driving through Yerköprü village we stopped at a location where a Roman bridge with a Greek inscription lies completely hidden beneath the modern road. We looked around, trying to find the location of the inscription, which has been published in the literature; it is not visible nowadays, however. We then continued towards the site of Gökçeseki, known for the rock-cut tombs scattered all around it and for its sarcophagi with lion lids. We then drove back to the hotel, where we discussed the inscription mentioned earlier and enjoyed the rest of the day.

Day 9-10: May 23rd-24th: Back to Karaman-Ankara

On our last day in the field, we drove back to Karaman, passing through the Göksu Valley again and entering it this time from its western side. We had thus driven roads leading in and out of the valley from most of its sides: from the north through the Taurus Mountains and the Sertavul Pass, from the north-eastern side through Dagpazari, and from the western side driving from Ermenek. It is worth mentioning that, for a person traveling from Constantinople to Antioch, the road leading from Ermenek is considered a good one to take, perhaps the easiest way to reach the valley, although it involves a descent from an elevation of 1200 meters. The following day we left Karaman and headed back to Ankara, about a five to six hour drive. The remaining five days in Ankara were used for personal research, to produce least cost paths in my case, and to make use of the library again. The 29th of May was our last day in Turkey.

Conclusion:

Movement and travel have always been important factors linking ancient and modern societies. In a varied landscape such as that of south-central Anatolia, it is clear that traveling was not an easy task and that careful planning was perhaps done prior to movement. The main focus of my thesis is to conduct least cost path analyses, which compute the path between two or more sites that requires the least effort. In a landscape surrounded by mountains, an individual has a minimum of two choices: either to climb the mountains and then descend them, or to go around them. Many factors play an important role in the decision about which route to take. These include the steepness of the terrain experienced, seasonality and weather, and the choice of safe roads avoiding areas with bandits coming down from the mountains, known of since the 2nd century. Furthermore, it is important to take into consideration the number of people moving or traveling, as armies would tend to take different roads and routes from individuals traveling alone; they would need a vast flat terrain stripped of forests and trees. The means by which individuals traveled is also important to take into account: some areas we experienced proved too steep and difficult for someone moving with carts or on animals, though less challenging for someone traveling on foot. Thus, several aspects and variables need to be investigated and studied further for the study of movement to be accurate.

The main objective of the thesis, then, is to generate least cost path analyses using GIS and to test their accuracy against real life. Some areas, such as the DP gorge, are known to be impassable; if a least cost path runs through such an area, then the result is unrealistic and inaccurate. This brings us to the problem of the resolution and accuracy of the Digital Elevation Model (DEM) used for the analysis. When facing a large landscape, the researcher tends to lose a sense of the steeper slopes and drop-offs shown on the DEM. An optimal resolution should therefore be sought that displays all topographical elements as accurately as possible.

Experiencing the diverse and extensive landscape encircling the Göksu Valley was imperative for gaining a full understanding of the area around the study region; it gave me a better perspective on the difficulty of movement and travel experienced by different kinds of travelers.


Appendix 2: Least Cost Path Analysis in ArcGIS 10

Generating least cost paths is relatively easy and straightforward; interpreting and validating the results remains the challenging part. Most widely available GIS software packages calculate least cost paths in different ways; however, they all require the same types of data, vector and raster, and they all divide the process into three main steps: first generating a cost surface raster, then deriving an accumulated cost surface, and finally producing the least cost path. In ArcGIS, the slope, cost distance and cost path tools used to generate least cost paths are available in the Spatial Analyst toolbox (see Figure 17 below).

Figure 16. ArcGIS data frame and search tool engine.



There are many ways to find the tools in ArcGIS 10, the easiest being the search window available in the Geoprocessing toolbar. The only challenge is knowing the title the software gives to each tool. Another option is to browse the System toolboxes, as shown in the figures above.
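For readers more comfortable scripting than searching through dialog boxes, the same tools can also be called from the Python window that ships with ArcGIS 10. The short sketch below is illustrative only: it assumes the arcpy site package and a licensed Spatial Analyst extension, and it simply lists the Spatial Analyst tool names so that their exact titles can be looked up.

    import arcpy

    arcpy.CheckOutExtension("Spatial")   # the Spatial Analyst extension must be licensed
    # List the Spatial Analyst tools so their exact titles can be found.
    print(arcpy.ListTools("*_sa"))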

This appendix presents one way of generating least cost paths, which includes a modification made to the DEM before the slope is calculated. It first describes the data used and then sets out the steps in more detail.

Data:

1- Digital Elevation Model: a raster map that can usually be downloaded from open online sources without a fee, unless a very high resolution DEM is required, in which case a fee must be paid. Most DEMs provided online have been resampled before release, so their accuracy remains questionable; they are, however, the only digital elevation data readily available for generating least cost paths.


Figure 17. Working with the DEM.

By right-clicking on the DEM or any other file listed in the table of contents panel on the left side of the screen, the properties of the selected file can be viewed and its values modified. The Raster Calculator, found in the Spatial Analyst toolbox under Map Algebra, can also be used to perform mathematical calculations on the DEM or on the other files used, as sketched below.
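As an illustration only, the following Python (arcpy) lines show the same kind of map algebra run from the Python window; the file names are hypothetical stand-ins for an original and an edited copy of the DEM.

    import arcpy
    from arcpy.sa import Raster

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\lcp_project"   # hypothetical project folder

    # Arithmetic on Raster objects behaves like the Raster Calculator dialog:
    # subtracting the two grids cell by cell shows where the DEM was modified.
    diff = Raster("goksu_dem_edited.tif") - Raster("goksu_dem.tif")
    diff.save("dem_difference.tif")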

2- Shapefiles: to map the least cost path between two or more sites, a file containing the actual locations of the relevant sites is needed. This can be done simply by creating an Excel sheet with columns for the coordinates of the points, the northing in one column and the easting in another. Other attributes, such as site names or any other information that helps identify the sites, can also be added. The Excel sheet is then saved and added to the ArcGIS data frame, and the resulting point shapefile is positioned according to the WGS 1984 coordinate system (a scripted version of this step is sketched below). Note that the data frame's coordinate system should be set in advance and that all raster and vector files should be referenced to the same projected coordinate system so that they visually represent their actual locations on the surface of the earth on a 2.5-dimensional plane.
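A minimal scripted equivalent of this step is sketched below, assuming arcpy and hypothetical names: a table sites.csv exported from the Excel sheet, coordinate fields named X and Y recorded in WGS 1984, and a projected system (here UTM zone 36N) chosen to match the rasters.

    import arcpy

    arcpy.env.workspace = r"C:\lcp_project"

    wgs84 = arcpy.SpatialReference(4326)     # geographic WGS 1984
    utm36n = arcpy.SpatialReference(32636)   # assumed projected system for the valley

    # Build an event layer from the coordinate table, save it as a shapefile,
    # then project it to the same coordinate system as the DEM.
    arcpy.MakeXYEventLayer_management("sites.csv", "X", "Y", "sites_layer", wgs84)
    arcpy.CopyFeatures_management("sites_layer", "sites_wgs84.shp")
    arcpy.Project_management("sites_wgs84.shp", "sites_utm.shp", utm36n)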


Figure 18. Setting the coordinates framework.

Least cost path steps:

Step 1: Check the DEM for errors using the Sink tool, and apply a condition in the Raster Calculator to convert voids or unrealistic values, limiting possible errors in the analysis. The cell values are important, as they represent actual elevations recorded by remote sensing, and the reliability of the least cost paths generated depends on them. The Sink and Fill tools are found in the Spatial Analyst toolbox under the Hydrology toolset.
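A minimal sketch of this step in Python (arcpy) is given below. The file names are hypothetical, the value of 0 used to replace voids is only an example of the kind of condition described above, and Sink is run on a flow-direction raster derived from the DEM.

    import arcpy
    from arcpy.sa import Raster, Con, IsNull, FlowDirection, Sink, Fill

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\lcp_project"

    dem = Raster("goksu_dem_utm.tif")

    # Replace voids (NoData) and clearly unrealistic negative cells with 0.
    dem_novoid = Con(IsNull(dem), 0, dem)
    dem_checked = Con(dem_novoid < 0, 0, dem_novoid)

    # Locate sinks (Sink expects a flow-direction raster), then fill them.
    sinks = Sink(FlowDirection(dem_checked))
    sinks.save("dem_sinks.tif")
    dem_filled = Fill(dem_checked)
    dem_filled.save("goksu_dem_filled.tif")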

Step 2: Insert a river polyline shapefile, as well as a shapefile marking the locations of bridges on the river, and create a buffer zone around the river. This prevents the least cost path from simply following the line of the river: the cells representing the river hold the lowest values in the DEM, so there is a chance that the generated paths would follow this line of lowest cells. A buffer set at a specific distance is therefore required, enveloping the river line 100 m on each side, which prevents the cost path from crossing or penetrating the buffer zone.
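The buffer itself is a single geoprocessing call. The sketch below is illustrative, with hypothetical shapefile names, and dissolves the 100 m buffers into one polygon enclosing the river line.

    import arcpy

    arcpy.env.workspace = r"C:\lcp_project"

    # 100 m buffer on both sides of the river polyline, dissolved into one polygon.
    arcpy.Buffer_analysis("goksu_river.shp", "river_buffer_100m.shp",
                          "100 Meters", "FULL", "ROUND", "ALL")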


Step 3: The next step is to insert the point shapefile representing the bridges found on the Göksu and to apply a buffer zone around each one individually, with a radius of 200 m. The purpose of this buffer relates to resolution, cell size and the accuracy of the GPS points collected in the field to locate the bridges: a distance of 200 m increases the likelihood that a given bridge falls somewhere within its buffer zone. These buffers are then cut out of the river buffer polygon where they intersect it, creating openings along the river line at the known bridges. This gives each bridge opening a value of zero, in contrast to the rest of the river line, which is set to an unrealistic elevation value of 50,000 m; this prevents the least cost path from crossing the river anywhere else, as the river then holds the highest values in the cell neighborhood. The river shapefile with the bridge openings is then converted to a binary raster grid in which cells along the river have a value of 50,000 and all other cells a value of zero. Finally, the Raster Calculator is used to combine the original DEM with the river-and-bridges raster: the values of the two maps are added together, producing a new output DEM with new cell values along the river.
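One scripted reading of this step is sketched below, assuming arcpy and hypothetical file names: the bridge buffers are erased from the river buffer to create the openings, the remaining barrier is rasterised at the DEM cell size, river cells are given the value 50,000 and all other cells 0, and the result is added to the filled DEM. (The Erase tool requires an Advanced licence; an equivalent overlay could be substituted.)

    import arcpy
    from arcpy.sa import Raster, Con, IsNull

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\lcp_project"

    # 200 m buffers around the bridge points, then cut them out of the river buffer.
    arcpy.Buffer_analysis("bridges.shp", "bridge_buffer_200m.shp", "200 Meters")
    arcpy.Erase_analysis("river_buffer_100m.shp", "bridge_buffer_200m.shp",
                         "river_barrier.shp")

    # Rasterise the barrier on the DEM grid; river cells become 50,000, others 0.
    arcpy.env.snapRaster = "goksu_dem_filled.tif"
    arcpy.PolygonToRaster_conversion("river_barrier.shp", "FID",
                                     "river_barrier_ras.tif",
                                     cellsize="goksu_dem_filled.tif")
    barrier = Con(IsNull(Raster("river_barrier_ras.tif")), 0, 50000)

    # Add the barrier to the DEM so the river reads as an impassably high ridge.
    dem_barrier = Raster("goksu_dem_filled.tif") + barrier
    dem_barrier.save("goksu_dem_barrier.tif")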


Figure 19. Buffer tool.

Step 4: The next step is to derive a slope raster from the combined DEM. This is done with the Slope tool in ArcGIS, set either to slope in degrees or to percent rise. The slope values generated represent the change in elevation between neighboring cells, that is, the steepness of a route, most commonly expressed in degrees. The default settings used by the software can be questioned here: it automatically divides the slope values into several classes without considering how these relate to individuals traveling on foot and the maximum slope they could walk on comfortably. It is therefore recommended to use the 'effective slope' of Bell and co-authors, which represents the non-linear relation between the angle of the slope and the effort required. Slope values can also be divided into categories that are more realistic for human movement; for example, all values above 11 or 12 degrees could be placed in a single category representing the highest slope group, slopes of one to three degrees could be grouped as a low slope group, and so on.
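The sketch below illustrates this step with arcpy, assuming the hypothetical file names used above; the class breaks are only an example of the grouping described in the text and would need to be adjusted for a given cost model (Bell and co-authors' effective slope is not reproduced here).

    import arcpy
    from arcpy.sa import Slope, Reclassify, RemapRange

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\lcp_project"

    # Slope in degrees from the modified DEM.
    slope_deg = Slope("goksu_dem_barrier.tif", "DEGREE")

    # Regroup the slope values into bands meaningful for walkers:
    # gentle (0-3), moderate (3-8), steep (8-12), effectively prohibitive (>12).
    slope_classes = Reclassify(slope_deg, "VALUE",
                               RemapRange([[0, 3, 1],
                                           [3, 8, 2],
                                           [8, 12, 3],
                                           [12, 90, 10]]))
    slope_classes.save("slope_reclass.tif")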

Figure 20. Reclassifying the slope raster.

Step 5: The slope raster is then used, together with a shapefile giving the location of one site, to compute the cost distance and cost backlink rasters. These are relative to each site and differ greatly depending on the slope values. The cost distance raster divides the area around a site into zones of cost, roughly concentric zones whose cost increases with distance, while the backlink raster records, for every cell, the direction of the cheapest route back to the source. From these two cost surfaces the least cost path can be generated.
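A minimal arcpy sketch of this final step is given below, with hypothetical site and file names: the reclassified slope serves as the cost raster, one site acts as the source for the cost distance and backlink rasters, and CostPath traces the least cost path back from the second site.

    import arcpy
    from arcpy.sa import CostDistance, CostPath

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\lcp_project"

    # Accumulated cost and backlink surfaces relative to the source site.
    cost_dist = CostDistance("site_a.shp", "slope_reclass.tif",
                             out_backlink_raster="cost_backlink.tif")
    cost_dist.save("cost_distance.tif")

    # Trace the cheapest route from the destination site back to the source.
    lcp = CostPath("site_b.shp", "cost_distance.tif", "cost_backlink.tif",
                   "BEST_SINGLE")
    lcp.save("lcp_a_to_b.tif")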

As seen above, least cost path analysis is a relatively easy method for generating routes that link sites together. However, interpreting the results and converting them into units of measurement meaningful to humans remains a challenge. Nevertheless, it provides a model of reality that can and should be tested and validated through fieldwork where possible and in conjunction with other software.


Bibliography

Allen, Kathleen M. Sydoriak 2000 Considerations of Scale in Modeling Settlement Patterns using GIS: An Iroquois Example. In Practical Applications of GIS for Archaeologists: A Predictive Modeling Toolkit, edited by Konnie L. Wescott and R. Joe Brandon, pp. 113-127. Taylor and Francis, London.
Anderson, David G. 2012 Least Cost Pathway Analysis in Archaeological Research: Approaches and Utility. In Least Cost Analysis of Social Landscapes: Archaeological Case Studies, edited by Devin A. White and Sarah L. Surface-Evans, pp. 239-257. University of Utah Press, Salt Lake City.
Bean, G. E. and T. B. Mitford 1970 Journeys in Rough Cilicia, 1964-1968. pp. 219-220, Inscription #251. Vienna.
Bell, Tyler, Andrew Wilson, and Andrew Wickham 2002 Tracking the Samnites: Landscape and Communications Routes in the Sangro Valley, Italy. American Journal of Archaeology 106(2): 169-186.
Bevan, Andrew 2011 Travel and Interaction in the Greek and Roman World: A Review of Some Computational Modelling Approaches. Electronic document, http://www.homepages.ucl.ac.uk/~tcrnahb/downloads/Bevan11a_manuscript.pdf
Bevan, Andrew, Charles Frederick, and Athanasia Krahtopoulou 2003 A digital Mediterranean countryside: GIS approaches to the spatial structure of the post-Medieval landscape on Kythera (Greece). Archeologia e Calcolatori 14: 217-236.
Bikoulis, Peter 2012 Revisiting prehistoric settlement in the Göksu Valley: a GIS and Social Network approach. Anatolian Studies 62: 35-59.
Bikoulis, Peter 2009 The structure and role of settlement in the Göksu Valley and south-central Anatolia: A GIS and social network approach. Unpublished MA dissertation, Trent University, Canada.
Brandt, Roel, Bert J. Groenewoudt and Kenneth L. Kvamme 1992 An Experiment in Archaeological Site Location: Modeling in the Netherlands using GIS Techniques. World Archaeology 24(2): 268-282.
Branting, Scott 2012 Seven Solutions for Seven Problems with Least Cost Pathways. In Least Cost Analysis of Social Landscapes: Archaeological Case Studies, edited by Devin A. White and Sarah L. Surface-Evans, pp. 209-224. University of Utah Press, Salt Lake City.
Carballo, David M., and Thomas Pluckhahn 2007 Transportation corridors and political evolution in highland Mesoamerica: Settlement analyses incorporating GIS for northern Tlaxcala, Mexico. Journal of Anthropological Archaeology 26: 607-629.


Chapman, Henry P., and Benjamin R. Gearey 2000 Palaeoecology and the perception of prehistoric landscapes: some comments on visual approaches to phenomenology. Antiquity 74(284): 316.
Conolly, James and Mark Lake 2006 Geographical Information Systems in Archaeology. Cambridge University Press, Cambridge.
Devereux, B. J., G. S. Amable, P. Crow, and A. D. Cliff 2005 The potential of airborne lidar for detection of archaeological features under woodland canopies. Antiquity 79: 648-660.
Dixon, Timothy 1995 Basic Principles of SAR Interferometry: SAR Interferometry and Surface Change Detection. Workshop, Colorado. www.southport.jpl.nasa.gov/scienceapps/dixon/report2.html
Elton, Hugh 2007 Geography, Labels, Romans and Cilicia. In Regionalism in Hellenistic and Roman Asia Minor, edited by H. Elton and G. Reger, pp. 25-31. Ausonius Press, Bordeaux.
Elton, Hugh 2003 The Economy of Cilicia in Late Antiquity. 8: 173-181 + pl. 35-36.
Elton, Hugh 2002 Alahan and Zeno. Anatolian Studies 52: 153-157.
Elton, Hugh 2001 The Economic Fringe: The Reach of the Roman Empire in Rough Cilicia. In The Transformation of Economic Life under the Roman Empire, edited by Lukas De Blois and John Rich, pp. 172-183. Gieben, Amsterdam.
Elton, Hugh 1996 Warfare in Roman Europe A.D. 350-425. Clarendon Press, Oxford.
Engels, Donald 1978 Alexander the Great and the Logistics of the Macedonian Army. University of California Press, Berkeley.
Foss, Clive 1977 Archaeology and the "Twenty Cities" of Byzantine Asia. American Journal of Archaeology 81(4): 469-486.
French, D. H. 1965 Prehistoric Sites in the Göksu Valley. Anatolian Studies 15: 177-201.
Gates, Marie-Henriette 1995 Archaeology in Turkey. American Journal of Archaeology 99(2): 207-255.
Gietl, R., M. Doneus and M. Fera 2008 Cost Distance Analysis in an Alpine Environment: Comparison of Different Cost Surface Modules. In Layers of Perception: Proceedings of the 35th International Conference on Computer Applications and Quantitative Methods in Archaeology (CAA), Berlin, Germany, April 2-6, 2007, edited by A. Posluschny, K. Lambers and I. Herzog (Kolloquien zur Vor- und Frühgeschichte, Vol. 10), pp. 336-341. Dr. Rudolf Habelt GmbH, Bonn.
Gough, Michael 1974 Notes on a visit to Mahras Monastery in Isauria. Byzantine Studies 1: 65-72.


Farr, Tom G., Paul A. Rosen, Edward Caro, Robert Crippen, Riley Duren, Scott Hensley, Michael Kobrick, Mimi Paller, Ernesto Rodriguez, Ladislav Roth, David Seal, Scott Shaffer, Joanne Shimada, Jeffrey Umland, Marian Werner, Michael Oskin, Douglas Burbank, and Douglas Alsdorf 2007 The Shuttle Radar Topography Mission. Reviews of Geophysics 45(2). doi:10.1029/2005RG000183
Giardino, Marco J. and Bryan S. Haley 2005 Geospatial Analysis and Remote Sensing from Airplanes and Satellites for Cultural Resources Management. Electronic document, http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20050184138_2005180638.pdf
Göksu Archaeological Project 1 Chapter 7: Cities in the Upper Göksu Valley in the Hellenistic to Byzantine Era, pp. 1-21.
Göksu Archaeological Project 2 Roads and Bridges, Churches, Methodology.
Göksu Archaeological Project 3 2007 Chapter 4: The Survey, pp. 1-20.
Göksu Archaeological Project 4 Project Summary, pp. 1-13.
Göksu Archaeological Project 5 Post-Classical Settlements, pp. 1-3.
Göksu Archaeological Project 6 Conclusion, pp. 1-3.
Hamdi, Sahin 2007 Previous Archaeology in Western Cilicia. In GAP Reports, pp. 1-13.
Göksu Archaeological Project 7 Bandits, Cities and Aristocrats in the Roman Taurus and Amanus, by Hugh Elton, pp. 1-27.
Hageman, Jon B. and David A. Bennett 2000 Construction of Digital Elevation Models for Archaeological Applications. In Practical Applications of GIS for Archaeologists: A Predictive Modeling Toolkit, edited by Konnie L. Wescott and R. Joe Brandon, pp. 113-127. Taylor and Francis, London.
Herzog, Irmela and Axel Posluschny 2008 Tilt – Slope-Dependent Least Cost Path Calculations Revisited. In On the Road to Reconstructing the Past: Proceedings of the 36th CAA Conference, Budapest 2008, edited by E. Jerem, F. Redö and V. Szeverényi, pp. 236-242.
Herzog, Irmela 2012 The Potential and Limits of Optimal Path Analysis. Submitted for: A. Bevan and M. Lake (eds), Computational Approaches to Archaeological Spaces. Left Coast Press.
Howey, Meghan C. L. 2007 Using multi-criteria cost surface analysis to explore past regional landscapes: a case study of ritual activity and social interaction in Michigan, AD 1200-1600. Journal of Archaeological Science 34: 1830-1846.


Indruszewski, George, and C. Michael Barton 2008 Cost surface DEM modelling of Viking Age seafaring in the Baltic Sea. In Beyond Illustration: 2D and 3D Digital Technologies as Tools for Discovery in Archaeology, edited by Bernard Frischer and Anastasia Dakouri-Hild, pp. 56-64. BAR International Series, British Archaeological Reports, Oxford.
JPL, NASA Interferometry Explained. www2.jpl.nasa.gov/srtm/instrumentinterferometry.html
JPL, NASA ASTER Global Digital Elevation Map. asterweb.jpl.nasa.gov/gdem.asp
Kantner, John 2012 Realism, Reality, and Routes: Evaluating Cost-Surface and Cost-Path Algorithms. In Least Cost Analysis of Social Landscapes: Archaeological Case Studies, edited by Devin A. White and Sarah L. Surface-Evans, pp. 225-238. University of Utah Press, Salt Lake City.
Keeratikasikorn, Chaiyapon and Itthi Trisirisatayawong 2008 Reconstruction of a 30m DEM from 90m SRTM DEM with Bicubic Polynomial Interpolation Method. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Vol. XXXVII, Part B1.
Kvamme, Kenneth L. 1992a Geographic Information Systems and Archaeology. In Computer Applications and Quantitative Methods in Archaeology 1991, edited by G. Lock and J. Moffett, pp. 77-84. BAR International Series S577, Tempus Reparatum, Oxford.
Kvamme, Kenneth L. 1992b Terrain Form Analysis of Archaeological Location through Geographic Information Systems. In Computer Applications and Quantitative Methods in Archaeology 1991, edited by G. Lock and J. Moffett, pp. 127-136. BAR International Series S577, Tempus Reparatum, Oxford.
Llobera, Marcos 2001 Building Past Landscape Perception With GIS: Understanding Topographic Prominence. Journal of Archaeological Science 28(9): 1005-1014.
Llobera, Marcos, and Thomas J. Sluckin 2007 Zigzagging: Theoretical insights on climbing strategies. Journal of Theoretical Biology 247: 206-217.
Livingood, Patrick 2012 No Crows Made Mounds: Do Cost-Distance Calculations of Travel Time Improve Our Understanding of Southern Appalachian Polity Size? In Least Cost Analysis of Social Landscapes: Archaeological Case Studies, edited by Devin A. White and Sarah L. Surface-Evans, pp. 174-187. University of Utah Press, Salt Lake City.
McNeill, J. R. 1992 The Mountains of the Mediterranean World: An Environmental History. Cambridge University Press, Cambridge.


Mitchell, Stephen 2007 A History of the Later Roman Empire AD 284-641: The Transformation of the Ancient World. Blackwell Publishing, Malden.
Mitchell, Stephen 1998 The Cities of Asia Minor in the Age of Constantine. In Constantine: History, Historiography and Legend, edited by Samuel Lieu, pp. 52-73. Routledge, London.
Mitford, Terence B. 1980 Roman Rough Cilicia. Aufstieg und Niedergang der römischen Welt (ANRW) II.7.2: 1230-1261.
Newhard, James, Norm Levine and Allen Rutherford 2008 Least-Cost Path Analysis and Interregional Interaction in the Göksu Valley, Turkey. Anatolian Studies 58: 87-102.
Pain, C. F. 2005 Size does matter: relationships between image pixel size and landscape process scales. In MODSIM, International Congress on Modelling and Simulation. Modelling and Simulation Society of Australia and New Zealand Inc.
Postgate, J. Nicholas 1998 Between the Plateau and the Sea: 1994-1997. Anatolian Studies 49: 127-141.
Reuter, H. I., A. Nelson and A. Jarvis 2007 An evaluation of void-filling interpolation methods for SRTM data. International Journal of Geographical Information Science 21(9): 983-1008.
Rick, John W. 1996 Total Stations in Archaeology. SAA Bulletin. www.saa.org/Portals/0/SAA/publications/SAAbulletin/14-4/SAA16.html#data
Rodríguez, José L. García and Martín C. Giménez Suárez 2010 Comparison of Mathematical Algorithms for Determining the Slope Angle in GIS Environment. Aqua-LAC 2(2): 78-82. http://www.unesco.org.uy/ci/fileadmin/phi/aqualac/GarciaRodriguez_et_al_p78-82.pdf
Rodríguez, E., C. S. Morris, J. E. Belz, E. C. Chapin, J. M. Martin, W. Daffer and S. Hensley 2006 An Assessment of the SRTM Topographic Products. Photogrammetric Engineering and Remote Sensing 72(3): 249-260.
Saha, A. K., M. K. Arora, R. P. Gupta, M. Virdi and E. Csaplovics 2005 GIS-based route planning in landslide-prone areas. International Journal of Geographical Information Science 19(10): 1149-1175.
Surface-Evans, Sarah 2012 Cost Catchments: A Least Cost Application for Modeling Hunter-Gatherer Land Use. In Least Cost Analysis of Social Landscapes: Archaeological Case Studies, edited by Devin A. White and Sarah L. Surface-Evans, pp. 128-151. University of Utah Press, Salt Lake City.


Takagi, Masataka 1998 Accuracy of digital elevation model according to spatial resolution. International Archives of Photogrammetry and Remote Sensing 32: 613-617.
Treuhaft, Robert Interferometry Explained: More Details. NASA, JPL. www2.jpl.nasa.gov/srtm/instrumentinterfmore.html
Ullah, Isaac I. and Sean M. Bergin 2012 Modeling the Consequences of Village Site Location: Least Cost Path Modeling in a Coupled GIS and Agent-Based Model of Village Agropastoralism in Eastern Spain. In Least Cost Analysis of Social Landscapes: Archaeological Case Studies, edited by Devin A. White and Sarah L. Surface-Evans, pp. 155-173. University of Utah Press, Salt Lake City.
Varinlioğlu, Günder 2007 Living in a Marginal Environment: Rural Habitat and Landscape in Southeastern Isauria. Dumbarton Oaks Papers 61: 287-317.
Wagstaff, Malcolm 2006 Network analysis and logistics: applied topology. In General Issues in the Study of Medieval Logistics: Sources, Problems and Methodologies, edited by John Haldon, pp. 69-92. Brill, Leiden.
White, Devin A. 2012 Prehistoric Trail Networks of Western Papagueria: A Multifaceted Least Cost Graph Theory Analysis. In Least Cost Analysis of Social Landscapes: Archaeological Case Studies, edited by Devin A. White and Sarah L. Surface-Evans, pp. 188-206. University of Utah Press, Salt Lake City.
White, Devin A. and Sarah L. Surface-Evans (editors) 2012 Least Cost Analysis of Social Landscapes: Archaeological Case Studies. University of Utah Press, Salt Lake City.
Wood, Jo D. 1996 The Geomorphological Characterisation of Digital Elevation Models. PhD thesis, University of Leicester. Electronic document, http://www.soi.city.ac.uk/~jwo/phd
Yildirim, Bahadir and Marie-Henriette Gates 2007 Archaeology in Turkey, 2004-2005. American Journal of Archaeology 111(2): 275-356.
Yu, Chaoqing, Jay Lee and Mandy J. Munro-Stasiuk 2003 Extensions to least-cost path algorithms for roadway planning. International Journal of Geographical Information Science 17(4): 361-376.

ArcGIS Resource Center (ArcGIS Desktop Help) 2013 Working with Spatial References. http://help.arcgis.com/en/sdk/10.0/arcobjects_net/conceptualhelp/index.html#//0001000002mq000000 Access date: 1 February 2012.
GRASS Development Team 2003-2014 r.walk manual page. http://grass.osgeo.org/grass65/manuals/r.walk.html Access date: 10 February 2014.


SRTM data downloaded from the CGIAR Consortium for Spatial Information: http://srtm.csi.cgiar.org Access date: 25 October 2012.
