Case Study: Home Range Estimations

The Basics

In this exercise, you will create density surfaces from point data.  Density surfaces can be used to map a wide variety of things: fire station locations, diseases, income levels, test scores, consistently cloudless days, or popular areas for dining, like on this Kayak map.

Here, you will create density surfaces from coyote location data and use them to estimate home ranges for the coyotes.


    kayak map
Eric Gese Coyote Photo

Working with a spreadsheet containing radio collar location data for seven coyotes, you will first display the locations on a map and then calculate a generic home range using somewhat traditional methods. 

You will then calculate a more complex density surface that allows you to estimate more precise general and core home ranges.  You will also use some basic ArcPy (Python) scripts to loop over and automate repeated parts of the process.  

 

Working between ArcGIS, QGIS, GRASS, Python, and/or R is something many geospatial analysts do routinely. Remember, no platform can do everything. It is great to be familiar with many tools.

The dataset for this exercise was provided by Dr. Eric Gese at Utah State University and is used with permission. The Python script was written by Chris Garrard and Tyler Hatch for demonstration purposes. They would want me to point out that the script is deliberately inelegant. 

 

Objectives

  • Estimate home ranges
    • MBG – Minimum Bounding Geometry (Convex Polygons)
    • KDE – Kernel Density Estimator
  • Calculate home range areas
  • Manipulate and run script commands
  • Batch process
  • Extract raster values to points

Data 

Landfire evt (existing vegetation type) downloaded from landfire.gov

Coyotes.csv

  • The data provided is a subset of a much larger dataset collected by Bryan Kluever of Utah State University during the period 2010-2014.
  • The CSV file contains high-frequency radio collar data for a limited sample of coyotes near Dugway in west-central Utah.
  • This is not a complete dataset and is not intended for use beyond this lab exercise.
  • The spreadsheet includes information about the animals’ sex, date and time of data capture, XY locations, and approximate season (pup/pup rearing, breeding, dispersal).


 

The lab in short:

  1. Calculate Minimum Convex polygons for each individual coyote
  2. Calculate Kernel Density surface for the whole population
  3. Calculate 50% and 95% quantile areas for the whole population from the Kernel Density surface
  4. Calculate Use vs Availability ratio for resources (part 2)
  5. Professional presentation of results

Part 1: Calculating Home Range Areas

Data Prep

  • Open the Coyotes.csv file in ArcGIS or Excel.
    • Verify the coordinate columns
    • Inspect the other attributes
    • The attribute fields are explained as follows:
      • The Animal field represents the coyote’s collar code.


      • The Month, Day, Year, Hour, and Minute fields record the moment each coyote Location was recorded. The date is stored both split into individual fields (a standard m/d/y format) and in Julian format.
      • The X/Y coordinates are in the projected coordinate system WGS 84 UTM Zone 12N.
        • Look through the X and Y values to ensure they are fully and ‘correctly’ populated. (Look for missing data and/or “0” fields)
        • Edit records as necessary to clean up data (i.e. remove any obvious errors like 0s or empty fields. You can just delete those rows).
      • Season is a descriptive organizational field for the life stage of the coyote related to its home range.
  • Plot the XY coordinates in ArcGIS

  • Display XY Data
    • You have experience displaying XY data and hopefully understand this process; if not, review Esri's documentation on Display XY Data before continuing.
    • Verify the points are displaying in the Dugway region of west central UT.
    • Re-inspect for outlier points or mistakes. Try Zooming to the Layer's full extent (remember: Contents > Right click layer > Zoom to Layer).

  • Change the symbology to display by the Animal band code. (This is a visualization step to familiarize you with the data. It doesn’t affect the analysis.)

Aerial map view with different colored dots grouped together

Now we are ready to run the home range calculations.

Home Range Calculations

We will work with two types of home range calculations in ArcGIS Pro.

  • Minimum Convex Polygon (MCP)
  • Fixed Kernel Density Estimate (KDE)

Minimum Convex Polygons are also called Minimum Bounding Geometry. These are straightforward home range calculations based on the spatial extent of the collected location data. They tend to overestimate home range areas and don't capture variation of use within the mapped area. They produce a binary output (home range or not home range) defined by the outside extent of the collar location data.

Kernel Density Estimates are density surfaces that represent the probability that an animal will occupy an area. Density surfaces are calculated from known location points, and home ranges (general and core) are estimated by calculating quantiles and mapping them with isopleth lines (or polygons, as you will see).

Convex polygon overlaid with KDE derived density surface.

Comparison of home range methods. Minimum convex polygon (black line) overlaid with KDE derived density surface (green to blue).  Figure from http://gis4geomorphology.com/home-range-kernel/

 

Home Range Estimations Method 1: Minimum Convex Polygons

You will create an individual home range polygon for each animal in the dataset using ArcGIS Pro's version of the minimum convex polygon tool, called “Minimum Bounding Geometry” (MBG). An example ArcPy call is sketched after the checklist below.

  • Run the Minimum Bounding Geometry tool on your coyote location points.
    • Geometry Type = Convex Hull
    • Group option = List
    • Group Field = Animal


  • Evaluate the results
    • Do the polygons correspond to the point locations?
    • I have displayed the polygons with the same colors as each animal's points (a manual job, for visualization purposes; see below).
    • Notice that the overlapping areas don’t display as overlapping…

Aerial map from above with colored zones over colored dots

  • Open the polygons' attribute table.
    • There should be a polygon record (row) for each animal
    • NOTE: Animal C02 is animal C02, not animal 0 (the FID value). In these instructions I always refer to the animal's “C” number, not its FID or ID number.
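
For reference, the same call can be scripted with ArcPy. Something like the following should work (the layer and output names are placeholders for your own):

    import arcpy

    # One convex hull per unique value in the Animal field
    arcpy.management.MinimumBoundingGeometry(
        "coyote_points",            # placeholder name for your XY point layer
        "coyote_mcp.shp",           # placeholder output
        geometry_type="CONVEX_HULL",
        group_option="LIST",
        group_field="Animal")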


Calculate Area to compare MBG home ranges

Calculate the area for each animal’s home range polygon.

Calculating polygon area should be second nature.
Review support documents from Esri or past exercises.

Choose an area unit that makes sense to you.
You will use this same area unit later in order to compare the areas generated by the two methods.

Remember, these are the simplest defined areas; polygons defined by the outside extent of the data points. Convex polygons tend to overestimate home ranges and may include a lot of area not actually utilized or occupied by the animal.
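
If you prefer to script the area calculation, something like the following sketch should work; the field name, layer name, and unit choice are placeholders:

    import arcpy

    # Add a geodesic area field (square kilometers) to each home range polygon
    arcpy.management.CalculateGeometryAttributes(
        "coyote_mcp.shp",                   # placeholder: your MBG output
        [["HR_sqkm", "AREA_GEODESIC"]],     # new field name and the property to calculate
        area_unit="SQUARE_KILOMETERS")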

Home Range Estimations Method 2: Kernel Density Surfaces

Calculating home range for the population

 

Coyote radio collar locations are estimates for a couple of important reasons. First, a coyote might not be occupying a space deliberately; temporary environmental conditions could influence its behavior. Second, GPS locations have margins of error that are affected by weather, vegetation, and other things that can block the signal.

So instead of treating the XY points as exact locations of coyotes, we can replace each point with a probability curve: a three-dimensional shape whose peak is centered on the point's location (the highest probability). As the peak tapers down on all sides, the probability of occupation decreases. The distance from the peak to the curve edge, where there is zero probability of occupation, is called the Search Radius. 

3D kernel

The bell shape has a volume. By default, each kernel's volume is 1, but you can use a Population field to weight a point's contribution.

This probability shape is called a Kernel. The kernels can take many shapes: curves, flat-topped mesas, and triangles.

Choosing the most appropriate shape and search radius is the responsibility of the analyst. Known error, animal behavior, and the questions being asked can all inform the best choices when calculating density surfaces.

Imagine a field of points mapping an animal's location. Each point is replaced by a kernel. There will be locations between points where kernels overlap. The magnitudes of the overlapping kernels are added together at each raster cell's location.

2D kernel overlap

Two-dimensional representation of a kernel density estimate. Six locations are marked by black dashes on the x-axis. A kernel is placed over each location (red dashed curves). The curve heights are summed at each position along the x-axis (blue curve). The blue curve is the kernel density estimate for the discrete locations. Image from Wikipedia.
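
The summing idea in the figure can be sketched in a few lines of plain Python/NumPy; the locations and bandwidth below are made up purely for illustration:

    import numpy as np

    points = np.array([2.1, 3.0, 3.4, 5.8, 6.0, 9.5])   # made-up 1D locations
    bandwidth = 1.0                                      # plays the role of the search radius
    x = np.linspace(0, 12, 241)                          # places to evaluate the density

    # One Gaussian kernel per point, then sum the heights at every x
    kernels = np.exp(-0.5 * ((x[:, None] - points[None, :]) / bandwidth) ** 2)
    kernels /= bandwidth * np.sqrt(2 * np.pi)            # each kernel integrates to about 1
    density = kernels.sum(axis=1)                        # the summed (blue) curve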

On maps, the kernels are three dimensional.   3D kernel

 

Here's an example of overlapping kernels on a map and the resulting density surface:

kernels on map and density surface

The Kernel Density tool produces a raster output. Cell values represent the summed occupation probabilities. The unit of measure for the cell values is ‘magnitude per unit area’.

You will create individual kernel density surfaces for each coyote and another to estimate the probability of encountering any of the coyotes in the population. 

By isolating the cells that contain the densest 50% of the location points, a core home range can be mapped.
The cells containing 95% of the points define the "general" home range for the coyotes. 

  • Run the tool Kernel Density
    • Population field: This field allows for ‘weighting’ the locations with a value (like magnitude of earthquake, population the point represents, severity of collision, reliability of bigfoot sighting…). But in this case, we are working with simple locations over time.
    • Replace the default cell size with 30 (meters)
      Why? Because the default will be a value calculated from the distribution and density of the points, which is fine but an awkward number. You can benefit from my foresight: later you will be working with 30 meter landcover data. 
    • Leave the search radius blank.


      • As the user, you would normally be very careful about the search radius: control it, experiment with it, and apply a radius that is logical for your data. We will let Arc calculate a radius for us... this time.
    • Make note of the units.
    • Calculate densities. (An example ArcPy call is sketched below.)
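
For reference, here is the ArcPy sketch mentioned above; the layer and output names are placeholders, and "NONE" simply means no population weighting:

    import arcpy
    from arcpy.sa import KernelDensity

    arcpy.CheckOutExtension("Spatial")
    # "NONE" = no population weighting; search radius left blank so Arc picks one
    kd = KernelDensity("coyote_points", "NONE", cell_size=30)
    kd.save("coyote_kd")   # placeholder output name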

 Evaluate results:

results of kernel density tool

I changed the symbology to a stretched color ramp and changed the color. The coyote points are displayed semi-transparently over the density raster for comparison.

Understand the results

  • The range of values in the Contents pane is something like:

  • What is your specific range of values?
    • You should have noticed in the tool setup that the units were square kilometers.
  • What do these cell values describe?
    • Does this mean that in the highest valued cell, 32 points are predicted within a sq km of that cell?


  • Do these values make sense?
    • Are they values you could have predicted (within an order of magnitude) to test your results?

In this case, I won’t be surprised if you say “No, I don’t know what they mean, and I had no idea what to expect.” Nevertheless, hopefully this semester you will get to the point in your analyses where you either CAN predict the resulting values as a check or CAN defend and explain the resulting range of values after the fact.

Save your files

 

Calculate the Core and General home range for the whole population

Calculate the core and general home ranges from this density surface that represents locations for the whole population.

Later you will repeat this process and calculate individual general and core home ranges for each animal using a Python script that automates the process with a loop.

 

The first step is to identify the density value at each point location and write that density value into the coyote point attribute table.

Use the “Extract Values to Points” tool.

 

This creates a new output of the coyote point shapefile with a field in the attribute table that contains the Kernel Density value at each point’s location. In this example the field is called RASTERVALU. 
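
For reference, something like this ArcPy call does the same job (names are placeholders):

    import arcpy
    from arcpy.sa import ExtractValuesToPoints

    arcpy.CheckOutExtension("Spatial")
    # Writes a new point file with a RASTERVALU field holding the density under each point
    ExtractValuesToPoints("coyote_points", "coyote_kd", "coyote_points_kd.shp")   # placeholder names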

The second step is to do some calculatin’ to determine the 95th and 50th percentile density values.

The following workflow was adapted from Skye Cooley.

  • Sort the values in the RASTERVALU field by Descending order.
    • You CAN sort ascending, but then reverse the math down below.
  • Find the total number of points (records) by looking at the bottom margin of the attribute table window.
  • Divide the number of points in half to determine which record is at the halfway mark in the dataset. 
    • Example
      • If there are 855 total records: 0.5 x 855 = 427.5, so round to record 428.
      • Scroll down to the 428th record (while still sorted) and make note of the density value associated with that point.
      • Record the KDE value (RASTERVALU) for the 50th % record. This is the threshold value for the Core Range.

You are saying that HALF of the points will be found in densities greater than or equal to this value. This is true because you have just selected half the points and you sorted the densities. Half of the densities are greater than this value.

  • Multiply the total number of records by 0.95 to find how many rows comprise 95% of the records.
    • Example:
      • 855 x 0.95 = 812.25, so use 812 records.
      • Select 812 of the records still sorted descending.
      • Check yourself: you should have 95% of the records selected.
  • Scroll down and use the Shift key to select 95% of the records and record the Kernel Density value (RASTERVALU) for the last record in the set.
  • Create two new rasters by running the Reclassify tool twice.
    • Core: Reclassify the Kernel Density raster into 2 new classes: “NODATA” and “50”.
    • The density value you recorded is the break value separating the two classes.
    • Class “NODATA” will contain all low density KDE values - from the minimum raster value to the break value.
    • Class “50” will contain values from the break value to the maximum (the most-dense cells).
    • You can use the Classify button to create 2 classes, which saves time deleting all the extra default rows.

reclassify set up
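
In ArcPy, the same reclassification might look roughly like this; the raster name and break value are placeholders for your own:

    import arcpy
    from arcpy.sa import Reclassify, RemapRange

    arcpy.CheckOutExtension("Spatial")
    kd = arcpy.Raster("coyote_kd")      # placeholder: your kernel density raster
    core_break = 25.0                   # placeholder: the 50% density value you recorded
    remap = RemapRange([[0, core_break, "NODATA"],
                        [core_break, kd.maximum, 50]])
    core = Reclassify(kd, "VALUE", remap)
    core.save("core_50")                # placeholder output name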


 

Repeat Reclassify for the "general" home range areas.
Name the outputs to reflect ‘core’ or ‘general.’

  • Convert the reclassified rasters to polygons representing the 50% Core and 95% General home ranges for the population.
    • Use the Raster to Polygon tool; I challenge you to batch process (or loop in ArcPy; see the sketch after this list). 
    • Right click on the tool name in the geoprocessing search panel to access the batch option.
  • batch processing
  • Repeat RECLASSIFY and RASTER TO POLYGON for 95% Home Range making sure to use the 95% density value as the threshold or break value.
  • Evaluate results
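
If you would rather loop than batch, something like this sketch works (the raster and output names are placeholders):

    import arcpy

    # Convert both binary rasters to polygons in one pass (placeholder names)
    for in_raster, out_poly in [("core_50", "core_50_poly.shp"),
                                ("general_95", "general_95_poly.shp")]:
        arcpy.conversion.RasterToPolygon(in_raster, out_poly, "NO_SIMPLIFY", "VALUE")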

Question: Should the 95% areas be bigger or smaller than the 50% areas?


Calculating Home Ranges for Individual Animals

At this point, you have calculated a density surface representing area usage by the group of animals. However, we cannot distinguish the whereabouts or probability of occurrence of individual animals.

The Kernel Density tool doesn’t allow density surfaces to be generated for each value of an attribute field (e.g., Animal) the way the Minimum Bounding Geometry tool did, so a single run cannot produce individual density surfaces and therefore individual territories.

Think about why it would be useful to find out each individual’s home range.
Perhaps you want to see if males and females have different habitats, or whether females behave differently when rearing pups.

You could create individual shapefiles for each animal and run the Kernel Density tool over and over and over again. How long did it take you to do that for the whole population? Would you be willing to do that for ten individual animals? What about 1,000? In the real world of wildlife science, processing this much data is a common occurrence.

Luckily, we have the technology to tackle this little hurdle. It is possible to write a Python script that automates the entire process with only a few inputs.

In this section we will walk through installing a toolbox, running a script, and, more importantly, reading a script. Python is awesome. And although your first reaction when encountering Python code may be fear, taking the time to understand what is happening can turn that fear into confidence. In fact, Python is not named after the snake but after the British comedy series Monty Python. So you have nothing to worry about.

 

monty python

And now for something completely different.

 


 

The purpose of this next section is to familiarize you with the interplay between ArcGIS and Python.  

  • In ArcGIS Pro > Catalog > navigate to the “CoyoteToolbox.tbx” toolbox found in the data folder you downloaded from Canvas.
    • A toolbox is a container for storing scripts and tools.

  • Double click on the toolbox to open it. This will reveal a script called “CoyoteScript.”
    • A script is a bunch of code that all gets run at once. In fact, many of the tools you use in ArcGIS are scripts themselves! Think of this like a homemade ArcGIS tool that does exactly what we want it to.
    • This script was written by Chris Garrard and updated by Tyler Hatch for ArcGIS Pro in 2021.
    • The script was deliberately written to show the moving parts. (Chris is well known for her concise and elegant scripts.)

  • Double click on the script.
    • This will open up a window that should be very familiar to you! Just like any ArcGIS tool, it has multiple fields for inputs.

Coyote Script () - This tool takes an input of Coyote data (Points with an 'Animal' field that distinguishes animals). It then uses this data to calculate home ranges and core ranges for each animal. All outputs go into the specified output folder.

 

  • Put in the original coyote point file (without the density RASTERVALU field), choose your output folder, and click OK to run the script.
    • It should take a minute or two to run. Make sure the coyote points attribute table doesn’t have a RASTERVALU field (i.e., don’t run the tool on the point file that already contains the extracted density values).

  • Check your output folder. (The output files don’t add automatically to your map.)
    • Remember to refresh the Add Data window
    • If everything worked correctly, you should now have
      • Core (50%) Polygons for each animal and 
      • General (95%) Polygons for each animal.
  • Put the polygons into your map and evaluate for correctness.

Script from above applied to map. Symbology adjusted manually to draw each animal with the same color group

That was almost like magic, right?

All of the work from the first part of the lab was done seven times over in just a minute!

 

Now, let’s take a deeper look into the script to find out what’s actually happening.

(You didn’t think we were going to let you get away with GIS “Magic” did you?)

Right click the script (in Catalog) and select edit. This will open up the script as a text document.

What you see is a list of commands that runs in order when you press “run” in the tool interface.

whole script

If you’ve never looked at code before, understanding it may seem like a daunting task. It’s easier than you’d think. In a real-world setting, it would be very useful to look at a script you are running, especially if you downloaded the script from somewhere else. 

 

Walking through this line by line will help illustrate what is happening behind the simple user interface and demonstrate how any tool can be modified by learning a little Python.

Compare these snippets to your script window.

script 1

These lines import some libraries into this script. Basically, they let this script use functions that aren’t part of default Python. The important one here is arcpy, which lets this script talk to ArcGIS Pro!

script 2

The first line here lets us use the spatial analyst extension. That’s the extension that has the raster tools like hillshade, slope, and Kernel Density…

The second line changes an environment setting in Arc that lets us overwrite files. This is useful if you are rerunning a tool over and over and don’t want new files (or errors) to pile up.
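
Those two lines probably look something like this (a sketch, not necessarily the exact script):

    import arcpy

    arcpy.CheckOutExtension("Spatial")   # enable the Spatial Analyst extension
    arcpy.env.overwriteOutput = True     # let reruns overwrite existing outputs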

 initial points

save rasters

These lines capture the locations of the files you indicated in the tool (the coyote point file and the output folder). The second snippet shows the script creating paths for two temporary files. Each parameter is stored as a string of text that holds the file location of an input.
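
Something like the following is likely happening; the parameter order and temporary file names here are assumptions, not the exact script:

    import os
    import arcpy

    coyote_points_fc = arcpy.GetParameterAsText(0)   # the point file chosen in the tool dialog
    output_folder = arcpy.GetParameterAsText(1)      # the output folder chosen in the tool dialog

    # Paths for temporary files the script will reuse (names assumed for illustration)
    temp_points = os.path.join(output_folder, "temp_points.shp")
    temp_raster = os.path.join(output_folder, "temp_kd.tif")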

 

with arcpy

This block is a bit more complicated. The first line creates a SearchCursor, which is arcpy's way of reading through a shapefile’s attribute table. Then we create a list called animals. This list will eventually hold all of the unique animal IDs we need to do calculations for. 

In the third line (above), we start a ‘for’ loop that looks at every row (point) in the attribute table and makes a note of which animal that row belongs to. If it’s an animal not already in our list of animals, the new ID is added. For example, the script scrolls down the "Animal" field and collects the unique values: C02, got it; C02, got it; C07, new one, add it to the list; C02, got it; C04, new one, add it to the list...
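
A rough reconstruction of that loop (variable and field names follow the description above; the real script may differ slightly):

    # coyote_points_fc comes from the parameter snippet above (arcpy already imported)
    animals = []   # will hold the unique animal IDs
    with arcpy.da.SearchCursor(coyote_points_fc, ["Animal"]) as cursor:
        for row in cursor:
            if row[0] not in animals:   # a new animal ID? remember it
                animals.append(row[0])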

 

home range list

This creates empty lists that will hold respectively:

  • every home range shapefile (general)
  • every core home range shapefile
  • every temporary general raster file
  • every temporary core raster file.

These will be used later.

 

for animals

This line is very important. It means that for every unique animal ID in our animals list, a set of actions will be performed. The actions we perform for each animal are indented after this line. 

Take a look at the whole script. How much of it is contained within this loop? Answer: the rest of it. Every action from now on will be performed once for each animal.

 

arcpy

This adds a text message into the window where the tool is running to let you know what animal ID is currently being processed. Did you notice this when you ran the tool earlier?

 

coyote

The first line here creates a string of text called coyote_points. This is the path where we will save a temporary shapefile that only has points for the specific animal being processed.
Notice that the second line says "Select_Analysis". This is the Select tool. It selects by attribute on the "Animal" field and creates a temporary shapefile of that individual animal's radio collar locations. 
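
Roughly, and with an assumed file-naming pattern, the snippet inside the per-animal loop might look like:

    # Inside the "for animal in animals:" loop; file naming is assumed
    coyote_points = os.path.join(output_folder, "pts_{}.shp".format(animal))
    arcpy.Select_analysis(coyote_points_fc, coyote_points,
                          "Animal = '{}'".format(animal))   # keep only this animal's points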

 

extract values to points

arcpy.sa.KernelDensity: this runs Kernel Density on each individual animal's points.

sa = the Spatial Analyst toolbox 

 

extract values

arcpy.sa.ExtractValuesToPoints: this runs the Extract Values to Points tool. In the last parentheses you can see that for each animal it runs on coyote_points, uses the corresponding kernel raster, and creates a new output called coyote_points_values. 

 

kernel list

This part gets a bit tricky, so listen up.
First, we create a list called kernel_list. This list will hold all of the kernel density values for the animal currently being processed.
Then we create a search cursor (do you remember what that does?) and read the RASTERVALU values in the coyote_points_values file that was just created. 
All of the kernel density values are put into kernel_list, and the list is reverse sorted.
(Do you remember sorting descending?)
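
A rough reconstruction of that block (the real script may differ slightly):

    kernel_list = []   # density values for the animal currently being processed
    with arcpy.da.SearchCursor(coyote_points_values, ["RASTERVALU"]) as cursor:
        for row in cursor:
            kernel_list.append(row[0])
    kernel_list.sort(reverse=True)   # descending, just like sorting the table earlier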

 

num records

The first line is creating a variable that holds some information.
In this case, that information is the length of the list we just made. The length of this list is the same as the total number of points for this animal.

In the second and third lines, we create two more variables. These variables store the number of records multiplied by 0.5 and 0.95 (sound familiar?).

Finally, we create two more variables. These variables look at the list and find the kernel density value at 50% and 95% of the way down the list.

  • "core_cut" is the core threshold value
  • "home_range_cut" is the general threshold value.

 

raster

Almost done.
Line one finds the maximum value of the kernel density raster.

Note:
When you opened the Reclassify tool to create the binary Core and General rasters using the threshold density values, the classify window was automatically populated with the min and max density values, remember? This line does the same thing the Reclassify tool did for you in the background.

In the second and third lines, we reclassify the kernel density raster twice, like you did before: values below each cutoff become NODATA, and values above it are kept as data values. 

The reclassified (binary) raster outputs are called:

  • core_raster
  • home_range_raster

 

wrapping things up

These eight lines are mostly just wrapping things up.
The first two lines save the core and home_range rasters.

The third and fourth lines add the binary core_raster and home_range_raster to a list to be used in the raster to polygon tool.

Lines 5 and 6 convert the binary rasters to polygons (from the list).

The last two lines add the polygon outputs (home_range and core) to another list we will use at the end.

 

arcpy delete

All of the temporary files are deleted.

 

merged home

These lines are outside of the loop created earlier. That means these lines are only executed once at the very end of the script run. What do you think they do?
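
If you guessed that they merge each animal's polygons into the combined shapefiles you will use later, you are on the right track. A rough sketch (list and output names are assumptions, except All_Home_Ranges_95, which the script does produce):

    # Merge every animal's polygons into two combined shapefiles
    arcpy.management.Merge(home_range_list,
                           os.path.join(output_folder, "All_Home_Ranges_95.shp"))
    arcpy.management.Merge(core_list,   # core output name below is assumed
                           os.path.join(output_folder, "All_Core_Ranges_50.shp"))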

 

if save

And finally, these lines are tied to the inputs of the script itself: the two checkboxes you see when you run the tool. Basically, it’s up to the user whether to save the individual polygons and rasters for each animal (True) or to delete those and save only the merged outputs (False).

 

That’s it. Done.
Give yourself a pat on the back, grab a drink of water, and do a victory lap around the room! 

 

Hopefully you understood most of what was happening, as reading and understanding Python code could play an important role in your future as a geospatial mastermind. 

 

Pop Quiz:

What is the name of the library that lets Python "talk" to ArcGIS?
Which line(s) in the code turn the script's parameters into Python text strings?

Can you imagine situations in which this tool might be applied to your own research or field of study?  Crime, human traffic in recreation areas, disease outbreaks… 

If there are attributes associated with each location, the Kernel Density tool's optional weighting (Population) field can add a dimension to the analysis results.

 

Part 2: Calculating Preference Ratios

Perform a cursory analysis of available vegetation versus actual usage within the general home range.

 

For this part of the analysis you will use LANDFIRE data.

This data is provided in the zipped data folder on Canvas.

Add the landfire vegetation data: us_140evt.tif raster to your map
(evt = existing vegetation type)

Inspect the data.

If everything is going according to plan you can't help but ask yourself:

  • Why is the data not sitting squarely? (maybe yours is, that’s fine…)
  • What is the coordinate system of this data?
  • Is the data in a projected or geographic coordinate system?
  • In what coordinate system is my map displaying?
  • Is the display projected or geographic?
  • What is the cell size/resolution of the raster?
  • What are the values in the contents pane representing?
  • Are there units of measure that define the data?
  • Is this ranked numeric or categorical data?
  • Does the attribute table contain information that is helpful to the analysis goals?
  • What is the overall quality of this data?
  • Which EVT field will I use to describe landcover? (hint: you don’t get to pick randomly, guess, or opt to not know or care)

Continue pushing yourself by using the questions peppered throughout the instructions to reinforce good habits.


 

You need to know:

Raster tools and functions process on the VALUE field. The VALUE is the data encoded in each raster cell. The other descriptive fields in the attribute table are appended/linked based on that VALUE. The landcover data is interesting because we have a VALUE and then different descriptions of the vegetation linked to that value. You should closely inspect the attribute fields for this data so you know what you are working with.

To begin the analysis, all you need is the “All_Home_Ranges_95” shapefile that the script created.

You will analyze the group of animals as a whole for this portion of the exercise.

Maps of general home range of coyote locations

Example of the general home range calculated from the coyote locations in Part 1. Your results may look a bit different.

Name and symbolize the shapefile in a way that helps you understand the data.

 

Preference and selection ratios, the big picture

You are determining the landcover types at the known coyote point locations (and thinking of this as “usage”) and comparing that with the landcover types available in their general home range (and thinking of that as what is “available” to the animals).

If coyotes are using all landcover types indiscriminately, then we would assume they have no vegetative preferences, and we would therefore expect the point locations to be evenly distributed throughout the home range. If half the general home range were covered in water and the other half in grassland, half of the coyote location points should be in the water and half in the grass. BUT, if the coyotes are disproportionately occupying certain landcover types more than others (e.g., none were swimming, all were in the grass), it might be said that they are preferring those vegetative types.

Selection Ratio: what is used, divided by what we would expect to be used if the animal had no vegetative/landcover preference.

Used: You will identify the landcover types at the known coyote point locations using Extract Values to Points.

Expected:

    1. Determine the proportion of each landcover type within the general range
    2. Count the total number of location points
    3. Multiply the total number of points by the proportion of each landcover type within the home range. Selection ratio values greater than 1 indicate a preference. But make sure to think about this: low counts might not be an adequate indication of usage.

 

Calculate Availability

Drawing a polygon around the coyote population points loosely defines an area available to the coyotes.

We will use the General Home Range you calculated in Part 1, as it is a more refined space than a minimum convex polygon.  The script also spit out a merged general home range (All_Home_Ranges_95) that you can use.

Here’s how:

  • Clip the landcover data using the extent of the coyotes’ general home range (the 95% area).
  • Use the Clip (Data Management) or Extract by Mask tool.

Clipping the landcover data with the general home range will help limit the extent of the landcover data and allow us to use the COUNT of each landcover type to determine what landcover types are considered “available” to the coyotes in this population group. From the attribute table you can derive the proportion of each landcover type that makes up the home range.
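
For reference, an Extract by Mask version might look like this (layer and output names are placeholders):

    import arcpy
    from arcpy.sa import ExtractByMask

    arcpy.CheckOutExtension("Spatial")
    evt_clip = ExtractByMask("us_140evt.tif", "All_Home_Ranges_95.shp")
    evt_clip.save("evt_homerange")   # placeholder output name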

  • After clipping the landfire data with the general home range, calculate the proportion of each landcover type within the coyotes’ general home range.
    • Verify the results by sorting the Count field and evaluating (generally) the most dominant landcover types.
    • In short: the attribute table tells you how many cells there are of each landcover type within the general home range (the “count”).
      • Sum the Count field to find the total number of cells
      • Divide each landcover type’s “count” by the “total” number of cells to find the proportion each landcover type represents of the total general home range.
        • In the landcover raster’s attribute table, add a field (float) into which you will calculate proportion
        • Sum the Count field (right click > Statistics)
        • Field Calculate into the new field to divide the veg type Count by the Count Sum
        • Evaluate your results. (For example, use the statistics tool again to check the sum of this new field and verify that it adds to 1).
        • Report the top 15 or so landcover types (actual name and value) and their proportion to the whole. USE THE CLASSNAME field along with the VALUE field.

These results are going in a table for further use and ultimately - submission to Canvas.

Calculate Utilization

Now you need to determine what vegetation or land cover is "used" by the coyotes.

This means you will identify the landcover value at each coyote location point.

Only use the points that fall within the 95% home range.

You have several tools at your disposal to make this happen; a sketch of one option (Clip) follows the list below.

  • Select by Location > Data > Export features,
  • Clip
  • Intersect
  • Select by Location > Reverse the selection > Delete > Save Edits
  • Select the 95% of the points with the highest density values like you did earlier…
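
If you go the Clip route, something like this works (file names are placeholders):

    import arcpy

    # Keep only the coyote points that fall inside the general (95%) home range polygon
    arcpy.analysis.Clip("coyote_points", "All_Home_Ranges_95.shp",
                        "points_in_general_range.shp")   # placeholder names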

 


Once you have the general points isolated:

Run Extract Values to Points.

This tool extracts the landfire vegetation values from the cells below each point's location and writes the vegetation value to the point attribute table.

  • When setting up the tool, check the box to append all the input raster attributes...


 

Open the attribute table of the resulting point file and evaluate your results, verifying that you have a new RASTERVALU field populated with a bunch of numbers that resemble the landfire VALUES that represent landcover types.

  • If you checked the "append raster attributes" box you will also have all the landcover descriptions (handy!)

Again, make sure you are only using the points within the general range.

 

Summarize the frequency of each vegetation type usage

In other words, count the number of times a coyote is found on each vegetation type.

This can be easily done using the Frequency tool.

The input table is your point file containing 95% of the points (that one inside the general range area). It should have a Raster Value field containing 4-digit numbers representing different landcover types.
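
For reference, the Frequency call can be scripted like this (table names are placeholders; you could substitute CLASSNAME for RASTERVALU, as noted below):

    import arcpy

    # Count how many points fall on each landcover value
    arcpy.analysis.Frequency("points_in_general_range_values.shp",   # points with the extracted value
                             "veg_use_frequency.dbf",                # placeholder output table
                             "RASTERVALU")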

 

 

You can instead use the CLASSNAME field for the Frequency Field. This can save a step later.

Frequency results using CLASSNAME.

Intermountain Basins Big Sagebrush Steppe was found under only 3 of the coyote points in the general home range. Not heavy utilization given the number of points included in the analysis.


The sum of the FREQUENCY values should match the number of coyote location points in the general home range.

Make sure your results make sense before moving on.

 

Join Fields to add Available counts

Remember clipping the landcover dataset using the All Home Ranges 95 layer?

That output contains the Count of each landcover type in the general home range.

Raster attribute tables are frequency tables: Unique values and counts of those values.


Join the landcover Counts into your landcover Frequency table. 
To do this you need to identify the field that uniquely matches records in both tables.

This is why I like to use the values and not text fields.
The values are much less likely to have misspellings, extra spaces, or abbreviations that would make it difficult to match up the records.

The input table is your stand-alone frequency table (the one containing the RASTERVALUE or CLASSNAME field). The join table is the landcover raster you clipped using the "All Home Ranges 95" polygon.

 

If you are working with CLASSNAMES and not VALUES, use CLASSNAME as the Join Table Field.
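
For reference, the join can be scripted roughly like this (table and field names are placeholders; swap in CLASSNAME if that is your matching field):

    import arcpy

    # Bring the available-cell COUNT from the clipped landcover raster into the frequency table
    arcpy.management.JoinField("veg_use_frequency.dbf",   # input: your frequency table
                               "RASTERVALU",              # matching field in the frequency table
                               "evt_homerange",           # join table: the clipped landcover raster
                               "VALUE",                   # matching field in the raster attribute table
                               ["COUNT"])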

You’ll end up with something like this:

 

Stop and know what you have here.

  • You have a landcover type (RASTERVALUE and/or CLASSNAME)
  • You know how many times the coyotes were found on each landcover type (FREQUENCY)
  • You know how many cells in the general home range each landcover type covers (COUNT).

This is a good time to get the table out of ArcGIS so you can make some new calculations and improve the presentation.

 

Export your table to a CSV file to work with in Excel

Export is found in the upper right of the table view (at the time of this writing).

In Excel:

The COUNT numbers need to be normalized.
The raw number of cells that are “Barren” is not very helpful, but the proportion of the total general home range that is Barren is meaningful. 

Calculate proportions of the total general home range covered by each landcover type.

  • Add a column to the table and name it something like LC_prop for landcover proportion.
  • Sum the Counts to find the total number of landcover cells in the general home range.
  • Divide each individual landcover Count by the Sum of the counts.

Sum the proportions to evaluate your process. 


 

 

Calculate Selection Ratio

The selection ratio compares the landcover types that are available with those actually being utilized, highlighting landcover types that the animals might be differentially preferring.

You do this by comparing the frequency of coyote points on each landcover type to the frequency we would expect to see (based on the proportion of each landcover type) if the points were evenly distributed across the general home range.

And we figure out the expected frequency by multiplying the total number of coyote points by the proportion of each landcover type.

Example:
  • There are 800 coyote location points in the general home range.
  • 60% of the general home range is covered by grassland.
  • 40% is covered by an enormous parking lot.
  • Multiply 800 by 0.6 (representing the 60%) to get 480.
    This means we expect 480 of the coyote points to be in grassland and 320 (800 * 0.4) to be found on the parking lot... if the coyote location points are distributed evenly across the home range, meaning the coyotes don’t care where they are.
  • But we actually observed 720 points on the grassland and only 80 on the parking lot.
  • We calculate the Selection Ratio by dividing the Observed (Utilized) Frequency by the Expected Frequency.
    • Grassland: 720 observed / 480 expected = 1.5 selection ratio
    • Parking lot: 80 observed / 320 expected = 0.25 selection ratio
Landcover Classname | Available Cell Count | Available Proportion | Used Point Count | Expected Count  | Preference Ratio
Grass               | 1200                 | 1200/6000 = 0.2      | 91               | 106 x 0.2 ≈ 21  | 91/21 ≈ 4.3
Urban               | 3000                 | 3000/6000 = 0.5      | 12               | 106 x 0.5 = 53  | 12/53 ≈ 0.23
Wetland             | 1800                 | 1800/6000 = 0.3      | 3                | 106 x 0.3 ≈ 32  | 3/32 ≈ 0.09
Total               | 6000                 | 1.0                  | 106              | 106 (whew)      |

This table is an example of the math involved - values are simplified to make it easy to follow the process.
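
The arithmetic in the table can be checked with a few lines of plain Python (numbers taken from the simplified example above):

    # Simplified example numbers from the table above
    available = {"Grass": 1200, "Urban": 3000, "Wetland": 1800}   # cell counts in the general range
    used = {"Grass": 91, "Urban": 12, "Wetland": 3}               # coyote point counts

    total_cells = sum(available.values())    # 6000
    total_points = sum(used.values())        # 106

    for landcover, cells in available.items():
        proportion = cells / total_cells           # share of the home range
        expected = total_points * proportion       # points expected if use were random
        ratio = used[landcover] / expected         # selection (preference) ratio
        print(landcover, round(proportion, 2), round(expected, 1), round(ratio, 2))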

Evaluate your results: selection ratios greater than 1 (in general) indicate landcover types that the coyotes are differentially preferring. Ratio values less than 1 indicate landcover types being under-utilized by the coyotes. Values close to 1, or types with low counts, should be scrutinized before including them in results or drawing conclusions from them.


 

Create a table for your submission

Organize your available vegetation types (and proportions), utilized vegetation types, and Selection Ratios for your submission. Sort the final results by selection ratio.

See submission details on Canvas.